Hi Everyone,
It seems that some of you are having a difficult time understanding this predictive technology, or even believing it is possible.
To help, here is an excerpt from George Ure's article explaining how the Webot system (as he calls HPH) works. I hope it is helpful.
This is only an excerpt; the rest is available at:
http://urbansurvival.com/simplebots.htm
Hope the new Avatar is more agreeable to the sensitive eyes out there.
Namaste
PEACE OUT
Web Bot Technology:
He described how the technology worked. A system of spiders, agents, and wanderers travels the Internet, much like a search-engine robot, looking for particular kinds of words. It targets discussion groups, translation sites, and places where regular people post a lot of text.
When a "target word" is found, or something lexically similar, the web bots take a small 2048-byte snip of surrounding text and send it to a central collection point. The collected data at times approached 100 GB sample sizes, and we could have used terabytes. The collected data was then filtered using at least seven layers of linguistic processing in Prolog, reduced to numbers, and finally rendered as a series of scatter-chart plots on multiple layers of Intellicad (
http://www.cadinfo.net/icad/icadhis.htm ). Viewed over a period of time, the scatter-chart points tended to coalesce into highly concentrated areas. Each dot on the scatter chart might represent one word or several hundred.
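The collection step described above can be pictured roughly like this. The sketch below is purely illustrative (the article gives no code, and the function and variable names are invented); it scans a page of text for target words and clips a fixed-size chunk of surrounding context, the way a spider might before sending the snip to a central collector.

```python
SNIP_BYTES = 2048  # the 2048-byte snip size mentioned in the article


def collect_snips(page_text: str, target_words: set[str]) -> list[str]:
    """Return one context snip per target-word hit in page_text.

    Hypothetical sketch: a real spider would also handle fetching,
    lexical similarity, and shipping snips to the collection point.
    """
    snips = []
    lowered = page_text.lower()
    for word in target_words:
        start = 0
        while True:
            hit = lowered.find(word.lower(), start)
            if hit == -1:
                break
            # center the snip on the hit, clamped to the start of the page
            left = max(0, hit - SNIP_BYTES // 2)
            snips.append(page_text[left:left + SNIP_BYTES])
            start = hit + len(word)
    return snips
```

Each snip would then feed the filtering and linguistic-processing stages the article describes.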
To define meanings, words or groups of words have to be reduced to their essence. You know how lowest common denominators work in fractions, right? Well, the process is like looking for least common denominators among groups of words.
The core of the technology, therefore, is to watch how the scatter-chart points cluster, condensing into high "dot density" areas which we call "entities" and then dissolving or diffusing over time as the entities change. Drill down into a dot and you get a series of phrases...
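One simple way to picture the "dot density" idea is a grid-based density check: bucket the 2-D scatter points into cells and call any cell with enough dots an "entity". The grid size and threshold here are invented for illustration; the article does not describe the actual clustering method.

```python
from collections import defaultdict


def find_entities(points, cell=1.0, min_dots=3):
    """Group (x, y) points by grid cell; return the cells dense
    enough to count as an "entity" (a hypothetical sketch only)."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    return {c: pts for c, pts in cells.items() if len(pts) >= min_dots}
```

Run repeatedly over time-sliced data, such a check would show entities condensing and then diffusing, as described above.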
Our first published work in the area occurred in early July of 2001 and is available at
http://www.urbansurvival.com/tip.htm.
What becomes obvious when reading about the technology is that it sometimes reads a bit like the I Ching (the Chinese Book of Changes), because the technology doesn't come out and say "go look for a terrorist attack over there." What it does is give phrases that would be associated with how people talk about an event, or more accurately, how they change their speech to reflect their thought processes after an event.
The web bot technology apparently taps in to an area of preconscious awareness. It's here that you run into the ramifications of Dean Radin's work at the Boundary Institute and the work of the Princeton Global Consciousness Project - both of which Art has talked about on his show.
The Global Consciousness Project registered what appears to have been a disturbance in "the force," or the regular, orderly operation of life, associated with 9/11:
http://www.boundaryinstitute.org/randomness.htm. Supposed "random" numbers generated all over the world appeared to become less random immediately prior to 9/11.
The second point is contained in Dean Radin's paper at
http://www.boundaryinstitute.org/art...mereversed.pdf ("Time-reversed human experience: Experimental evidence and implications"). The mind-bending evidence in Radin's work is that in a laboratory, people begin to react to an event as early as six seconds before it takes place. In other words, if you are about to show someone a horribly grotesque picture of something, they will already be physically reacting to it before the picture actually becomes visible. Up to six seconds or so, and in a lab! In quantum terms, Radin's work demonstrates that people are physically able to perceive six seconds into the future.
Cont @
http://urbansurvival.com/simplebots.htm