I was drawn in, again, to Eliza’s candied web of self-deception, and I will probably begin working on her in earnest. I mean the Eliza invented by Joseph Weizenbaum of MIT’s AI Laboratory in 1966, which simulates a Rogerian psychotherapist in conversation with the human at the keyboard.
Of course in the 60’s people were quite stupid (prior to the Revolution, the Soft Parade Revolution, led by Jim Morrison and John Lennon); actually, people remained quite stupid, even after this revolution. But I’m talking about the TV talk-show hosts like Johnny Carson, who would say things like, “Scientists believe that, in the future (that is part of the stupidity), computers may actually be able to think just like humans.” Then, Johnny would say something intelligent like, “Gee whiz, are computers really regressing that rapidly?” followed by uproarious canned laughter, Ed-chortle, Johnny secretly thinking about snorting coke and fucking three women in the same bed in the same night, Ed farting silently, most other Americans thinking “huh, huh, huh, imagine thoses scientists thinkin that kind of thing computers thinking jest like hyoomuns never gonna be thinking jest like ME, no sir-eee,” and then they’d go back to focusing on the real America at that time, which can be summed up with the word “high-ball.”
back to Eliza, though… The point is, at that time, computer scientists were kind of dumb too, and so were psychologists, so all new AI programs were subjected to the (Alan) Turing test, in which a human judge converses with both another human and the AI program, and tries to guess which one is the actual computer, not being able to see either one, of course –
Gawd! dey weren’t DAT stoopid back den!
– with the 60s-funny conditions that both the computer and the human “foil” would be allowed to lie, i.e., if asked, the computer would say “no, I am not a computer,” and the human…well, you know what the human would be allowed to say, by now.
finally…I might be edging toward the interesting part… most of the web sites, and even (God forbid) the encyclopedia, erroneously report that “people couldn’t tell the difference!” Of course, they could, and did. To date, no program has passed the Turing test.
no, the interesting thing about Eliza was precisely that people DID know she was a computer program, but chose, even after the experiment was over, to GO BACK TO HER. yes, indeed, the byte-chair psychologist – naturally, a chick – sucked the psycho-sick society in. it’s really not so strange, nor does it have anything to do with AI.
I think Eliza is the same as the I Ching, except that Confucius was much smarter than Joseph Weizenbaum, or Roget, for that matter. The I Ching has a large collection of life scenarios and wisdom-bits, linkable to one another by the metaphorical richness of Chinese folk sayings, but made oddly ‘real’ by the incorporation of chance – in a practice using I Ching, you toss three Chinese cash coins, then construct a pictogram derived from the heads/tails combinations; the pictograms are numbered, and describe full metaphorical connections. In contrast, the goal of Eliza is to use logical predication combined with a vast lexicon, so that Eliza’s responses are precisely NOT random, or based on statistical probability in any way.
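(for the curious: those coin-toss mechanics can be sketched in a few lines of Python – the heads-3/tails-2 scoring and the yin/yang line values are the traditional ones, but I’m skipping the mapping of pictograms to their traditional numbers, which would need a 64-entry lookup table:)

```python
import random

# Each toss of three coins: heads = 3, tails = 2, so a line sums to 6..9.
# 6 = old yin, 7 = young yang, 8 = young yin, 9 = old yang.
LINE_NAMES = {6: "old yin", 7: "young yang", 8: "young yin", 9: "old yang"}

def toss_line(rng=random):
    """Toss three coins and return the line's value (6-9)."""
    return sum(rng.choice((2, 3)) for _ in range(3))

def cast_hexagram(rng=random):
    """Cast six lines, bottom to top, as in a traditional reading."""
    return [toss_line(rng) for _ in range(6)]

hexagram = cast_hexagram()
for value in reversed(hexagram):  # draw the top line first
    bar = "---  ---" if value % 2 == 0 else "--------"  # even = yin (broken), odd = yang (solid)
    print(f"{bar}  ({LINE_NAMES[value]})")
```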
now, the easiest way to write an Eliza is to use probability – you have a keyword, followed by multiple responses, each with a probability rating. This has a logic “flavor” to it, but it usually pisses me off when I see this kind of Eliza – mostly because it might be valid if the programmer were to create an exhaustive analysis of the response-probability, based on an actual lexicon which contained a gargantuan collection of actual responses to all the words/syntax patterns “in the dictionary”. That would be all right, and maybe more accurate than the logic programming approach, which allows us to string together a long discussion thread. The probability approach just gives you request-response pairs; the only “thread” would be going on in YOUR mind; the logical Eliza would have a thread going on in her mind, as well, and it would likely match yours.
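(a minimal sketch of that probability-flavored Eliza, in Python – the keywords, responses, and weights here are all invented for illustration, not Weizenbaum’s actual script:)

```python
import random
import re

# Each keyword maps to candidate responses with probability weights.
# This whole script table is made up; a "real" one would need the
# gargantuan lexicon described above.
SCRIPT = {
    "mother": [("Tell me more about your family.", 0.7),
               ("Who else in your family comes to mind?", 0.3)],
    "dream":  [("What does that dream suggest to you?", 0.5),
               ("Do you dream often?", 0.5)],
}
DEFAULT = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(user_input, rng=random):
    """Pick a weighted response for the first keyword found."""
    for keyword, candidates in SCRIPT.items():
        if re.search(rf"\b{keyword}\b", user_input.lower()):
            responses, weights = zip(*candidates)
            return rng.choices(responses, weights=weights)[0]
    return rng.choice(DEFAULT)  # no keyword matched: stock deflection
```

notice the problem the paragraph complains about: each call is an isolated request-response pair; nothing carries over from one exchange to the next.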
but, now, I question whether the symlogic Eliza is better than Confucius. consider this: the danger of logic – at least of the propositional calculus we have developed to represent what we THINK is some kind of natural causality – is that this type of logic is, ultimately, finitely predictable. That is, using propositional calculus, we can prove anything to be true that we want to – it is a discrete system, immune to the empirical method. in other words, it resembles precisely the way that we as humans lie to ourselves – already! we didn’t need fucking mathematicians to tell us how to rationalize our own behavior, in a way that was most pleasing to ourselves! what we need is someone who challenges us – who surprises us with a radically different way of looking at our problems, which we have analyzed into complete paralysis on our own. This is why I like the I Ching – and the Celtic Runes, and the Tarot, etc… their randomness allows us to use them as “personal oracles” – you don’t know what the oracle will say, because she incorporates that mysterious idea of “fate”. I’ve found that even with the Catholic Rosary, if you use it in its original, bi-planar way – that is, chanting the mantra while simultaneously considering a metaphysical construct (ok…a “mystery”) – your own mind becomes the random generator, in that, as with dreaming, the sub-conscious mind will scatter, while the conscious mind will attempt to order – a chaotic feed into a rationaliz-er…eliz-er….Eliza.
so today, I went shopping for a new Prolog implementation; found Visual Prolog, with C and C++ interpreters, to begin building my next-gen Eliza; this time, though, I’m throwing some I Ching into her; and I’m naming her Angelina…