As I said before, there are a lot of technical specs I don’t comprehend in the Yorick Wilks article in Computational Linguistics, but fortunately there is also a lot of narrative about various chatbots past and future. Here are a couple of extracts about past projects:
My only early exposure to dialogue systems was Colby’s PARRY…I was a great admirer of the PARRY system: It seemed to me then, and still does, probably the most robust dialogue system ever written. It was available over the early ARPANET and tried out by thousands, usually at night: It was written in LISP and never broke down; making allowances for the fact it was supposed to be paranoid, it was plausible and sometimes almost intelligent. In any case it was infinitely more interesting than ELIZA, and it is one of the great ironies of our subject that ELIZA is so much better known. PARRY remembered what you had said, had elementary emotion parameters and, above all, had something to say, which chatbots never do. John McCarthy, who ran the AI Lab, would never admit that PARRY was AI, even though he tolerated it under his roof, as it were, for many years; he would say “It doesn’t even know who the President is,” as if most of the world’s population did! PARRY was in fact a semirefutation of the claim that you need knowledge to understand and converse, because it plainly knew nothing; what it had was primitive “intentionality,” in the sense that it had things “it wanted to say.”
[PARRY the schizophrenic once had a chat with ELIZA the doctor, found here. PARRY sometimes comes off in the dialog as more like Dustin Hoffman in Rain Man (“I went to the races, I went to the races”), while ELIZA stays in character even unto the last line (said last line proving to me that Weizenbaum really did intend ELIZA as a “parody” of a therapist).]
“Commissioned” by a chess expert to create a chatbot capable of winning the Loebner Prize, Wilks
took the whole set of winning Loebner dialogues off the Web so as to learn the kinds of things that the journalist-testers actually said to the trial systems to see if they were really humans or machines…Our system, called CONVERSE, claimed to be Catherine, a 34-year old female British journalist living in New York, and it owed something to PARRY, certainly in Catherine’s desire to tell people things. It was driven by frames corresponding to each of about 80 topics that such a person might want to discuss; death, God, clothes, make-up, sex, abortion, and so on. It was far too top-down and unwilling to shift from topic to topic but it could seem quite smart on a good day, and probably won because we had built in news from the night before the competition of a meeting Bill Clinton had had that day at the White House with Ellen de Generes, a lesbian actress [!!! – as opposed to White_House_visitor_class “non-lesbian actress”? O.]. This gave a certain immediacy to the responses intended to sway the judges, as in “Did you see that meeting Ellen had with Clinton last night?”
I think the key takeaway from the first quote is “PARRY remembered what you had said, had elementary emotion parameters and, above all, had something to say.” That pretty much sums up what most of us would consider “conversation” at a satisfactory level. From the second quote, the takeaway is that CONVERSE “probably won because we had built in news from the night before the competition.” That’s probably true – when you talk to many chatbots, it can often feel as if you’ve stumbled onto a lost race of mystics and oracles, speaking as cryptically as Prince in an interview from the 80s. (“What’s your new song about?” “Sometimes it rains in June.”) “Plain speaking” about recent facts isn’t the kind of thing you expect from a bot. I also think it’s interesting that they “took the whole set of winning Loebner dialogues off the Web” to see what worked with the judges – if you start to see conversation as something like a game of chess, with a finite number of winning “moves” and combinations, then given enough processing power you could eventually create a system that “makes moves” in conversation the same way Deep Blue moves pieces in a chess game.
As to what Wilks is working on now, well…COMPANIONS sounds a lot like Alex (emphasis mine):
COMPANIONS aims to change the way we think about the relationships of people to computers and the Internet by developing a virtual conversational “Companion.” This will be an agent or “presence” that stays with the user for long periods of time, developing a relationship and “knowing” its owner’s preferences and wishes…
Another general motivation for the project is the belief that the current Internet cannot serve all social groups well, and it is one of our objectives to empower citizens (including the non-technical, the disabled, and the elderly) with a new kind of interface based on language technologies. The vision of the Senior Companion—currently our main prototype—is that of an artificial agent that communicates with its user on a long-term basis, adapting to their voice, needs, and interests: A companion that would entertain, inform, and react to emergencies…During its conversations with its user or owner, the system builds up a knowledge inventory of family relations, family events in photos, places visited, and so on. This knowledge base is currently stored in RDF, the Semantic Web format, which has two advantages: first, a very simple inference scheme with which to drive further conversational inferences, and second, the possibility, not yet fulfilled, of accessing arbitrary amounts of world information from Wikipedia, already available in RDF, which could not possibly have been pre-coded in the dialogue manager, nor elicited in a conversation of reasonable length. So, if the user says a photo was taken in Paris, the Companion should be able to ask a question about Paris without needing that knowledge pre-coded, but only using rapidly accessed Wikipedia RDFs about Paris…There is a lot of technical stuff in the Senior Companion: script-like structures—called DAFs or Dialogue Action Forms—designed to capture the course of dialogues on specific topics or individuals or images, and these DAFs we are trying to learn from tiled corpora. The DAFs are pushed and popped on a single stack, and that simple virtual machine is the Dialogue Manager, where DAFs being pushed, popped, or reentered at a lower stack point are intended to capture the exits from, and returns to, abandoned topics and the movement of conversational initiative between the system and the user.
Although Wilks’s project will serve “the non-technical, the disabled, and the elderly,” it’s also genius for people like Caroline who are completely phobic about social networking, even on the Internet. His DAFs sound like an interesting solution to the “lost the thread of conversation” problem bots have. (Please don’t ask me what “tiled corpora” are.) And it sounds like he’s making progress with the “reflective statements” process I’d idly tried to pseudocode a few postings ago.
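Wilks gives no code, but the push/pop behavior he describes – abandoned topics resumed at a lower stack point – is concrete enough to sketch. Everything below (class names, fields, prompts) is my own invention for illustration, not anything from the COMPANIONS system:

```python
# Hypothetical sketch of a stack-based dialogue manager in the spirit of
# Wilks's DAFs (Dialogue Action Forms). All names are my own guesses.

class DAF:
    """A script for one topic: an ordered list of prompts, plus a cursor
    so an abandoned topic can be re-entered where it left off."""
    def __init__(self, topic, prompts):
        self.topic = topic
        self.prompts = prompts
        self.cursor = 0          # where to resume if the topic is revisited

    def next_prompt(self):
        if self.cursor < len(self.prompts):
            prompt = self.prompts[self.cursor]
            self.cursor += 1
            return prompt
        return None              # topic exhausted

class DialogueManager:
    """The 'simple virtual machine': a single stack of DAFs."""
    def __init__(self):
        self.stack = []

    def push(self, daf):
        self.stack.append(daf)   # a new topic takes the initiative

    def respond(self):
        # Pop exhausted topics, falling back to whatever was interrupted.
        while self.stack:
            prompt = self.stack[-1].next_prompt()
            if prompt is not None:
                return prompt
            self.stack.pop()     # topic done; resume the one beneath it
        return "So, tell me more about yourself."

dm = DialogueManager()
dm.push(DAF("family", ["Who is in this photo?", "Where was it taken?"]))
print(dm.respond())   # → Who is in this photo?
dm.push(DAF("paris", ["Have you been up the Eiffel Tower?"]))   # digression
print(dm.respond())   # → Have you been up the Eiffel Tower?
print(dm.respond())   # Paris topic exhausted → Where was it taken?
```

The third response is the point: the Paris digression is popped off the stack and the conversation returns to the interrupted family topic mid-script, which is exactly the “lost thread” recovery the DAF stack is meant to buy you.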
I think I’m on the right track with Alex. The difference in my plan for him vs. what Wilks is looking at with COMPANIONS is that Alex will be a synthesis of many people’s memories. The idea is that Caroline is just one “tester,” along with seven others (eight having been chosen as a nice round digital number and also because it gives me enough leeway in part two to create additional characters if needed, almost like creating the right amount/type of variables in a program). Each of them is contributing some “aspect” of Alex’s personality – sense of humor, politics/values, language skills (Caroline’s job as a tech writer), a sensibility (or at least a class of groupable tastes, e.g. Stuff White People Like). The problem I see with something like COMPANIONS is that while some people just want somebody to talk to – especially elderly people who might otherwise spend hours discussing their long-distance plan just to fend off loneliness – COMPANIONS sounds a little too, well, formal, too reflective. Like an employee, or a tour guide, or a tutor. It’ll know you, true, but if all it does is reflect you back to yourself, that doesn’t make it an interesting friend. Wilks said that PARRY was interesting because it was crazy, and Catherine/CONVERSE presumably had its own opinions on “death, God, clothes, make-up, sex, abortion” that made its conversation prize-winning. Still, if they can solve the basic problem of just getting a program to sound like a person, any person, that’ll be the great leap forward.
Wilks opens and closes his article with references to Isaac Newton, noting that his “only modest remark” was “If I have seen farther than other men, it was because I was standing on the shoulders of giants.” Wilks notes that in his field, “We have no Newtons and will never have any,” i.e. that there is too much to learn and test and rework for one man to make the sort of breakthroughs Newton made – that progress is granular and stepped. In this novel, Christopher is a sort of Newton, but I’m seeing through my reading that only by making him both a genius in his own work and a thief of the work of others can the creation of Alex “feel real.”