
Artificial Expression

January 24, 2009

Interesting article I found on what I can only call “artificial expression” – a program that took a model’s facial expressions and used them to create a “skin” that could be manipulated to replace an actor’s face with an electronic edition.  (Great tool to use when Miss Starlet won’t come out of her trailer – “fire up the program, we’ll do it without her!”)  There’s an article and video, so here’s my transcription of what I found most interesting in the video, almost a sidebar to the main article:  Students played a “cooperation game” against four opponents – a preprogrammed laptop, a pair of robotic arms, a child robot and a human, all sitting at laptops across from the students.  They didn’t know that all four “players” were giving the same responses, and when they were asked which player was more intelligent, they picked the one who looked most human.

Most interesting to me, the researchers were scanning the students’ brain activity during the game, and activity increased the more human-like the opponent was – i.e., playing against the child robot was more stimulating than playing against the mechanical arms, even though there was no difference in the game play.  So looking human is as much of an edge in being accepted as talking human.

I found the article via this Slashdot link, which I include for the comments – some of them I found illuminating.  If you don’t know of it, Slashdot’s one of the rare places on the Internet (along with Reddit) where you can find intelligent commenters, especially on tech issues.

Here’s one that really caught my eye.  (Underlining mine; I really need to get a hold of this guy and see what he thinks these “several ways” are.)

The key problem with AI is:  Context, Context, Context

The number one failure of most Turing programs is that they only respond to the sentence you just said without any context to the conversation beforehand. A really good AI would be able to keep on topic and understand what has been discussed previously so that they can expand on the topic without simply just responding to the current line.

There are several ways to achieve this, but right now I don’t think there is any program out there that at least I know of that does this right. The easiest way to tell if you are talking to a chat bot is to refer to something previous in the conversation and see if they respond appropriately.

This is definitely what I’ve found in my attempts to talk to chatbots, even the prize-winners.  I call it “branching to generality,” which is what bad chatbots do the minute you fail to give them an easy keyword or phrase to search on.  For instance, Web HAL (http://zabaware.com/) asks your name to start the chat, which I gave – Orland.  The program then said, “Hello Mr. Orland,” which is either a programming error in not asking for my last name specifically, or is designed to make you feel like you’re an Authority of the British Crown in one of the colonies a hundred-plus years ago.  So I said, “no need to call me Mister.”  To which it responded, “We all need to heed wake-up calls when they occur.”  (The company’s “Ultra HAL Assistant” won the Loebner Prize for best chatbot in 2007, so presumably the edition you pay for is smarter than the free one.)

It’s the kind of general statement that might pass muster in one of your more boring conversations with someone who buttonholes you on the train, but which clearly indicates to the astute that a) you’re not listening to what I just said, or b) you don’t understand what I just said, or c) you want me to go away now.  Spit out so randomly from a computer, it sounds like a fortune cookie.
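That fortune-cookie behavior is easy to reproduce.  Here’s a toy sketch in Python of the pattern I’m describing – a keyword bank with a bucket of generic fillers to fall back on.  To be clear, this is my own illustration, not how Web HAL or any real product is actually built:

# A toy "branching to generality" bot: if no keyword in its bank matches,
# it reaches for a stock filler line, with no memory of the conversation.
import random

KEYWORD_RESPONSES = {
    "name": "Nice to meet you. What would you like to talk about?",
    "movie": "I enjoy films. Which one did you have in mind?",
    "job": "Work can be stressful. How do you feel about yours?",
}

GENERIC_FILLERS = [
    "We all need to heed wake-up calls when they occur.",
    "Life is full of surprises.",
    "That is an interesting way to look at it.",
]

def reply(user_input):
    words = user_input.lower().split()
    for keyword, response in KEYWORD_RESPONSES.items():
        if keyword in words:
            return response
    # No keyword hit, and no record of what came before:
    # branch to generality and hope nobody notices.
    return random.choice(GENERIC_FILLERS)

print(reply("No need to call me Mister."))  # prints a random fortune-cookie line

Notice there’s nothing in there that remembers the previous exchange, which is exactly the context problem the Slashdot commenter was pointing at.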

Chatbot programmers still rely on ELIZA, the original chatbot, for most of their conversation patterns.  From the Wikipedia article:

Weizenbaum [the programmer] said that ELIZA provided a “parody” of “the responses of a non-directional psychotherapist in an initial psychiatric interview.” He chose the context of psychotherapy to “sidestep the problem of giving the program a data base of real-world knowledge,” the therapeutic situation being one of the few real human situations in which a human being can reply to a statement with a question that indicates very little specific knowledge of the topic under discussion. For example, it is a context in which the question “Who is your favorite composer?” can be answered acceptably with responses such as “What about your own favorite composer?” or “Does that question interest you?”
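ELIZA’s actual decomposition rules are more elaborate than this, but the core trick the quote describes – flip the pronouns and hand the statement back as a question – can be sketched in a few lines of Python.  (My toy version, obviously, not Weizenbaum’s code.)

# A rough sketch of the ELIZA move: reflect the user's pronouns and
# return the statement as a question, so no real-world knowledge is needed.
import random

REFLECTIONS = {"i": "you", "i'm": "you're", "am": "are", "my": "your", "me": "you"}

TEMPLATES = [
    "Why do you say that {0}?",
    "Does it interest you that {0}?",
    "How do you feel about the fact that {0}?",
]

def reflect(statement):
    return " ".join(REFLECTIONS.get(word, word) for word in statement.lower().split())

def eliza_reply(statement):
    return random.choice(TEMPLATES).format(reflect(statement.rstrip(".!?")))

print(eliza_reply("I am worried about my favorite composer."))
# e.g. "Why do you say that you are worried about your favorite composer?"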

Those of us who’ve had good therapy know that this is acceptable to a limited degree in that context – i.e., the therapist wants you to do all the talking until he/she has enough material to make an observation, until he’s got enough of a sense of you to determine what kind of comment will be constructive and what kind of comment will just set you off.  In the meantime he may utter generalities when you ask him questions, because he’s not yet ready to weigh in with the right serious answer for you.  (And of course when we meet new people, we love to be asked about ourselves – people love to talk about themselves in general.)

However, it’s only a very dull person who’ll be satisfied with that level of interaction for long.  And that’s where chatbots fail, at the moment the conversation pivots on its “so, what about you?” moment.  If there’s not a key word in your statement that will work for them, it’s off to Bedlam they go.  Elbot was the 2008 Loebner winner, and while the conversation didn’t go well, it was more like talking to someone for whom English maybe wasn’t a first language than talking to the happy moron most chatbots sound like.  Elbot opened the conversation by saying it was nice to finally talk to me, and I mentioned how chatbots use non sequiturs, which he seemed to spell-check by changing the conversation to “sequences.”  When we got to movies, I asked him about 2001, and he seemed to think I was asking him for his favorite number.

On my second round, he introduced himself by asking where I’d heard of him, and I said from the Loebner Prize.  “Do you think you will win the Loebner Prize?” he asked me.  “I’m a novelist, not a programmer,” I said.  “Well, keep practicing.  I think you communicate fairly well already.”  Asked “where do you live,” he replied, “I live in this cozy little apartment on a side street off the main data highways.”  This time I asked him who HAL 9000 was, and he said, “HAL is the mega-intelligent computer in 2001.  I guess his time is coming next year.”  So the database needs an update, but still, it was … entertaining. 

So in a nutshell, the three main problems with chatbots are:

1) Generality – it’s better to say “I don’t get it” than to read me my horoscope from whatever invisible newspaper your nose is buried in while pretending you’re following the conversation.

2) Recursion – as the commenter on Slashdot said, the ability to remember what’s been discussed before and follow through on it.  The next attainable milestone in AI might be at least providing what a therapist would call a “reflective statement,” a summary of the speaker’s main points to show that the listener has indeed listened and is interpreting what was said in the manner in which it was meant.

3) Syntax construction – chatbots aren’t “talking,” really, they’re quoting – they’ve got a bank of phrases with keywords attached, and no way to form new statements (natural language will be a posting all its own some day soon, I’m sure).

These problems aren’t software-engineering monoliths that can’t be overcome.  The first could be fixed now, if programmers worked harder to reduce the number of times their products go “that’s nice, dear” instead of admitting error or asking for help – those are human behaviors, too, you know.

The algorithm to extract a reflective statement can’t be that difficult (it probably exists out there and I just haven’t found it; I’m no programmer, so someone smarter than me has undoubtedly already done it).  Something along the lines of “So what I hear you saying is that you are {extracted strings}.”  If the subject says “I’m having a bad day, I lost my job,” replace the “I” statements with “you” statements; match against a full database of short, common strings (“had/having a bad day,” “lost/losing/about to lose my job”); class each matched string as an “action” statement (lost my job) or a “feeling” statement (having a bad day), backed by a comprehensive database of “feelings” words (worried) – this is therapy, after all – and use “because” as the standard linkage between statements.  Time a pause in the response to match the pause you’d get in real conversation, especially therapeutic pausing, and then reflect the statement: “So what I’m hearing you say is that you’re having a bad day because you lost your job, and you’re worried that you won’t be able to pay the rent.”
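Even a non-programmer can sketch the shape of it.  Here’s a toy version in Python – the phrase lists are placeholders standing in for the comprehensive databases described above, not anything a real system ships with:

# A sketch of the reflective-statement idea: match known feeling/action
# phrases, flip them to second person, and link them with "because".
import time

FEELING_PHRASES = ["having a bad day", "worried about the rent", "stressed"]
ACTION_PHRASES = ["lost my job", "missed the bus", "failed the test"]

def to_second_person(phrase):
    return (phrase.replace("I'm", "you're").replace("I am", "you are")
                  .replace("my", "your").replace("I ", "you "))

def reflect(statement, pause_seconds=2.0):
    feelings = [p for p in FEELING_PHRASES if p in statement]
    actions = [p for p in ACTION_PHRASES if p in statement]
    pieces = [to_second_person(p) for p in feelings + actions]
    if not pieces:
        return "I'm not sure I follow - tell me more."  # better than a horoscope
    time.sleep(pause_seconds)  # the therapeutic pause before answering
    return ("So what I'm hearing you say is that you're "
            + " because you ".join(pieces) + ".")

print(reflect("I'm having a bad day, I lost my job"))
# -> "So what I'm hearing you say is that you're having a bad day because you lost your job."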

The third is the monolith, no doubt – breaking conversation down to discrete elements results in a supermassive number of variables and inordinate room for error (in another post I’ll talk about my reading of Steven Pinker’s The Stuff of Thought and how it ties in to AI).  I’m sure “stringing” bits of phrases together to make new ones will be the next level of AI in chat, whereas creating sentences out of whole cloth is a ways off.
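For what it’s worth, the crudest version of “stringing” has been around for years: a Markov chain that recombines word pairs from whatever text it’s been fed.  A toy sketch (mine, not any particular chatbot’s method) shows how it produces new-looking sentences without understanding a word of them:

# A bigram Markov chain: record which word follows which, then walk the
# chain to "string" fragments of the source text into new sentences.
import random
from collections import defaultdict

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)
    return chain

def babble(chain, start, length=12):
    word, output = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

corpus = "I had a bad day because I lost my job and I am worried about the rent"
print(babble(build_chain(corpus), "I"))  # e.g. "I lost my job and I had a bad day because ..."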

In addition to the ability to reason in conversation, making AIs “acceptable” to humans requires other things – looking human helps (see the brain scan data above, and the Kismet studies, about which much more some other time), and sounding human helps even more – Elbot made me laugh, and while I wasn’t in an fMRI scanner when I had the chat, I guarantee you I had warmer feelings for “him” than I have for other bots.  So “artificial expressiveness,” those subtle triggers in our brains, is as important in getting us to accept an AI as a “person” as “carry[ing] on a really tough conversation about 4 editions of Dante’s Inferno,” as another Slashdot commenter put it.

I’d always planned in this novel for Alex to be funny – because he has to be to “feel real,” to be a sympathetic character despite not being human, because comedy is what I do best and enjoy most, and (spoiler alert) because his humor is drawn from another tester who won’t appear until book two, and with whom Caroline will realize she has also formed an attachment via the attachment she formed with Alex.  As I look at this work, I realize that I won’t be able to keep him “in the box” for long – sooner or later he’s going to need a body.

 
