I’ll Die With My Buzzer In My Hand
Interesting article in the New York Times today, especially for someone who’s taken the Jeopardy! test three times and still can’t get on the show. Not for lack of brains; the first time I took it was when the old rules applied – you got picked at random from email contestant requests, then they winnowed a roomful of people in San Francisco to three finalists based on the 50-question written test, at which point all three of us were told we were in the contestant pool for a year. The next two times, I passed the new online test to qualify for the in-person test, and traveled to SF for one and Portland for the next. By then, they’d moved to the “everybody gets a gold star” process: after taking the written test, everybody in the room got to play a sample game against other contestants, and we were all told, “We’ll call you.” No one’s ever called.

Funny story – each time you’re up there playing, you get interviewed by the contestant search team, and every time I’ve done the test, they ask the same question: “What would you do if you won a ton of money on the show?” I used to think I should answer that question with something genius-granty, so I’d say, “I’d take the money and go to France to research my historical spy novel.” Finally, last time I just said, “I’d become a gentleman of leisure.” It went over well, but I guess not well enough to get me on the show.
It’s been one of the great difficulties of my life to realize that brains alone aren’t going to get me on Jeopardy! In addition to the curse of being one of about a trillion middle-aged white guys who technically qualify for the show at a time when the show is striving for diversity, I think I just don’t have the “persona” they’re looking for – whatever that might be.
Now it seems that an artificial intelligence may be ready for prime time:
I.B.M. plans to announce Monday that it is in the final stages of completing a computer program to compete against human “Jeopardy!” contestants. If the program beats the humans, the field of artificial intelligence will have made a leap forward.
I.B.M. scientists previously devised a chess-playing program to run on a supercomputer called Deep Blue…But chess is a game of limits, with pieces that have clearly defined powers. “Jeopardy!” requires a program with the suppleness to weigh an almost infinite range of relationships and to make subtle comparisons and interpretations. The software must interact with humans on their own terms, and fast.
Indeed, the creators of the system — which the company refers to as Watson, after the I.B.M. founder, Thomas J. Watson Sr. — said they were not yet confident their system would be able to compete successfully on the show, on which human champions typically provide correct responses 85 percent of the time.
“The big goal is to get computers to be able to converse in human terms,” said the team leader, David A. Ferrucci, an I.B.M. artificial intelligence researcher. “And we’re not there yet.”
The team is aiming not at a true thinking machine but at a new class of software that can “understand” human questions and respond to them correctly. Such a program would have enormous economic implications.
It’s articles like this that make me realize I need to get cracking on the novel – before reality overtakes fiction. Although of course the computer has the advantage of never forgetting to phrase the answer in the form of a question:
The real difficulty, [Eric Nyberg, a computer scientist at Carnegie Mellon University] said, is not searching a database but getting the computer to understand what it should be searching for.
The system must be able to deal with analogies, puns, double entendres and relationships like size and location, all at lightning speed.
Still, there are few questions in Jeopardy! that an AI will have problems with. For example, many “answers” contain two parts that let you work out the “question”: the clue that will give the person with direct knowledge the answer, and the allusion or pun that will help out anyone with general subject familiarity who can make the connection – such as (I’m inventing here) “His famous thought experiment on quantum mechanics really did (or didn’t) let the cat out of the bag.” A human who doesn’t know diddly about quantum mechanics may still be familiar with the idea of Schrödinger’s Cat, and be able to extract the answer without “real” knowledge of the field. But an AI can search and match “quantum mechanics” with “cat” to arrive at the answer with the highest probability, even without knowing WTF a cat has to do with the subject.
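That brute-force matching idea can be sketched in a few lines of toy code. To be clear, this is pure illustration with an invented mini knowledge base – nothing like Watson’s actual architecture – scoring my made-up Schrödinger clue against candidate answers by simple keyword overlap:

```python
# Toy sketch of answering-by-keyword-overlap. The knowledge base,
# stopword list, and clue are all invented for illustration; a real
# system would use far more sophisticated retrieval and scoring.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "on", "or", "his", "this", "it",
             "really", "did", "let", "out"}

def tokens(text):
    """Lowercase word tokens, minus common stopwords."""
    return [w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS]

def score(clue, article_text):
    """Count how many distinct clue keywords appear in a candidate article."""
    article_words = Counter(tokens(article_text))
    return sum(article_words[w] > 0 for w in set(tokens(clue)))

# Hypothetical mini knowledge base: answer -> snippet of text about it.
knowledge = {
    "Schrodinger": "Erwin Schrodinger devised a quantum mechanics "
                   "thought experiment about a cat in a sealed box.",
    "Einstein": "Albert Einstein developed relativity and debated "
                "quantum mechanics with Bohr.",
    "Pavlov": "Ivan Pavlov ran conditioning experiments with dogs.",
}

clue = ("His famous thought experiment on quantum mechanics "
        "really did let the cat out of the bag")

best = max(knowledge, key=lambda name: score(clue, knowledge[name]))
print(best)  # -> Schrodinger: matches quantum, mechanics, thought, experiment, cat
```

The machine never needs to “get” the cat-out-of-the-bag pun; the words “quantum,” “cat,” and “thought experiment” simply co-occur most heavily in one candidate’s text.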
In fact, there are few questions in Jeopardy! that wouldn’t be solvable with the application of Big Blue force, since the essential knowledge for answering every question is contained in the answer – some answers may have “giveaways” like the cat in the bag, but all are answerable without the hint, given sufficient knowledge of the field. The system’s internal database is of unspecified size, and no Internet access will be provided, but a complete copy of Wikipedia wouldn’t tax a large computer’s disk space, and that would probably be just the start of its body of knowledge.
The kinds of questions in which a human has a clear advantage are high-cognitive-linguistic categories like “Before and After” and “Rhyme Time,” in which the ability to come up with answers like “George Washington Apple” or “Egyptian Prescription” is still, I imagine, beyond the ken of AI.
The article states that Watson successfully identified Lebanon from the statement, “Bordered by Syria and Israel, this small country is only 135 miles long and 35 miles wide,” but “stumbled when it decided it had high confidence that a ‘sheet’ was a fruit.” It’s unfortunate that the article didn’t give us the clue that led to the mistake, since it would have been instructive about what kinds of questions Watson can’t parse.
And who might Watson go up against in a man-vs-machine death match? Why, who else but the John Henry of Jeopardy!, Ken Jennings.