I’ll Die With My Buzzer In My Hand IV
An article in ITWorld (via Slashdot) on how IBM’s Watson will “compete” on Jeopardy! next month doesn’t make the machine’s prospects look good, provided, that is, the show’s writers “engineer” their questions correctly.
Watson’s software takes each question and
analyzes it, identifying any names, dates, geographic locations or other entities. It also examines the phrase structure and the grammar of the question for hints of what the question is asking.
Sometimes the question is an obvious one, and a query to a specific database will do the trick. Most times, however, the question will kick off five or 10 searches across different data sources, each an interpretation of what the question might be.
For this challenge, IBM has amassed an immense amount of reference material, including multiple encyclopedias, millions of news stories, novels, plays and other digital books. Some of the material is in structured databases; other material resides in unstructured text files.
The process is iterative. A set of results may require a new set of searches to be undertaken. "So, now you might have hundreds of processes, each generating additional candidate answers. Imagine that fan-out," Ferrucci said. An end-result may have 10,000 sets of possible questions and their corresponding answers.
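The fan-out Ferrucci describes can be sketched in miniature. The code below is a toy illustration of that pipeline, not Watson’s actual method: analyze the clue for entities, run one search per data source (each a different interpretation of the question), then merge the candidate answers and pick the best-scored one. All the data sources, function names, and scoring here are hypothetical stand-ins.

```python
# Toy sketch of a Watson-style fan-out: analyze the clue, spawn one search
# per source, merge candidate answers, rank. Sources and scoring are
# invented for illustration; real Watson used encyclopedias, news, books.
import re
from collections import Counter

# Tiny stand-in "data sources" mapping candidate answers to reference text.
SOURCES = {
    "measures": {"bolt": "a measurement of cloth equal to 40 yards"},
    "trivia": {"bolt": "40 yards of cloth", "furlong": "220 yards"},
}

def analyze(clue):
    """Pull out simple 'entities': numbers and longer keywords."""
    numbers = re.findall(r"\d+", clue)
    keywords = [w for w in re.findall(r"[a-z]+", clue.lower()) if len(w) > 3]
    return numbers, keywords

def search(source, numbers, keywords):
    """One interpretation: score each entry by keyword/number overlap."""
    candidates = Counter()
    for answer_text, text in source.items():
        hits = sum(k in text for k in keywords) + sum(n in text for n in numbers)
        if hits:
            candidates[answer_text] += hits
    return candidates

def answer(clue):
    """Fan out one search per source, merge scores, return the top candidate."""
    numbers, keywords = analyze(clue)
    merged = Counter()
    for source in SOURCES.values():
        merged.update(search(source, numbers, keywords))
    return merged.most_common(1)[0][0] if merged else None

print(answer("This measurement of cloth is equal to 40 yards."))  # → bolt
```

Even this toy version shows why literal clues favor the machine: once the clue is reduced to entities, ranking candidates is just counting overlaps across sources.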
Were I the King of Jeopardy!, I would design the two rounds of play to make a statement. In the first round, I would set up the board so that most if not all of the questions are “literal”: questions about geography (so damned many of them these days on Jeopardy!, it really messes up my game), or dates in history, or clues like
"This measurement of cloth is equal to 40 yards." (Answer: What is a bolt?)
This would give a clear-cut verdict on the most basic “man vs. machine” question: who can parse and process questions and answers about “facts” fastest and most accurately.
The next round, however, would be weighted toward the allusive and elusive questions whose answers the human mind (unless IBM has some huge surprise in store not noted in this article) can parse much more agilely than a computer can. Casting aside all categories with “searchable” facts like measurement / cloth / 40 yards, which yields “bolt” pretty quickly and obviously, they should use the “rhyme time” category (can a computer figure out from “Presidential Rhyme Time” that “George W’s rumps” equals “Bush’s Tushes”?) and questions like “Literary Email Addresses,” the very structure of which (email@example.com) will confound a computer. Thirty questions focused entirely on intuition and context, the two areas in which AI has so far lacked talent, would do much to illustrate where AI needs to go if its makers want it to truly “compete” with “wet brain” information processing.