To Err is Human
The Economist has a brief piece on AI in games and a “Turing Test” for game AI. It’s a contest organized by the IEEE at a “symposium on computational intelligence and games,” called the 2K BotPrize (Take-Two Interactive is the parent company of 2K, creator of “BioShock,” which even a non-gamer like me has heard of). The goal is not for the judges to converse with an AI to determine whether or not it’s human, but to play a game against it. The article doesn’t mention the game in question, but the contest site does: “The game used for the competition will be based on a modified version of the DeathMatch game type for the First-Person Shooter, Unreal Tournament 2004.” From the article:
The aim is to trick human judges into thinking they are playing against other people in such a game. The judges will be pitted against both human players and “bots” over the course of several battles, with the winner or winners being any bot that convinces at least four of the five judges involved that they are fighting a human combatant. Last year, when the 2K BotPrize event was held for the first time, only one bot fooled any judges at all as to its true identity—and even then only two of them fell for it.
Computers can, of course, be programmed to shoot as quickly and accurately as you like. To err, however, is human, so too much accuracy does tend to give the game away. According to Chris Pelling, a student at the Australian National University in Canberra who was one of last year’s finalists and will compete again this year, a successful bot must be smart enough to navigate the three-dimensional environment of the game, avoid obstacles, recognise the enemy, choose appropriate weapons and engage its quarry. But it must also have enough flaws to make it appear human. As Jeremy Cothran, a software developer from Columbia, South Carolina, who is another veteran of last year’s competition, puts it, “it is kind of like artificial stupidity”.
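The “artificial stupidity” idea can be sketched in a few lines: take a bot that aims and reacts perfectly, then inject the error and delay a human would show. This is purely illustrative, not drawn from any actual BotPrize entry; the function names and numbers are my own assumptions.

```python
import random

# Hypothetical sketch of "artificial stupidity": a perfectly accurate bot
# made believable by injecting human-style error. Names and constants are
# invented for illustration, not taken from any real competition bot.

def human_like_aim(true_angle, skill=0.8):
    """Perturb a perfect aim angle with Gaussian noise; lower skill widens the miss."""
    jitter = random.gauss(0, (1.0 - skill) * 10.0)  # degrees of aim error
    return true_angle + jitter

def reaction_delay(skill=0.8):
    """A bot can fire instantly, which is a dead giveaway; humans need ~250 ms."""
    base = 0.25  # seconds, a rough typical human reaction time
    return base + random.uniform(0, (1.0 - skill) * 0.3)
```

With `skill=1.0` both functions collapse back to machine-perfect behavior, which is exactly what would give the game away.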
This is much easier than fooling a human in conversation. Game players, like athletes, “know” through thousands of hours of experience the typical patterns of certain kinds of players, so it’s easier to code these patterns, including errors and “tells,” into a game-playing AI than it would be to accommodate the infinite possibilities of conversation. Think of Bart Simpson playing rock, paper, scissors, telling himself, “Good ol’ rock, nothing beats that!” – while Lisa thinks, “Poor predictable Bart, always takes rock.”
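Lisa’s side of that exchange is simple to write down: track the opponent’s history and play whatever beats their most frequent move. A toy sketch (names and logic are mine, just to make the point concrete):

```python
from collections import Counter

# Toy sketch of exploiting a predictable pattern, Lisa-style: counter the
# opponent's most common past move. Purely illustrative.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(history):
    """Play whatever beats the opponent's most frequent past move."""
    if not history:
        return "rock"  # no information yet; pick anything
    most_common = Counter(history).most_common(1)[0][0]
    return BEATS[most_common]
```

Against poor predictable Bart, the history is all rock, and the counter is always paper.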
Games have always been the best arenas for AI because the rules fix how and when you can move, so only finite sets of actions are available – even for errors. Ask a conversational AI a question like “why are you blue?” and it has no table to consult for its next conversational “move” – the typical chatbot programmer just lets the bot “branch to non sequitur.” (“I think you are silly!”) Now, if they add a chat window to the competition and demand something more complex than “all ur bases r belong to us” as interaction, that will be a great leap forward.
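The “branch to non sequitur” trick amounts to a lookup table with a canned dodge for everything outside it. A hypothetical sketch, not taken from any real chatbot:

```python
import random

# Illustrative sketch of the "branch to non sequitur" fallback: answer from
# a small table of known patterns, and dodge anything else with a canned
# line. Entirely hypothetical.

RESPONSES = {
    "hello": "Hi there!",
    "how are you": "I'm fine, thanks for asking.",
}

NON_SEQUITURS = ["I think you are silly!", "Let's talk about something else."]

def reply(message):
    key = message.lower().strip("?!. ")
    if key in RESPONSES:
        return RESPONSES[key]
    return random.choice(NON_SEQUITURS)  # no table entry: dodge the question
```

A question like “why are you blue?” has no table entry, so the bot simply changes the subject, which is exactly why conversation is so much harder to fake than a deathmatch.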