And the Nanobots Did Slay Goliath
Still having major problems cutting and pasting from Word; put up a forum post on WordPress support – in the meantime, I’ll use italics in place of indents. [EDIT – Windows Live Writer is making it easy to redo old posts; now that I’m actually getting traffic thanks to Slate, I’m going back and doing some overdue housekeeping.]
Up next on the reading list is Affective Computing. I really do need to get back into chapter three; I’m just putting it off, to be honest. I’m only working 20 hours a week right now, and that only because the boss doesn’t want to lay me off, so I can’t claim I “don’t have time or energy” for it. Funny: I used to dread research as the hardest part of writing; now I dive into it to avoid writing. Site traffic is essentially zero these days, apart from some spike days that seem to coincide with spam comments. I know that the whole point of writing a novel online is…to have a novel online, not just to keep posting more research. Soon, I think, I hope, knowing that waiting for “something to happen” isn’t going to cut the mustard.
One of those “gee whiz” articles from Malcolm Gladwell in the most recent New Yorker, this time about “Davids” vs. “Goliaths.” In a nutshell: history shows that Davids win when they refuse to play by Goliath’s rules. There’s a good deal about basketball strategy, which I skipped, having no knowledge of or interest in the sport, some historical material (mostly focusing on Lawrence of Arabia), and then this, which was very interesting to the Project:
In 1981, a computer scientist from Stanford University named Doug Lenat entered the Traveller Trillion Credit Squadron tournament, in San Mateo, California. It was a war game. The contestants had been given several volumes of rules, well beforehand, and had been asked to design their own fleet of warships with a mythical budget of a trillion dollars. The fleets then squared off against one another in the course of a weekend…Lenat had developed an artificial-intelligence program that he called Eurisko, and he decided to feed his program the rules of the tournament. Lenat…was not a war-gamer. He simply let Eurisko figure things out for itself. For about a month…Eurisko ground away at the problem, until it came out with an answer. Most teams fielded some version of a traditional naval fleet—an array of ships of various sizes, each well defended against enemy attack. Eurisko thought differently. “The program came up with a strategy of spending the trillion on an astronomical number of small ships like P.T. boats, with powerful weapons but absolutely no defense and no mobility,” Lenat said. “They just sat there. Basically, if they were hit once they would sink. And what happened is that the enemy would take its shots, and every one of those shots would sink our ships. But it didn’t matter, because we had so many.” Lenat won the tournament in a runaway.
The next year, Lenat entered once more, only this time the rules had changed. Fleets could no longer just sit there. Now one of the criteria of success in battle was fleet “agility.” Eurisko went back to work. “What Eurisko did was say that if any of our ships got damaged it would sink itself—and that would raise fleet agility back up again,” Lenat said. Eurisko won again.
…“Eurisko was exposing the fact that any finite set of rules is going to be a very incomplete approximation of reality,” Lenat explained. “What the other entrants were doing was filling in the holes in the rules with real-world, realistic answers. But Eurisko didn’t have that kind of preconception, partly because it didn’t know enough about the world.” So it found solutions that were, as Lenat freely admits, “socially horrifying”: send a thousand defenseless and immobile ships into battle; sink your own ships the moment they get damaged.
The price that the outsider pays for being so heedless of custom is, of course, the disapproval of the insider… “In the beginning, everyone laughed at our fleet,” Lenat said. “It was really embarrassing. People felt sorry for us. But somewhere around the third round they stopped laughing, and some time around the fourth round they started complaining to the judges. When we won again, some people got very angry, and the tournament directors basically said that it was not really in the spirit of the tournament to have these weird computer-designed fleets winning. They said that if we entered again they would stop having the tournament. I decided the best thing to do was to graciously bow out.”
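Lenat’s self-scuttling trick is a classic case of gaming a metric. As a toy illustration (my model, not the actual Traveller scoring rules, which the article doesn’t give: assume fleet “agility” is just the average over surviving ships, and that damage drops a ship’s agility to zero):

```python
# Toy model of the agility metric Eurisko gamed. The formula (a simple
# mean over surviving ships) and the numbers are invented for illustration;
# the article doesn't specify how Traveller actually scored agility.

def fleet_agility(ships):
    """ships: list of per-ship agility values; fleet score is the mean."""
    return sum(ships) / len(ships)

healthy = [4, 4, 4, 4]       # four fast ships
after_hit = [4, 4, 4, 0]     # one ship damaged and now immobile
after_scuttle = [4, 4, 4]    # ...so sink it

print(fleet_agility(healthy))        # 4.0
print(fleet_agility(after_hit))      # 3.0 -- the damaged ship drags the fleet down
print(fleet_agility(after_scuttle))  # 4.0 -- scuttling restores the average
```

Any average over survivors has this loophole: removing a below-average member raises the score, so a rule meant to reward mobility ends up rewarding self-destruction.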
But in fact, the strategies are only “socially horrifying” if the boats are crewed by humans, forced to drown at the first shot taken. If the same level of AI that designed and ran the strategy could replace the people onboard (or even if remote operators, like those who fly Predators today, were employed, a solution not available when the contest was held in 1981), then it’s actually less horrifying, pace the objections in Wired for War, since fewer people actually die in the conflict (at least fewer of “our” people). In short: the AI, unencumbered by the human “that’s impossible/unthinkable” handbrake, comes up with a solution that is successful but unacceptable (kamikaze boats), which the “human factor” can then modify (remote-control the boats), returning the solution to a state both successful and acceptable. I suspect that’s what human/AI cooperation will look like in the future.
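A toy simulation makes the “we had so many” logic concrete. Assume (this is my sketch, not the Traveller rules) that each surviving ship fires one shot per round, and that a shot, however powerful, can only sink one target, so overkill damage is wasted:

```python
# Toy round-based battle. Each surviving ship fires one shot per round;
# a shot hits exactly one enemy ship, and any damage beyond what sinks
# the target is discarded. All numbers are invented for illustration.

def take_fire(fleet, shots, dmg):
    """Apply `shots` incoming shots of `dmg` damage each to a fleet
    (a list of per-ship hit points). Shots land on the weakest ship
    first -- an assumption, chosen to favor the defenders."""
    for _ in range(shots):
        if not fleet:
            break
        fleet = sorted(fleet)
        fleet[0] -= dmg
        if fleet[0] <= 0:
            fleet.pop(0)   # sunk; overkill damage is not carried over
    return fleet

def battle(a, b, dmg_a, dmg_b):
    """Fight to the finish; both sides fire simultaneously each round."""
    while a and b:
        shots_a, shots_b = len(a), len(b)
        b = take_fire(b, shots_a, dmg_a)
        a = take_fire(a, shots_b, dmg_b)
    return "A" if a else ("B" if b else "draw")

# Traditional fleet: 10 tough ships (10 HP each) with big guns.
# Eurisko-style swarm: 100 one-hit ships with equally big guns.
print(battle([10] * 10, [1] * 100, dmg_a=10, dmg_b=10))   # prints "B"
```

Both sides carry the same punch per shot, but the traditional fleet’s powerful guns can each sink only one expendable boat per round, while a hundred incoming shots spread across ten hulls is immediately lethal. The swarm wins in one round with ninety boats to spare.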