
Do be careful! That’s concentrated evil.

September 2, 2009

August is almost over, and my summer slump with it.  In fact, I’m almost even ready to deal with the novel as well as the research.  Thank FSM for the coming of fall and, so the forecast says, a high actually in the 70s this weekend.  I’m such a shut-in in hot weather that it’s going to be great to get out and about after work instead of running for the shelter of the AC.

I know a lot of people would laugh at the idea that a cat dying could be enough to knock you off your orbit, but – to get it off my chest, as I’m going to have to deal with it in the novel – after a certain amount of loss in your life, you either go numb or get more sensitive to each new loss.  Me, I get more sensitive each time and then try to numb the feelings as best I can.  That cat was 17 years old, and when she died, I lost the last link to a time when I had more friends living than dead.  In fact, the friends I have now were all acquired within the last eight years, and she and I are, or were, some of the last survivors of AIDS in San Francisco in the early 90s.  (I know, with that confession, there goes my last chance of ever getting a job with insurance.)  I know one other person from that time who’s still alive.  So yeah, it takes it out of you, especially when you’re writing or trying to write a “serious” novel.

Anyway.  Link roundup is overdue.  In relation to my suspended (soon to be resumed) review of The Craftsman, which has some criticism of the mindset behind CAD, here’s an article by Mythbuster Jamie Hyneman in which he says pretty much the same thing. 

Related to my review of the increasingly prescient Wired for War, here’s another PopSci article, on the boom in Predator and similar drones and the exponentially increasing need for pilots.

Three more links in the AI department.  An article in the New York Times about “sentiment analysis,” in which our evil marketing overlords attempt to read our minds based on tweets, comments, reviews and other popular opinion sources.  As anyone passingly familiar with computational linguistics could tell you, this is a sticky wicket:

A quick search on Tweetfeel, for example, reveals that 77 percent of recent tweeters liked the movie “Julie & Julia.” But the same search on Twitrratr reveals a few misfires. The site assigned a negative score to a tweet reading “julie and julia was truly delightful!!” That same message ended with “we all felt very hungry afterwards” — and the system took the word “hungry” to indicate a negative sentiment.

Translating the slippery stuff of human language into binary values will always be an imperfect science, however. “Sentiments are very different from conventional facts,” said Seth Grimes, the founder of the suburban Maryland consulting firm Alta Plana, who points to the many cultural factors and linguistic nuances that make it difficult to turn a string of written text into a simple pro or con sentiment. “ ‘Sinful’ is a good thing when applied to chocolate cake,” he said.

The simplest algorithms work by scanning keywords to categorize a statement as positive or negative, based on a simple binary analysis (“love” is good, “hate” is bad). But that approach fails to capture the subtleties that bring human language to life: irony, sarcasm, slang and other idiomatic expressions. Reliable sentiment analysis requires parsing many linguistic shades of gray.
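Just to show how blunt that keyword approach is, here’s a rough Python sketch of my own – the word lists and scoring are made up for illustration, nothing that Tweetfeel or Twitrratr actually use – and it happily trips over the very tweet quoted above: one upbeat word, one “negative” word, and the verdict washes out, context be damned.

```python
# Naive keyword-based sentiment scoring -- a toy illustration only.
# The word lists below are invented for this example; real systems use far
# larger lexicons and still stumble on irony, slang, and context.

POSITIVE = {"love", "delightful", "great", "wonderful", "liked"}
NEGATIVE = {"hate", "awful", "terrible", "hungry", "boring"}

def naive_sentiment(text: str) -> str:
    words = text.lower().replace("!", " ").replace(",", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweet = ("julie and julia was truly delightful!! "
         "we all felt very hungry afterwards")
print(naive_sentiment(tweet))  # "neutral" -- "hungry" cancels out "delightful"
```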

In the “Selfish Gene” department, here’s an article on robots who are evolving the capacity to deceive other robots:

The team programmed small, wheeled robots with the goal of finding food: each robot received more points the longer it stayed close to "food" (signified by a light-colored ring on the floor) and lost points when it was close to "poison" (a dark-colored ring). Each robot could also flash a blue light that other robots could detect with their cameras.

"Over the first few generations, robots quickly evolved to successfully locate the food, while emitting light randomly. This resulted in a high intensity of light near food, which provided social information allowing other robots to more rapidly find the food," write the authors.

The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.

Because space is limited around the food, the bots bumped and jostled each other after spotting the blue light. By the 50th generation, some eventually learned to not flash their blue light as much when they were near the food so as to not draw the attention of other robots, according to the researchers. After a few hundred generations, the majority of the robots never flashed light when they were near the food. The robots also evolved to become either highly attracted to, slightly attracted to, or repelled by the light.

"Because robots were competing for food, they were quickly selected to conceal this information," the authors add.
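The breeding scheme they describe – score each robot by its time near food, copy and combine the neural networks of the winners, sprinkle in random mutations – is essentially a plain genetic algorithm.  Here’s a minimal sketch of that loop in Python; the population size, mutation rate, and especially the dummy fitness function are placeholders of mine, not anything from the actual robot experiments.

```python
import random

# Toy genetic algorithm in the spirit of the experiment described above.
# A "genome" here is just a list of neural-net weights; the real work of
# simulating robots, food, poison, and blue lights is stubbed out.

GENOME_LEN = 10       # number of weights per robot (arbitrary)
POP_SIZE = 20         # robots per generation (arbitrary)
MUTATION_RATE = 0.05  # chance each weight gets a random tweak

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for "points for time near food, minus points near poison".
    # A dummy score so the loop runs end to end.
    return sum(genome)

def crossover(a, b):
    # Combine two successful parents, gene by gene.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome):
    # Small random changes, mimicking biological mutation.
    return [w + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else w
            for w in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    # Rank by fitness and keep the top half as parents.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]
    # Next generation: offspring of random parent pairs, plus mutations.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness after 50 generations:", fitness(max(population, key=fitness)))
```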

And, best of all, this article on the quest to create a “pure evil” AI, fulfilling the wildest dreams of every poorly informed reporter who reaches for Terminator tropes whenever the subject of AI comes up:

This exercise resulted in "E," a computer character first created in 2005 to meet the criteria of Bringsjord’s working definition of evil. Whereas the original E was simply a program designed to respond to questions in a manner consistent with Bringsjord’s definition, the researchers have since given E a physical identity: It’s a relatively young, white man with short black hair and dark stubble on his face. Bringsjord calls E’s appearance "a meaner version" of the character Mr. Perry in the 1989 movie Dead Poets Society. "He is a great example of evil," Bringsjord says, adding, however, that he is not entirely satisfied with this personification and may make changes.

The researchers have placed E in his own virtual world and written a program depicting a scripted interview between one of the researcher’s avatars and E. In this example, E is programmed to respond to questions based on a case study in Peck’s book that involves a boy whose parents gave him a gun that his older brother had used to commit suicide.

The researchers programmed E with a degree of artificial intelligence to make "him" believe that he (and not the parents) had given the pistol to the distraught boy, and then asked E a series of questions designed to glean his logic for doing so. The result is a surreal simulation during which Bringsjord’s diabolical incarnation attempts to produce a logical argument for its actions: The boy wanted a gun, E had a gun, so E gave the boy the gun.

By the end of the year, Bringsjord and his team hope to have completed the fourth generation of E, which will be able to use artificial intelligence and a limited set of straightforward English (no slang, for example) to "speak" with computer users.

Actually, “E” looks more like a soccer hoolie to me.
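For what it’s worth, the “logical argument” E produces in that interview is about as deep as a single if-then rule.  Here’s a toy bit of Python – entirely my own illustration, nothing to do with Bringsjord’s actual system – showing the kind of means-end chaining that yields “the boy wanted a gun, E had a gun, so E gave the boy the gun.”  Note there is, deliberately, no rule anywhere that weighs consequences.

```python
# Toy means-end rule, my own illustration only -- not Bringsjord's code.
# The single rule: if someone wants a thing and I have that thing, hand it over.
# There is deliberately no rule that considers consequences.

facts = {
    ("wants", "the boy", "gun"),
    ("has", "E", "gun"),
}

def justify_giving(agent, item, recipient, facts):
    """If the rule fires, return the E-style 'argument' for giving the item away."""
    if ("wants", recipient, item) in facts and ("has", agent, item) in facts:
        return (f"{recipient} wanted a {item}, {agent} had a {item}, "
                f"so {agent} gave {recipient} the {item}.")
    return None

print(justify_giving("E", "gun", "the boy", facts))
# -> "the boy wanted a gun, E had a gun, so E gave the boy the gun."
```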

Funny aside – the traffic patterns for the site are almost exactly the same for August as they were when I was posting regularly – 2, 3, 16, 0 – entirely unrelated to whether I’m writing or not.  So I guess that’ll learn me not to take stats too seriously.
