
Ay CaRoomba

July 26, 2009

An article in the New York Times today on AI takes up the “will EROs kill us all” debate again. (We might as well coin an acronym for the Evil Robotic Overlords school of thought, omnipresent as it is.) This time, though, it’s AI scientists themselves who are

debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

The scientists of the Association for the Advancement of Artificial Intelligence “focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed… What happens if artificial intelligence technology is used to mine personal information from smart phones?” They want to make sure AI is created “ethically,” and that dangerous experiments are well-contained in sealed environments.

I think there are three key points that can’t be ignored in this debate. First, AI doesn’t kill people; programmers program AI to kill people. The danger is not the technology, but the hand on the keyboard behind it. I am more concerned about Hugo de Garis, an AI researcher who is enthusiastically giving his talents to the totalitarian dictators of China to aid them in their pursuit of the lead spot in the technology race, than about some for-profit hacker. We are more in danger from AI systems being used by governments to spy on and oppress their own people (already a reality, as facial recognition systems are matched to omnipresent cameras) than from “criminals” outside the system, who wish only to rob us, not jail or torture or kill us.

Second, while it is valuable to “assess the possibility of ‘the loss of human control of computer-based intelligences,’” it’s more important to ask ourselves what we’re afraid of, and what that says about us. As I’ve said before, there’s an absurdly primitive assumption at work here: that intelligence + opportunity = the will to power; i.e., that the human desire to rule over others is somehow hard-wired into us next to, or as part of, our intelligence, and that an AI with the capacity to rule us would do so because, well, that’s what a person would do! The arrogance and insecurity and amalgamated nutjobbery that would make a person go crazy with power must surely affect a computer, we reason, because what powerful genius doesn’t want to rule others? If power corrupts us, why, it would just have to corrupt a mere machine!

Finally, this atavistic fear of a powerful “other” can also be traced back to our own subhuman days, when anything that was bigger than you and smarter than you and faster than you probably saw you as lunch. But computers don’t need to eat us to survive, or kill us to take our watering hole, and those old fear responses just aren’t helpful in this case.

Hopefully the final report will be less concerned with criminals and more with governments – and corporations. Think about how corporations today use and misuse the vast catalog of data at the Medical Information Bureau, for instance, to deny coverage or even refuse to hire people who would jack up their health premiums. It’s hard to be more worried about an AI using data to screw up your life than about a human with legal and “legitimate” access to it, or more worried about an identity thief than about a person who can deny you a livelihood, or even continued life, based on the most personal data there is: your health records. The ethical dangers we already face are what these scientists should be addressing, along with how AI could help us solve the problems people create, rather than being used to build profiles of “risky” employees or to redline professions or geographical areas in sophisticated, undetectable, and apparently legal ways.

Wired for War author P. W. Singer wrote a good article at Slate.com a while ago on this subject; pity Times reporter John Markoff didn’t reference it. Instead, today’s article is full of the same tropes with which Markoff filled a piece on the same subject on May 29th, timed to the release of the phenomenally crappy Terminator movie: Kurzweil, the Singularity, Vinge, Joy, the Rapture of the Nerds. That article also blacked out Singer’s work, which is odd considering how much work Singer put into his excellent book on the very subject of today’s article, which gives only passing mention to Predator drones and the autonomy of military weaponry.

Weird: the original title of the article in the print version was “Ay Robot! Scientists Worry Machines May Outsmart Man.” The “Ay Robot!” part is gone from the web version. Did someone denounce the joke?
