
Alex and Me (conclusion)

April 6, 2009

I’ve been familiar with the story of Alex the parrot for a while, but, like so many others, I didn’t realize until he died and his story went viral how important Irene Pepperberg’s work was, not just in the field of human-animal communication and animal cognition but in terms of “the possible.” 

The “fortress of human uniqueness” Pepperberg criticizes is a concept that has been applied to AI as well as to animals – man’s need to be, so to speak, the alpha dog.  As the commenter on that New York Times article I linked to last week said, the “philosophical obsession with whether computers are brain-like is really based on fear, a kind of modern day Copernican assault on our primacy.  Again.”  It’s astonishing to me how people with advanced degrees like the authors of that article can so casually toss off semi-sarcastic certainties about the threat of our “robotic overlords.”  I still have a lot of reading to do on this project, but I’m looking forward to finding someone who’s studying the cognitive dissonance inherent in holding simultaneously that computers can never equal man’s reasoning power and that computers will soon be all-knowing and all-powerful and rule us as overlords/Terminators/post-nervous-breakdown HAL 9000s. 

Alex the parrot, of course, is not an AI.  He is a living being who clearly displays innate intelligence and personality, which merely needed to be unlocked in a systematic and loving fashion.  If AIs have personality, it will be the personality(ies) that we give them.  Just as we teach our children our own politics and prejudices, so the first real AIs will reflect our own.  FSM help me for paraphrasing the NRA, but robots don’t kill people – programmers do.  Of course there will be mechanical disasters such as the computer-controlled machine gun I wrote about in my Wired for War review, but as we see in the news every day, people are more prone to mass-murdering malfunctions than machines are.  The danger from AI or robots is no different from the danger posed by any homicidal religious lunatic’s skill at making IEDs or pipe bombs; it’s the intent of the user that makes technology deadly. 

If AIs reason, they will be following the steps we lay out for them, accessing their databases as we access ours, contrasting and comparing and pattern recognizing just as Alex learned “three corner wood” and “green bean.”  As a parrot, Alex should be the most adorable and harmless creature to delight us with the knowledge that we aren’t alone in the universe when it comes to thinking and talking, and yet, so many people have denounced Pepperberg and other animal linguists out of what can only be described as an identity crisis.  I suppose we can hardly be surprised when a reasoning AI or the possibility of it is denounced and renounced with even more fervor than that reserved for our fellow members of the animal kingdom. 

I had a bit of a project identity crisis reading this book.  Pepperberg herself refers to her work as “The Alex Project” multiple times; I swear I had no idea when I named this site.  I wondered if I should change the name, but honestly, I do think I’m working in the same spirit.  “The Alex Project,” I’ll state for anyone who cares, is definitely a homage to Pepperberg’s, and Alex’s, work.  If someone sues me, I’ll change it, but I think it’s appropriate.

A funny aside:  was Alex gay?  When Pepperberg was at MIT, she and Alex shared their work space with a couple of grad students:

Spencer had been a grad student with me at Tucson, where he had been Alex’s absolute favorite person.  I had a special relationship with Alex, obviously, but in general Alex preferred guys, especially tallish guys with longish hair, like Spencer.  Alex would often pad around the Tucson lab, looking for Spencer.  When Spencer picked him up, Alex would run up his arm, perch on his shoulder, and perform the Grey’s mating dance.  Spencer was the only person Alex called by name.  He used to say, “Come here, Ser.”

Alex enjoyed sitting in a lobby close to the lab, looking out the window.  “He liked to wolf-whistle at boys who walked through the lobby, much to the consternation of the girl students tending him.”

During the early part of 2007, Alex became hypersexual with his favorites.  Poor Steve Patriarco.  For about six months, whenever Steve picked up Alex, he raced up to Steve’s shoulder, where he puffed up his feathers, danced from foot to foot, and regurgitated food.  It got to be quite ridiculous.  Alex was also distinctly uninterested in working during this period.

Up next:  Joseph Weizenbaum’s classic, Computer Power and Human Reason.  Weizenbaum is famous for creating the first chatbot, ELIZA.  I’ve only read the preface so far, but it looks promising.  Having created ELIZA as a joke, a “parody of a Rogerian therapist,” and then seen it taken seriously by some therapists themselves as a viable alternative to the human version, Weizenbaum came to the conclusion, as he states in his introduction, that “there are limits to what computers ought to be put to do.”  I like the way he frames this: not what they can do, but what they ought to be put to do.
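For the curious: the Rogerian trick ELIZA popularized is surprisingly simple, which is part of Weizenbaum’s point.  Here’s a toy sketch of the idea in Python – my own illustration, not Weizenbaum’s actual program – where a handful of hypothetical keyword patterns catch the user’s words and echo them back as a question, swapping “I” for “you”:

```python
import re

# First-person words to swap for second-person ones before echoing back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few illustrative keyword rules (real ELIZA had a much larger script).
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words in a captured fragment for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence):
    """Return the first matching rule's question, or a contentless prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the classic Rogerian fallback

print(respond("I feel anxious about my thesis"))
# "Why do you feel anxious about your thesis?"
```

There is no understanding anywhere in there, just string matching, which makes it all the more striking that people confided in the original as if there were.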
