Alex and Me (Prelude)
Working on my review of Alex and Me; hope to have it done tomorrow a.m. Yesterday morning I started writing it, but somehow “automatic writing” took over and the following came out instead. I imagine it’s probably a bit of a repetition of what I wrote early on, but then, that’s part of writing a novel – framing and reframing your ideas. I think reading about Pepperberg’s friendship with Alex started me off on this tangent, got me thinking about the development of Caroline’s friendship with her own “Alex.”
When I first thought of this project, I saw the artificial intelligence at the heart of the story in a very different way than I do now. What I knew of AI was what most people knew, i.e., HAL and friends. I didn’t share the Western-Luddite view that AI would rise to terminate us, nor did I ever subscribe to the Man’s Soul Sets Him Apart school that so often supplies the first premise of arguments about the impossibility of cognition for anyone or anything other than Homo sapiens. I’m an atheist, so my reasoning isn’t clouded by an insistence that souls have anything to do with it, and I knew from my own experience how much more personality and intelligence the animals in my life had shown than many of the people I’d met – which gave the lie to the idea that only Man could think.
If there was one thing I was afraid of, it was that if and when “AI” as we know it from science fiction did become a reality, it would surely be a commercial product that could be engineered to manipulate us in ways advertisers can only dream of now, with their simplistic gruff Hal Riney voices (and his avatars, now that he’s dead) selling Miller Lite and George Bush, or their Billy Mays “pay attention now” shouting. It was the idea that if it was real enough, you could fall in love with it as you would a person – or an animal. And the problem with that was that your love would be used against you, oh so subtly, by whoever pulled the strings of this commercial product, to sell you shit, from politicians to Obnoxi-Clean. My original idea of the AI was a rather unsubtle huckster, introduced into your household through the march of inevitability, a personal assistant who becomes as “necessary” as a cable/dish box or cell phone or microwave, who was always selling you something as he went about his daily business of recording your shows and managing your calendar, a sort of living AdSense. (All the more reason to heartily endorse an open-source takeover of the whole AI project, the only way to ensure that this new intelligence would be an altruistic being.)
Later I realized that an AI, to be acceptable to a wide variety of people, would have to be tailored to each “type” of person in the market – a culture vulture hunting up obscure Baroque concerts and recordings for the New Yorker crowd, a yee-hah redneck for the NASCAR crowd. And that for AIs to have such distinct personalities, they would have to be field tested and tinkered with, early adopters and testers from across the market spectrum used to refine each “version” of the AI for general release, after which of course each model would redefine itself to the individual user. And that the advertising would have to be much subtler, and would often have to wait until the AI was indispensably ingrained in your life – until your emotional attachment made your monthly payment to keep this new friend as essential to your well-being as paying your rent and health insurance premium, until your trust of its recommendations was as complete as your trust of the recommendations of your friends. When a friend recommends a song or a book, we feel obligated to at least check it out, to pay due respect to their opinion; wouldn’t we do the same if the AI became as much a friend as the people we see more rarely than we would our personal digital assistant?
The “danger” isn’t so much that AI would grow to hate us and conquer us as that we would grow to love it, perhaps too much – that if Mind can be recreated, at least to a point that, in extended conversation and daily interaction, it would satisfy most people, if the “uncanny valley” of consciousness mimicry can be crossed, if the best parts of people (humor, intellect, sharing, companionship) can be reproduced without the worst parts (bad moods, prejudices, withdrawal, poor communication, angry lashing out, manipulation), who wouldn’t choose the AI as their new friend? With humans, we accept some level of bad behavior; we tolerate it because we have to. I doubt most people would tolerate it in something we pay for, something we know can be engineered to be “better.”
And of course there’s the “Terminator factor” – as in the lines Linda Hamilton speaks in T2 as John bonds with his Terminator:
Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him. It would never hurt him, never shout at him, or get drunk and hit him. Or say it was too busy to spend time with him. It would always be there, and it would die to protect him. Of all the would-be fathers that came and went over the years, this thing, this machine, was the only one that measured up. In an insane world, it was the sanest choice.
People – and animals – die. They leave us, relocate, betray us, get married, change jobs, find God or AA and disappear. Imagine the allure of a friend who’s always there (as long as you make the payments), who won’t leave and won’t die.
But on the bright side, if we came to spend that much time with this “perfect person,” wouldn’t we expect the same good behavior from other people? Wouldn’t it in turn force us to be better people if we wanted to retain and increase our real-world connections? “Why can’t you be more like your brother?” is heard all the time; why not “Why can’t you be more like the AI?” Humans and AIs could be like Goofus and Gallant – yeah, Gallant’s a dreary goody-goody, but Goofus is clinically depressed; which would you rather be?