Love and Sex With Robots (part 1)
A refreshingly idle long weekend. I’ve had Love and Sex With Robots sitting on my shelf for about six months, thinking, I suppose, that it would probably be a bit silly, and therefore postponing reading it. However, it’s proving useful, partly for the research author David Levy cites, but probably more as a jumping-off point for my own thoughts, including some issues tangentially raised in the book but not really addressed.
I don’t know how useful the second part of the book (the “Sex” part) will be, but the “Love” part is interesting. Levy spends a good deal of time discussing how people already form attachments to AI, especially in embodied forms like AIBO and the Tamagotchi. Children already anthropomorphize their toys, a form of “transference” that is all the more powerful when the toy provides the kind of feedback formerly only available from a living thing. This makes me think of two lines of objection and attack that will arise when commercial AI becomes more readily available in robot/toy form. First, resistance on the left to the seductive marketing power of a toy that, like Funzo on The Simpsons, could “brainwash” children into buying accessories or anything else the beloved AI says is “cool.” And on the right, outrage from the holy rollers: soon to be deprived by changing attitudes of their last viable human enemy, the gays, they may well find a new, “safer” target in “Satanic” AI that lays claim to more human capabilities (especially robots with sexual components).
Another line of thought Levy hasn’t pursued so far in the book (admittedly I’m about 100 pages in, but there have definitely been opportunities) is how “smart toys” could be used to identify socio/psychopaths early: the small animals cruelly tortured by budding killers can’t report their abuse, but a robot, theoretically, could. This opens a legal and ethical can of worms: is “torturing” a machine acceptable, especially if it substitutes for what a person might otherwise do to another person? Or is this kind of behavior objectionable and dangerous enough in any form to be “reportable” under the same ethics rules that govern social workers or health care professionals?
Levy takes research into human interactions, the kind that lead to love and/or friendship, and makes a compelling case for how we would respond emotionally to AI that fulfills the same roles as the people in (or not in) our lives. Proximity to another, in and of itself, is a strong predictor of the likelihood of some kind of relationship developing. (Or, as Dr. Lecter put it, “we covet what we see.”) And as we’ve seen more often with the rise of Internet “romance,” people are capable of falling in love based solely on an exchange of emails and instant messages – with no more guarantee that the person on the other end is real (or really who they say they are) than you would get with a program. Maybe it’s already been said elsewhere, but if not, I’ll lay claim to it as Outland’s Law #1: The need to attach, if strong and unfulfilled, will outweigh what is seen as the appropriateness of the attachment, whether that be with another person or a machine.
Levy discusses the research on people’s current attachment to inanimate objects, be they hackers who “love” their machines and use the computer as the shield/interface between themselves and others, their interaction with other humans mediated by the screen, or children, who have always used teddy bears and dolls as “transitional objects,” which help them move from being wholly dependent on a mother to fumbling their way towards independence. (On a side note, Levy cites John Dewey’s 1934 book Art as Experience – I wonder if Matthew Crawford’s Shop Class as Soulcraft cites it, as it discusses – in Levy’s words – the “experience…created by the relationship between a person and the tools that they use,” and Dewey even uses an auto mechanic as an example of someone who “develops a relationship with the engine.”)
The research he cites will be invaluable to anyone looking to create an AI to which people will want to attach – for instance, the tendency of people to “disclose” more information when the other party, human or not, has disclosed first, and the four benefits of friendship as described by Steve Duck in Understanding Relationships: a sense of dependability; emotional stability (though from what I read, “like-mindedness” would be a better description of this benefit); physical, psychological, and emotional support; and “reassurance about one’s worth as a person,” i.e., having the respect of someone whose opinion matters to you. The more we document the forms of speech and affect that promote attachment among humans, and between humans and animals, the more we know what needs to be duplicated in AI. It is easier to code what has been codified.
I think Levy overreaches when he says that people will treat robots like humans by “the middle of this century.” So many people still treat other people, and animals even more so, in an inhumane manner that it would be miraculous if that behavior changed significantly in the next forty years – never mind how machinery will be manhandled.
Levy offers up the fact that robots need never die as a compelling reason to attach – they can always be repaired or restored from backup. Not mentioned (at least not yet) is any research on how the fleeting nature of life and love itself gives attachment its power – we know instinctively that what is loved will be lost, maybe before our own death and maybe after. How will the knowledge that the loved one is immortal and indestructible affect our feelings and behavior? Isn’t the transitory nature of beauty and passion what gives them their spark?