Affective Computing (part 1)

June 30, 2009

I’m halfway through Rosalind Picard’s Affective Computing, and I’m glad I picked it up. It’s rare that a book about technology is still relevant, never mind ahead of the curve, 12 years after publication. But much of the book’s relevance stems from the fact that AI is still far behind the curve when it comes to incorporating human emotion into its calculations. Picard opens by stating, “I never expected to write a book addressing emotions.” A scientist by training, and a woman in science who has felt the pressure to be even less emotional than her male counterparts, Picard has nonetheless arrived at the conclusion that computers will hit a wall of usefulness unless we can give them the ability, if not to “feel” emotions, then at least to sense ours and act accordingly.

She wisely sets aside philosophical considerations as to whether or not a computer can “have” feelings, and focuses on how a computer can best read and react to human feelings in the most constructive manner. She points out how humans lacking emotion, due to frontal-lobe disorders or other anomalies, fail to make rational decisions since the emotional pain of learning from experience is essential to improved decision-making in future situations. While we can’t expect computers to feel pain, it’s still essential that we remember the value of regret, anger, disappointment and other emotions in our own process improvements. We don’t operate purely on algorithms, but computers do – and without incorporating some echo of our own not entirely rational decision-making processes, we can never expect them to make “humane” decisions or recommendations that will work for us. Our intuition functions as a deep set of rules encompassing not just raw data but experiences and the feelings associated with those experiences; we “just know” some things will turn out to be correct – hunches about people, the weather, a project – based on a wider set of data than cataloged facts alone.

Affective response is part of passing the Turing Test, Picard argues. “To fool the test-giver, the computer would need to be capable of recognizing emotion and synthesizing a suitable affective response.” Picard notes later in the book that Weizenbaum’s ELIZA was “effective in part because users were generous in attributing understanding to it,” but it’s also true that it was designed to mimic a person whose mandate was to suppress any affective response – the therapist as neutral Rogerian observer – thus elegantly dodging the need to prove “humane” behavior.

For a computer program to best serve us, Picard argues, it needs to be aware of our feelings and able to formulate appropriate responses. Awareness can be supplied through a variety of channels, few of which are technologically available yet (although Microsoft’s new Natal physical-movement recognition system is a great leap forward) – the ability to read not only broad markers like facial expressions, voice intonation and gestures, but also subtle cues like body temperature, pupil dilation, heart rate, blood pressure and perspiration. A computer suitably equipped not only to register but to correctly interpret these verbal and non-verbal cues would be able to change its communication style as well as a (sensitive) person does.

But emotional states aren’t easily coded into a rule set, Picard realizes. Each person has a different physiological response to the same stimuli, and context is critical, as is experience with that person – someone might read as “a cold fish” if he doesn’t blink when presented with something shocking or heart-rending, but long familiarity may prove that he’s just one of those people who is highly emotionally suppressive – the feelings are there, the expression is not.

The more time we spend with a computer, the more likely it is to be able to sense our “private” emotions, expressed through pulse, temperature, dilation, etc. Picard makes an obvious, yet surprising point: since many of us spend more time touching our keyboards than we spend touching people, “computers are in a unique position to sense affective signals that are personal, as well as to perceive those that are public.”

So what are the uses of a computer that can feel, and interpret, our emotional states?  Plenty, and Picard details them.

Suppose that late in the afternoon Chris sometimes gets in a peculiar mood where he cannot concentrate on work and finds it relaxing to play 30 minutes of games on the Web. It may be that his physiological signals form a characteristic pattern where he gets in this state…it may occur with enough regularity that it is useful for the computer to identify its physiological pattern, and to notice that it is a good predictor of some of Chris’s behaviors. If typically this state precedes Chris’s request for some game software, then the next time the computer detects this state it might preload the software, saving time if Chris decides to play.
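(To make the idea concrete for myself – and this is entirely my own sketch, nothing from the book – here’s roughly what that “notice the pattern, preload the game” loop might look like. Every sensor channel, stored pattern, and threshold below is invented for illustration.)

```python
# My own toy sketch of Picard's "Chris" scenario -- the channels, the stored
# pattern, and the threshold are all invented for illustration.
from dataclasses import dataclass
from math import sqrt


@dataclass
class PhysioSample:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens
    typing_speed: float      # keystrokes per minute


# A "restless late afternoon" pattern, assumed to have been learned from
# sessions that previously preceded Chris asking for a game.
RESTLESS_PATTERN = PhysioSample(heart_rate=88.0, skin_conductance=6.5, typing_speed=70.0)
MATCH_THRESHOLD = 0.15  # arbitrary cutoff on normalized distance


def distance(a: PhysioSample, b: PhysioSample) -> float:
    """Normalized Euclidean distance between two physiological snapshots."""
    scales = (100.0, 10.0, 200.0)  # rough magnitude of each channel
    diffs = (
        (a.heart_rate - b.heart_rate) / scales[0],
        (a.skin_conductance - b.skin_conductance) / scales[1],
        (a.typing_speed - b.typing_speed) / scales[2],
    )
    return sqrt(sum(d * d for d in diffs))


def maybe_preload_game(sample: PhysioSample) -> bool:
    """If the current readings resemble the pattern that usually precedes a
    game request, warm the game up in the background so it's ready if asked for."""
    if distance(sample, RESTLESS_PATTERN) < MATCH_THRESHOLD:
        print("Pattern matched: preloading game software...")
        return True
    return False


if __name__ == "__main__":
    late_afternoon = PhysioSample(heart_rate=90.0, skin_conductance=6.2, typing_speed=68.0)
    maybe_preload_game(late_afternoon)
```

Nothing clever going on there – the point is just that “identifying a physiological pattern” can reduce to matching a feature vector against a stored one and acting when it’s close enough.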

Picard describes what we can call the “affective bandwidth” of different communication forms, i.e., how much emotion is carried across. “Email usually communicates the least affect, phone slightly more, videoconferencing more still, and ‘in person’ communication the most.” Picard envisions a computer that can read when you’re “typing happily” from the speed and intensity of your keystrokes, maybe even from reading your face, and pass on some of that message along with the email.
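(Again, a hedged sketch of my own rather than anything Picard specifies: reading “typing happily” could start with a couple of keystroke-dynamics features and a rule of thumb, with the inferred mood riding along as an extra header on the message. The feature names and thresholds are all made up.)

```python
# Toy illustration (mine, not Picard's) of tagging an outgoing email with the
# mood inferred from how it was typed. Features and thresholds are invented.
from dataclasses import dataclass


@dataclass
class TypingSession:
    chars_per_minute: float
    backspace_ratio: float    # fraction of keystrokes that were corrections
    mean_key_pressure: float  # 0.0-1.0, if the keyboard could report it


def infer_affect(session: TypingSession) -> str:
    """Rule-of-thumb classifier: fast, light, low-correction typing reads as
    'upbeat'; slow or heavy-handed, heavily corrected typing as 'strained'."""
    if session.chars_per_minute > 250 and session.backspace_ratio < 0.05:
        return "upbeat"
    if session.chars_per_minute < 120 or session.mean_key_pressure > 0.8:
        return "strained"
    return "neutral"


def annotate_email(body: str, session: TypingSession) -> str:
    """Attach the inferred affect as a header, so a little of the bandwidth
    lost in email travels along with the message."""
    return f"X-Inferred-Affect: {infer_affect(session)}\n\n{body}"


if __name__ == "__main__":
    upbeat = TypingSession(chars_per_minute=280, backspace_ratio=0.03, mean_key_pressure=0.4)
    print(annotate_email("See you at the demo on Friday!", upbeat))
```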

Moreover, computers could have the ability to induce emotion – to know when to crack a joke, show a funny video, communicate exciting news, or express compassion in such a way that it alters the user’s mood, even to the extent of “motivat[ing] adaptive behaviors.” One of the trickiest parts of coding affective AI will be creating the “judgment engine” that guides the AI’s hand in deciding when and how to intervene without seeming to “interfere.” Currently, computers are “like autistics who are also good at memorizing patterns and lists, but not good at understanding emotional significance, or at responding suitably.” Ironically, Picard notes, it may be an emotionally intelligent program that could be most helpful to autistic people in learning how to live with people:

One way to help autistic people is to have a trained person sit down with them and repeatedly walk through situations to help them learn how to understand and respond. However, the helper is prone to lose patience in the tedious repetition of endless situations…Computers with an ability to teach this understanding – via games, exploratory worlds, virtual social scenarios, and other interactions that provide repetitive reinforcement, could be developed with present technology.

Such programs could also function as “affective mirrors” for everyone – you could practice a job interview, a speech, a difficult personal conversation, and “see” how you appear to others through the feedback the program gives you from its reading of your “sentic” markers.

Twelve years ago, when Picard wrote the book, there was significantly more resistance to the idea of a computer “knowing” you than there is today. If anything, we’ve moved into a state of compulsive oversharing, in which it will hardly be uncomfortable for a non-judgmental computer to know “too much” about you, especially if you are one of the millions who tweet and blog and update Facebook status constantly. The idea of an “all-seeing” digital assistant who “watches” your eating, sleeping, TV watching, shopping, and exercise patterns is hardly an uncomfortable concept when we record and share so much of that information already. And given the benefits of a system that could be – well, a friend and companion as much as a helper – it’s easy to imagine millions of potential consumers embracing it.

The downside, as Picard notes, is that given a database of information so “highly personal, intimate and private,” “these models might be particularly sought after, e.g., in lawsuits, insurance matters, and by prospective employers.” The more your feelings are recorded and classified,

[T]he opportunity arises for someone, somewhere, to know something about your feelings, and possibly to try to control them. Who cares about your feelings? Politicians certainly do, as well as advertisers, co-workers, marketers, prospective employers, and potential lovers. One can imagine some malevolent dictator requiring people to wear emotion ‘meters’ of some sort…

My theory is that people are more willing to surrender their privacy the surer they are that no disastrous consequences will ensue. That is, in the old days, being frank about your sex life could have you shunned or branded with a scarlet “A,” and even in recent times (say, the 1950s) just being divorced could make you unemployable or unable to find a decent apartment. The less shame, the less power the judgment of others has over you, the more willing you are to reveal your life – the risk of punishment receding as the reward of connection with like-minded others increases. So why not share with a computer, if the risk/reward ratio alters likewise? Safeguards would be needed, legal and social – i.e., if you were to use your computer as a therapist, the company that made the “therabot” would not be able to access your conversations, nor could any potential employer or insurer. (Or, in an ideal world, one’s illnesses would be no obstacle to employment because a company’s health insurance premiums – if we still don’t have national health care – wouldn’t be jacked up by taking on someone with a chronic illness.)

Idle thought for the novel – in the future, would a child have the same AI throughout her life? Would it change personas as the child developed? Children tire of real and imaginary friends, retiring and replacing them, but a core AI with, eventually, decades of “reading” its person’s thoughts and emotions would be too valuable to discard. It could change or even switch out its personality or personalities over time, remaining fresh and useful. The question, which I propose to lay out in part two of the book, is what kind of control that would give a corporate entity that supplied the hardware and software. Imagine a subscription-based service, where if you stopped making your payments, your lifetime companion simply disappeared one day…all the more reason for a free, open-source approach to this kind of AI.

In part two of her book, Picard lays out her thoughts on “Building Affective Computing.” Given how long it’s taken me to get around to reading part one, I imagine it may be a while before I get to/through it, so there may well be other posts on other subjects before I finish the book.
