Computer Power and Human Reason (Part 1)

April 28, 2009

Joseph Weizenbaum’s Computer Power and Human Reason is 33 years old now and out of print, yet it is still consistently referred to in discussions about AI.  This is because, unlike many predictive tech books, Weizenbaum’s holds up well: he focuses on the human issues around technology, which haven’t really changed since publication.

He was prompted to write the book by his horrified reaction to the over-enthusiastic adoption of his creation, ELIZA, the therapy chatbot he’d built as “a parody of a Rogerian therapist.”  A number of psychiatrists hailed ELIZA as a breakthrough, what one called a potential “therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists.”  Weizenbaum was also “startled” to see “how quickly and how very deeply people conversing with DOCTOR [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it.”  He was concerned that people ascribed more powers to ELIZA than it actually possessed, based on the “enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand…if, as appeared to be the case, the public’s attributions are wildly misconceived, then public decisions are bound to be misguided and often wrong.”
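It’s easy to forget how simple ELIZA actually was.  As a rough illustration (my own toy sketch in Python, not Weizenbaum’s original MAD-SLIP program; the rules and the respond helper here are invented for the example), the whole trick is keyword-spotting plus pronoun reflection:

```python
import re
import random

# A minimal, hypothetical sketch of ELIZA-style reflection: spot a keyword
# pattern, swap pronouns, and echo the user's words back as a question.
# These rules are illustrative; Weizenbaum's actual DOCTOR script was far
# larger and written in MAD-SLIP, not Python.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"(.*)", ["Please go on.", "What does that suggest to you?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule's template, filled with the reflected match."""
    for pattern, templates in RULES:
        match = re.match(pattern, statement.lower().rstrip(".!"))
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my work"))
# e.g. -> "Why do you feel anxious about your work?"
```

That a few dozen lines of pattern-matching could induce such deep emotional involvement is precisely what alarmed him.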

Weizenbaum lays out his concerns in three points.  The first, possibly less relevant now in our more “holistic” (and more computer-savvy) era, is his fear that “man has finally been recognized as nothing but a clock-work,” that even a therapist is nothing but a rules-application machine.  B. F. Skinner’s Beyond Freedom and Dignity was only six years old when Weizenbaum wrote, its themes of “cultural engineering” and “operant conditioning” reflecting the then-popular idea that man was a machine, programmable and modifiable.  Our vastly increased scientific knowledge has since taught us much more about the complexity of life, especially the human brain, and experience has taught us that “cultural engineering” (“spreading freedom” to societies that wish only to bag and bind their women) is a miserable failure.  In his day, however, Weizenbaum’s fears were well-founded.

The second and third points run together, as Weizenbaum expresses his concern about human autonomy.  He accepts that people bond emotionally with machines, since we have always attached ourselves to our tools, and “one would expect man to cathect more intensely to instruments that would couple directly to his own intellectual, cognitive and emotional functions than to machines that merely extend the power of his muscles.”  He is concerned, however, that the machine has become the intermediary between man and the world, that man cannot act in the world without machines.  Moreover, we’ve surrendered our autonomy to “machines that operate for long periods of time entirely on the basis of their own internal realities,” be that the bank’s computers, the nuclear missile system, or Twitter, with panic ensuing when any of these systems breaks down.

Man’s separation from nature began with the first technological intermediary – the clock, on which Weizenbaum quotes Lewis Mumford:

The clouds that could paralyze the sundial…were no longer obstacles to time-keeping…the bells of the clock tower almost defined urban existence.  Time-keeping passed into time-serving and time-accounting and time-rationing.  As this took place, Eternity ceased gradually to serve as the measure and focus of human actions.

This paved the way for a world view in which man ceased eating when he was hungry and working when the sun set, and instead ate at noon and worked till 5.  Weizenbaum argues that this started us on the path of “rejection…of direct experiences,” that we began to, as they’d say in Neoconese, create our own reality.  I have a hard time agreeing with him here; to me, the fact that “experiences of reality had to be representable as numbers” is part of our inborn need to divide and classify those experiences within a framework language can’t provide: say, our desire to know the answer to “how hot is it out there?”  The answers “hot,” “really hot,” and “super hot” are so relative to the speaker as to be useless to the listener, whereas “91 degrees” can mean “pleasant weather” to one person and “oh god crank up the AC” to another.  Without numbers to divide “late afternoon” into 5 o’clock for the night owl and 3 o’clock for the morning person, we’re unable to communicate tangible knowledge in a common framework.

Weizenbaum is good on how the computer became “indispensable,” mostly because our society was engineered on a permanently expansionist basis.  Had computers not come along to make it easy to expand banking into arcane lines of business, for instance, or to set assembly lines to making more cars at an ever-faster rate, the “inability to act” in that fashion “might in some other historical situation have been an incentive for modifying the task to be accomplished, perhaps doing away with it altogether, or for restructuring the human organizations whose inherent limitations were, after all, seen as the root of the trouble.”  Instead, the ease with which computers allowed production to boom postponed the day when we would have to question the underlying ever-expanding production/consumption model, and allowed us to ignore alternatives (e.g., light rail and high-speed rail vs. the automobile).  And, of course, once a massive system is in place and in motion, with so many people and economies invested in it, only catastrophe can stop its momentum.  As Weizenbaum puts it, “A person falling into a manhole is rarely helped by making it possible for him to fall faster or more efficiently.”

However, time has proven him wrong in his setting of computers alongside mass media as one-way channels of communication.  Rather than the isolated, priesthood-managed mainframe of his time, we have the PC, the Internet, “social media,” “crowdsourcing,” the “netroots,” etc.  No doubt in too many cases we’ve still (as in the financial industry) “turned the processing of information on which decisions must be based over to enormously complex computer systems.”  Yet today, we tell the computer what to do more than it tells us.

Weizenbaum discusses language as a “game,” in that it has a set of rules like chess, with some “moves” being legal and others not (e.g., “I am for the making with you of the computer” is illegal, even though a human can extract its meaning from the Palinese).  A computer (as we’ve seen since Weizenbaum’s time) can indeed “play master-class chess,” because the rules are set, the possible moves in any situation, though numerous, are finite, and a database of past games can predict the probable outcome of any move in the current one.  A chess master may work from intuition, from “just knowing” what the next move is, and, as Weizenbaum says, “knows more than he can tell.”
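That point about rule-bound games is easy to make concrete.  Here’s a toy sketch (my own example, using the far simpler game of Nim rather than chess, with invented function names): because the legal moves from any position are finite and enumerable, a program can exhaustively search them and “know” the game completely, which is exactly the kind of knowledge we can “tell” a machine:

```python
# A toy sketch of why rule-bound games yield to computation: in Nim, the
# legal moves from any position are finite and enumerable, so exhaustive
# search can "know" the game completely. Real chess engines add pruning
# and databases of past games, but the principle is the same.
# The game choice and function names here are my own illustration.

from functools import lru_cache

MOVES = (1, 2, 3)  # a player may remove 1, 2, or 3 stones per turn

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win (taking the last stone wins)."""
    if stones == 0:
        return False  # no move available: the previous player took the last stone
    # A position is winning if any legal move leaves the opponent in a losing one.
    return any(not can_win(stones - m) for m in MOVES if m <= stones)

def best_move(stones: int):
    """Return a winning move if one exists, else None."""
    for m in MOVES:
        if m <= stones and not can_win(stones - m):
            return m
    return None

print(can_win(12), best_move(12))  # False None: multiples of 4 are lost positions
print(can_win(10), best_move(10))  # True 2: taking 2 leaves the opponent at 8
```

Nothing like this exhaustive enumeration is available for language, which is the gap Weizenbaum is pointing at.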

But the language problem is one of “knowing”: the human mind chooses what to say next based on a vast realm of knowledge and, more importantly, motive, and we have not yet been able to codify that into “telling” a machine how to do it.  “The question of what we can get a computer to do is, in the final analysis, the question of what we can bring a computer to know.”
