Computer Power and Human Reason (conclusion)
In sum, Weizenbaum’s arguments hold up against a certain school of thought – the one that believes judges and psychiatrists could be handily replaced with artificial intelligences. But this feels like a straw man thirty years later; we have learned the hard way that technocratic solutions to human problems, imposed top-down, almost always fail, be they the regime changes of our last administration or the forced sterilization program of Indira Gandhi’s India, which was underway around the time this book was published. It’s Weizenbaum’s criticisms not of technology but of our uses and abuses of it that hold up after thirty years: his portrait of the “compulsive programmer” still paints a true picture of your average “otaku”; his (and, via quotation, Minsky’s) assertion that our programs become increasingly unknowable to any single programmer the more complex and amended they become; his assertion that our “right brain,” or intuitive side, is what makes us superior decision makers to any pure “reasoning” machine (two years before Drawing on the Right Side of the Brain introduced “right brain” thinking to the general public).
We can’t criticize anyone from the past for failing to see the future, especially in the technology realm. Weizenbaum was forward-thinking in acknowledging that AI could come to reproduce a number of “human” decision-making processes, though he sometimes loses sight of the possible, as when he acknowledges that it is “possible, in principle” to build a chess grandmaster AI, but dismisses the idea since calculating all potential games and moves “would take eons to complete on the fastest computers imaginable.” He was wrong in seeing computers as “instrument[s] for the destruction of history,” a case he makes based on the discarding of information in “incompatible formats.” In fact, rather than annihilating “history, memory itself,” the creation of “‘data’ that are in ‘one standard format’ and that ‘can easily be told to the machine’” has actually preserved history: it has spread copies of once-obscure documents to hard drives in thousands if not millions of far-flung places, rescued out-of-print books from oblivion, and saved the memories of witnesses to history that might otherwise have moldered and crumbled in their families’ attics. And, as noted in part 1 of this review, the “one-way” media stream he abhorred has been exploded by social media, YouTube, you name it.
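On its own terms, Weizenbaum’s “eons” remark was arithmetically sound, and a quick back-of-the-envelope calculation shows why. The sketch below is my own illustration, not the book’s; the branching factor and game length are the rough figures from Claude Shannon’s classic estimate, and the machine speed is a hypothetical, generously fast exascale computer:

```python
import math

# Back-of-the-envelope cost of playing out every possible chess game.
# Figures follow Shannon's rough estimate; the machine speed is a
# hypothetical, idealized exaflop-class computer.

BRANCHING_FACTOR = 35        # ~legal moves available in a typical position
GAME_LENGTH_PLIES = 80       # ~half-moves in a typical game

# Number of distinct move sequences to examine if nothing is pruned.
game_tree_size = BRANCHING_FACTOR ** GAME_LENGTH_PLIES

POSITIONS_PER_SECOND = 1e18          # one position per "flop", idealized
AGE_OF_UNIVERSE_SECONDS = 4.35e17    # ~13.8 billion years

seconds_needed = game_tree_size / POSITIONS_PER_SECOND
universe_ages = seconds_needed / AGE_OF_UNIVERSE_SECONDS

print(f"game tree: ~10^{math.log10(game_tree_size):.0f} move sequences")
print(f"time to enumerate: ~10^{math.log10(universe_ages):.0f} ages of the universe")
```

The point is that even a trillionfold improvement in hardware shaves only twelve off an exponent of well over a hundred. Weizenbaum was right that brute-force enumeration is hopeless; where he lost sight of the possible is that later chess engines never attempted it, winning instead by searching a tiny, heuristically pruned sliver of the tree.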
Where Weizenbaum was accurate then, but mercifully wrong now, was in his take on people and their relation to science – the growth in the public’s grasp of scientific knowledge over the last thirty years is remarkable to think about. He is openly disdainful of a public that accepts science as its forebears accepted magic, with no understanding of, and no interest in, its mechanics. Back then, without a doubt, a shot of a man in a white coat endorsing the low-tar cigarette was reason enough to believe; but in the meantime the gap between “us” and “them” has narrowed significantly – not just because the priesthood has been tarnished by those among it who sold out to corporate interests (insisting global warming was a hoax at the behest of Exxon; fabricating clinical trial results at the behest of Big Pharma), but because the layman’s interest in, and access to, scientific data has increased vastly. How well would a book about the history of zero or the neuroscience behind great art have fared thirty years ago?
So how do his philosophical statements hold up? In his summary, he states that “there are two kinds of computer applications that either ought not to be undertaken at all, or, if they are contemplated, should be approached with utmost caution.” In the first group, the kind he calls “simply obscene,” he includes the proposal “that an animal’s visual system and brain be coupled to computers” – presumably because he objects to the damage done to the animal. In this group he also puts “all projects that propose to substitute a computer system for a human function that involves interpersonal respect, understanding and love…respect, understanding and love are not technical problems.” Of course, creating a virtual model of such a system is what my book is all about, so I disagree – not with the argument that emotions can’t be outsourced to computers, but with the argument that no artificial intelligence can take the place of human counseling. From experience, most patients can vouch that a good model of a good doctor is far preferable to a real but apathetic, ineffective, or actively harmful one. At the very least, the artificial therapist or physician can be counted on to “First, do no harm,” to bring no prejudices, and never to dash through five-minute billing-mill visits. No doubt there is no artificial substitute for the warm and nurturing relationship one develops with a great doctor; but while you are kissing frogs to find one, it would help to have at least a competent baseline AI to sustain you. In fact, I’d argue that the baseline of competency established by a well-functioning AI could be the litmus test we use to weed out the incompetent teachers, doctors, and other placeholders protected by their own kind and by administrative inertia.
His second set of objections holds more weight, as it focuses on computer programs “which can easily be seen to have irreversible and not entirely foreseeable side effects.” He cites computer speech recognition as such a problem – and, eerily, foreshadows our own national-security state:
Such listening machines, could they be made, will make monitoring of voice communication very much easier than it now is. Perhaps the only reason that there is very little government monitoring of telephone conversations in many countries of the world is that such surveillance takes so much manpower…speech recognition machines could delete all ‘uninteresting’ conversations and present transcripts of only the remaining ones to their masters.
Which, unfortunately, is exactly where we are now. As P. W. Singer noted in Wired for War, many technologists don’t focus on the end uses of their technologies, only on the “coolness” of the problem to be solved and its various solutions.
Maybe the real “singularity” has already been reached:
Until recently society could always meet the unwanted and dangerous effects of its new inventions by, in a sense, reorganizing itself to undo or to minimize these effects. The density of cities could be reduced by geographically expanding the city. An individual could avoid the terrible effects of the industrial revolution in England by moving to America. And America could escape many of the consequences of the increasing power of military weapons by retreating behind its two oceanic moats. But those days are gone. The scientist and the technologist can no longer avoid the responsibility for what he does by appealing to the infinite powers of society to transform itself in response to new realities and to heal the wounds he inflicts on it.
Scientists must take responsibility for their creations, accept that their creations aren’t just “elegant solutions” to mathematical problems but affect other human beings in powerful ways every day. If, thirty years after this book, we still don’t have “truly safe automobiles, decent television, decent housing for everyone, or comfortable, safe and widely distributed mass transit,” it’s because “people have chosen to make and to have just exactly the things we have made and do have,” the SUVs and McMansions and financial instruments we chose so recently. In a memorable passage, Weizenbaum takes the example of commercials:
It is hard, when one sees a particularly offensive television commercial, to imagine that adult human beings sometime and somewhere sat around a table and decided to construct exactly that commercial and to have it broadcast hundreds of times. But that is what happens. These things are not the products of anonymous forces. They are the products of groups of men who have agreed among themselves that this pollution of the consciousness of the people serves their purposes.
It’s this call that survives the test of time, that outlives the anachronisms and contemporary anxieties of 1977 – this idea that we are responsible for what we make and do, that we cannot shrug our Rumsfeldian shoulders and say “stuff happens.” Like voice recognition, AI will bring curses as well as blessings: a software agent powerful enough to function as a therapist, for example, could also be used as an interrogator. The book has served me most in this fashion, reminding me that great new technologies are always both used and misused, and I’ll write part 2 accordingly.