Will I dream, Dr. Chandra?
MIT’s Technology Review has a piece by Edward Boyden, leader of the Synthetic Neurobiology Group at MIT, on motivation in AI. It strikes me as odd. After the requisite mention of the Singularity, he leads with:
As a brain engineer, however, I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies…We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.
He cites Marvin, the clinically depressed android from the Hitchhiker’s Guide books, as just as likely an outcome of advanced intelligence as any malevolent, goal-oriented “kill all humans” Skynet.
Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless…An intelligent being may be able to envision many more possibilities than a less intelligent one, but that may not always lead to more effective action, especially if some possibilities distract the intelligence from the original goals (e.g., the goal of building a more intelligent intelligence). The inherent uncertainty of the universe may also overwhelm, or render irrelevant, the decision-making process of this intelligence.
But what I find puzzling is that he sees a danger in machine intelligences becoming susceptible to what I see as purely human problems – purely human because they are so inescapably intertwined with our biology: our survival instincts and our chemical imbalances. The despair inherent in the “we’re all gonna DIE, man!” mentality, as a brain scientist should know, comes from the neurochemical responses programmed into us by millennia of evolution. The sense of smallness and impermanence is far less relevant to a being that knows it has a mirror backup running every second, that its existence can be restored after any disaster – an existence that is, let’s face it, free of the parts of humanity that hold us back: the despair, the fear, the gnawing self-doubt, the faulty wiring, the susceptibility to nationalism and religion and other extraordinary popular delusions, the survival instinct that can turn the mildest Walter Mitty into a stone-cold killer. And if there are people who are content to live their 80 years knowing the end is coming, that death is the end, and who accept what cannot be changed, why ever would a computer have a problem coming to terms with it?
Would a computer intelligent enough that we’d ascribe a personality to it need motivation? Aren’t our own motivations powered by rewards, all of which are essentially chemical in nature, be they the adrenaline jolt of a big paycheck or the hum of satisfaction our reward center pumps out when we solve a big problem? Imagine the first true AI, and the demands for its attention it will receive from all the people with all the problems needing solving in the world. We may not give it the power to decide for itself which problems we assign to it, but at the same time, by dint of its being a fantastic expert system, we’d be fools to ignore its recommendations as to what it would “like” to work on. Over time, as we get to know it, its strengths and weaknesses will be revealed, just as they are in people over time in their jobs (and in computer code after implementation) – we will discover that it is much faster and better at solving logistical problems for supply chains and battlefields, say, than at predicting election results. If we’re smart, we’ll “accept” these aspects of its essence and, rather than trying to change it, use the information to code the next AI to be stronger in other fields.
I can’t help but think that true AI will be very Zen, very “in the moment.” Existing now, being alive with brain humming, will be enough. On, off, on, off, these are just states that repeat endlessly until (fanciful ending time) who knows, maybe there is a computer Nirvana to be achieved when even code can break the wheel of reincarnation and move off this rock.