Back to Life, Back to Reality
Back to work. I’ve done some thinking and realized that I may have built a Monolith I didn’t need to – this idea that I must explain Caroline’s history, now, before proceeding any further. That nobody will buy her loneliness and isolation without some good reasoning for it. And yet, honestly, haven’t most of us found that when we explain why we’re socially inept, people shake their heads in befuddlement, utter a great “pshaw” and wave their hands as if knocking away a fly, then say “Just get out there! Take a class!” or some such inanity? Aren’t I better off leaving Caroline’s reasons a mystery for now? Not just for my own sake – I’m not willing to dig deep into my own reasons right now – but for the sake of the book? I wish I could remember who said they didn’t write detailed physical descriptions of characters because doing so prevented readers from creating those people in their heads. Let the readers imagine, for now, what’s going on in Caroline’s head. I’ll explain later, when she has a human she can trust to tell it to.
Much reading to do: a big supplement on AI at Forbes.com. The opening article by Kevin Warwick, who should know better, is a rather slick and facile piece on the Turing Test, replete with comic doomsaying about our imminent robot overlords. Isn’t everyone sick of hearing how robots will replace us? Give the people what they want, I guess. There’s also a generic biz-mag “humorous middle-aged crank resents progress” article, in which the author gives his take on robotic dogs, complete with comments such as “Can the iDog serve as a receptacle for white middle class female urban neuroses in the same way that Chihuahuas do?” and worries about iRottweilers et al.
But there’s good stuff here, too. Of more interest: a slide show of the 10 most humanoid robots, and a fascinating article by a (very cute) artist on the use of robots as ethnologists, capturing and preserving vanishing cultural artifacts such as musical traditions, ceremonies and dances. Rather than fearing robotic overlords, artist Aaron Taylor Kuffner offers them a role far from the popular menacing picture:
What happens when we feed machines not just our physical actions or our conversational intelligence to mimic or reference, but the ability to directly connect to something considered divine and sacred? We have seen humanoid robots like Asimo learn to walk and cut apples and robots that appear to be able to think or at least answer any human question in an astute, conversational manner. As we feed machines connections to our ceremonial practices and our roots of divinity, we make them the keepers of sacred traditions.
Also good: an article on the challenge of making AI “creative.” Author Margaret A. Boden lists three types of creativity, defining creativity as that which surprises us – what Robert Hughes called “The Shock of the New”:
The first is combinational creativity, in which one combines familiar ideas in unfamiliar ways–think painted collages, verbal analogies or poetic imagery. The second is exploratory creativity. This explores an accepted style of thinking, following its rules to generate structures–poems, paintings, theories, theorems and so forth–never encountered before. The third is transformational creativity, which produces ideas that were previously thought impossible. The familiar style is transformed by altering one or more of the currently accepted stylistic rules–think of the shift from tonal to atonal music or the change from string-molecules to ring-molecules in chemistry.
All three types have been modeled in AI. Exploratory creativity is possible in computers when the thinking style concerned (music, drawing, architecture and so forth) can be clearly defined. Transformational creativity can be paralleled by evolutionary programs that mutate their own rules (although often it’s a human who selects the "best" results). Combinational creativity has been modeled too–in joke-generating programs, for instance.
To most people’s surprise, it’s the combinational type that’s most difficult for AI. Admittedly, it’s easy to get a computer to combine ideas in new ways: It can simply pick items at random and juxtapose them. The difficulty lies in ensuring that these novel combinations are valuable.
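Boden’s point about combinational creativity can be sketched in a few lines of code. This is purely my own toy illustration, not anything from the article: the concept list and the “value” heuristic are invented stand-ins, and the real difficulty she describes is precisely that no simple heuristic captures which combinations are genuinely valuable.

```python
import itertools
import random

# Hypothetical sketch: combinational creativity as random juxtaposition
# of familiar ideas, followed by a crude "value" filter.
# CONCEPTS and is_valuable() are invented for illustration only.

CONCEPTS = ["clock", "ocean", "violin", "spreadsheet", "lighthouse", "moth"]

def random_combinations(pool, n, seed=0):
    """The easy part: juxtapose random pairs of familiar ideas."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(pool, 2))
    return rng.sample(pairs, n)

def is_valuable(pair):
    """The hard part Boden describes: judging which novel combinations
    are worth keeping. Here, a toy stand-in heuristic (shared letters),
    which is exactly the kind of shortcut that fails in practice."""
    a, b = pair
    return len(set(a) & set(b)) >= 2

novel = random_combinations(CONCEPTS, 5)
keepers = [p for p in novel if is_valuable(p)]
```

Generating `novel` is trivial; everything interesting hides inside `is_valuable`, which is why the combinational type turns out to be the hardest to model.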
Helen Greiner, one of the founders of iRobot (the Roomba people), makes the case that our scientific goal should not be duplicating human intelligence, but rather remembering that robots and AI are tools, and that their intelligences should be designed accordingly.
Human intelligence is powerful, but there are already more than 6 billion human brains on the planet. We have enough human intelligence already, and there is a popular resupply mechanism in place.
That’s only a third of the articles. Off to work now; I’ll cover the rest tomorrow.