More on the AI roundup at Forbes.com. A shockingly loosely reasoned and written article by the “President of the Singularity Institute,” whose previous credits include “founder and chief strategist at SirGroovy.com, an online music licensing firm” – i.e. marketing bullshit guy. A good deal about the animal kingdom and the differences between animals, and then off about how while making a robot dog doesn’t prepare you for making a robot horse, somehow making a robot human prepares you to make, in the same year you achieve a “capable robotic servant” (2060 is the number he throws out; isn’t such a late date Singulariblasphemy?), “robots [that] would probably match the achievements of Einstein, Shakespeare and Beethoven.” Huh? Then at the end he discusses the imminence of AI as a given and says that “we will only get one shot at this,” so make sure you code Don’t Be Evil into the first real AI’s autoexec.bat. Because, after all, if we get it wrong our robotic overlords blah blah blah.
The problem with “technical” (to stretch the definition here) material in a non-technical magazine is that the editors don’t have enough of a grasp of the subject to call bullshit on a writer. Someone surely should have done so on this one.
An article on “superintelligence” in AI posits that “Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are.” Right, and we should have closed the patent office in 1843 because there was nothing left to invent (apocryphal story, but instructive and entertaining nonetheless). Solving massive raw-data problems is not inventing, let alone creating, both of which computers have yet to accomplish.
David Gelernter has a very good piece on “theoretical AI,” and the nature of human thought and the “cognitive spectrum”:
Many researchers define thought as "problem solving," which is wrong. Sometimes your mind is in fact focused on some problem and working toward a solution analytically or–more typically–on the basis of recollected experience. But at other times you are looking out a window and letting memories and observations drift through your mind like slow-moving clouds. This too is a kind of thinking, though it is not problem-solving–not consciously directed at any goal…At the upper, analytical end, each thought is logically related to the one right before it in your train of thought. As you approach mind-drifting relaxation, each thought is suggested by the previous one, not logically but in free-association style. Emotion plays only a minor role, or none at all, in analytical thought, but it grows increasingly prominent as you move through mind-drifting into daydreaming and (at last) to actual dreaming. As you move down-spectrum you dim the lights, so to speak, and emotions emerge like the stars after sunset.
The ability to reflect, incorporate “soft” or anecdotal data, and employ “counterfactual” statements in coming to conclusions is another piece of intelligence, according to Judea Pearl, which we’ll need to figure out how to code if we’re really going to have machine intelligence that’s useful in human problems.
On the pure, raw data end of the argument, Peter Norvig makes the case for “unsupervised machine learning,” in which supermassive amounts of data in and of themselves allow a program to make correct decisions:
Consider the task of building a program to correct spelling errors. A supervised approach would start with a dictionary of known words and have a teacher instruct the program on each correct word. But new words are coined all the time, and standard dictionaries don’t yet know "Wii" or "iPhone." An unsupervised approach would have the computer read billions of Web pages and sort the correctly spelled new terms from the typos. This approach sounds like chaos, but in fact it works quite well and is used in practice by search engines today.
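The idea in the quote above can be sketched in a few lines. This is a toy illustration, not Norvig’s actual implementation or any search engine’s production system: a tiny inline corpus stands in for the “billions of Web pages,” word frequencies learned from it replace the hand-built dictionary, and a typo is corrected to the most frequent known word within one edit.

```python
# Toy sketch of unsupervised spelling correction: learn word frequencies
# from raw text (no dictionary, no teacher), then correct a typo to the
# most frequent nearby word. The CORPUS here is a stand-in for web-scale data.
from collections import Counter
import re

CORPUS = """
the new wii console and the iphone were announced today
reviews of the wii praised the controller
the iphone browser works well and the wii does too
"""

# "Unsupervised" step: frequencies come from the data itself, so newly
# coined words like "wii" and "iphone" are learned automatically.
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the highest-frequency known word within one edit, else word itself."""
    if word in WORDS:
        return word
    candidates = [w for w in edits1(word) if w in WORDS]
    return max(candidates, key=WORDS.get) if candidates else word

print(correct("wii"))    # learned from the corpus, so left alone
print(correct("ipone"))  # one edit from the learned word "iphone"
```

The point of the sketch is the one Norvig makes: nothing here “knows” what a Wii is; sorting correct spellings from typos falls out of raw frequency statistics alone.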
Which is not, Lee Gomes points out, a result to be confused with intelligence:
Google’s computers know nothing about, say, "the causes of the Civil War," certainly not in the way an AI researcher from the 1960s would have tried to make a computer "know" about the subject. But Google’s machines are unparalleled at fishing out Web pages that contain that string of 22 letters–then making some excellent guesses about which of those pages might be the most useful, such as those pages linked to from university history departments.
The rest of the articles, in the “Living with AI” section, tomorrow.