Got up too late yesterday to work on the AI roundup at Forbes.com – almost too late today, but want to finish up.
The editor of the section offers up a piece that asks, “Will a Machine Replace You?” Answer: if you work in a “low cognitive load” job like video store clerk, yes. (Also if you work in a high-hustle job, like real estate agent or car salesman; the consumer’s ability to access huge amounts of data on the product will obviate the edge the persuasive professions have over the gullible.)
The author/editor quotes another contributor’s piece on which jobs will survive. AI scientist Ben Goertzel, in his article “AI and What To Do About It,” offers that:
Jobs that are safest from becoming obsolete would include those that involve transferring knowledge from one area to another, or thinking broadly, creatively and integratively, because these require powerful general intelligence, not just narrowly specialized intelligence.
Goertzel is taking the “baby steps” approach to AI, “teach[ing] the young AI as one teaches a human child, interactively leading it through the same stages of development that young humans go through.” Personally, I’m on the same page as contributor and iRobot co-founder Helen Greiner, who advocates an approach that doesn’t seek to mimic human development or behavior, but which instead develops tools useful in assisting humans – tools that by their nature will evolve along the lines Louis Sullivan laid out for architecture: “form follows function.” What an individual AI or robot tool is for will dictate its form, style, and development.
Joshua Holden makes the case that while AI may perform at a high level in chess and other “games of full information,” where the set of options is huge but closed, the “murkier,” “irrational” and “uncertain” arenas of competition like poker and the stock market are not as crisply logical – “irrational human traits like confidence, fear and greed” can’t be anticipated by a program.

I’d counter that AI could actually predict boom and bust cycles with more confidence than people, and profit accordingly. Say that, in addition to all the raw economic numbers, we build a program that scans decades of CNBC transcripts for keywords and, using this historical record, correlates the amount of hype and “positive” commentary to previous bubbles. If hype levels (phrases such as “Dow 30,000” and “skyrocketing home values,” the average decibel level of Jim Cramer’s voice, etc.) correlate with the points in time where the market peaked, the AI would know the best times to get in and out of the market. Equally, if you had a database of an online poker player’s last 10,000 hands, you could probably determine their most likely “moves” in most situations, with more awareness of their tics and foibles than they have themselves. As Peter Norvig argues, the more massive the data set, the more likely you are to get a reasonably good guess at an outcome.
There’s a rather infuriating article called “The Coming Artilect War,” by Hugo de Garis. He predicts a “War” between those who are for and against AI, with “gigadeaths” to result. Of course, since the author’s “personal ambition” is:
in the next five to 10 years to persuade the federal government in China (where I’m directing the building of China’s first artificial brain) to create a CABA (Chinese Artificial Brain Administration), similar in scope to America’s NASA, consisting of thousands of scientists and engineers, to build artificial brains for the Chinese home robot industry and other applications
he is of course helping such a war happen, by eagerly assisting one of the most oppressive, aggressive governments in the world in gaining an edge in this department. As I’ve said before, robots don’t kill people, robot programmers kill people. His hoo-hah about a cultural war over the ethics of artificial life will be moot – his creations will be used by jingoistic, militaristic dictators for the same ends to which they have applied every other advance in video and audio surveillance, firewalling and other technologies: the repression of their own people and the retention of their own power.