Time for a links roundup. No progress on the book this last week – computer problems with my home testing project, computer problems at work. Amazing how stress punches out my creativity. But things are back to “normal” this week, so I hope to get chapter three converted to first person and posted some weekday this week, and to move ahead with chapter four this weekend.
In the meantime, here’s what I’ve been reading and saving. A somewhat frustrating article in h+ magazine in which 21 “artificial general intelligence” experts are asked to project future outcomes based on a number of scenarios:
[W]e asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.
The problem with the article’s vagueness is that “The detailed results of our survey will be written up for publication in the scientific literature” – in other words, firewalled inside periodicals that cost $20,000 a year for a subscription. If 17 of the “experts” gave different scenario estimates depending on whether a military, commercial, or open source team is first to develop AGI, I’d like to hear their opinions on which scenario is most likely to happen and which is most likely to result in misuse.
I don’t buy this:
One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.”
I’m with Kasparov that AI is going to be an enhancement to human endeavor – yes, AI can scan the DSMs and PDRs and CFRs and periodic tables and routine libraries far better than we can, making the best selections from existing data, but the real “intellectual work” people do will actually increase in output and quality as the preparatory grunt work we have to do shrinks. What AI can do is turn all of us knowledge workers into executives with a staff of physician assistants / paralegals / researchers / secretaries to whom we can delegate. Librarians weren’t made obsolete by computers – computers just gave them another library to curate, and another tool to curate it with. The death of the typewritten (or handwritten) library card index merely freed librarians from a monotonous, repetitive task. I know there are many who will wax rhapsodic about how the real craftsman needs to build his own hammer, about the great lessons learned from typing that card out yourself, and so on. I’m not one of them.
I do worry about the automation of all physical labor, dividing society into knowledge-working Eloi and Walmart-greeting Morlocks, the Morlocks being tricked and manipulated by wicked Eloi with slogans about “elites” and “socialism” and “the Rapture” into keeping quiet and consenting to their abuse, until they rise up and initiate a second Dark Age.
A couple of literary links. A reworking of The Odyssey is getting a lot of good press; what fascinated me was author Zachary Mason and his path to its creation.
Mr. Mason, 35, a computer scientist specializing in search recommendation systems and keywords, once worked at Amazon.com. He avoided writing workshops and M.F.A. programs as a matter of principle, and produced “The Lost Books” at night, during lunch breaks and on weekends and vacations…
He approaches literature almost as if it were a branch of science, governed by laws that are quantifiable and predictable, as when he talks of devising an algorithm, later discarded, to determine an optimum chapter order for his novel or when he compares writing to the annealing of metals.
“What I’m interested in scientifically is understanding thought with computational precision,” he explained. “I mean, the romantic idea that poetry comes from this deep inarticulable ur-stuff is a nice idea, but I think it is essentially false. I think the mind is articulable and the heart probably knowable. Unless you’re a mystic and believe in a soul, which I don’t, you really don’t have any other conclusion you can reach besides that the mind is literally a computer.”
Mason isn’t as autistic as that sounds – he’s a savvy enough self-marketer to have sent copies of his book to reviewers “inside a custom-made miniature wooden Trojan horse.” I do love this “scientific approach” to the material, especially as it’s a great way to keep my distance from emotions that could derail my progress.
I found another fascinating piece via a Salon review of Elif Batuman’s book on Russian literature. I’d read one part of it in Harper’s, a funny article on her attendance at a Tolstoy conference in Russia (sorry, firewalled). What was really interesting in the Salon article, though, was a link to her dissertation comparing novel-writing to double-entry bookkeeping. Well, I had to read that, or at least the abstract. (She’s funny there too, opening with a bit about “imitatio vs. mimesis” but kindly letting the general reader know that “this is theory but it will be over soon.”) In a nutshell, it’s about how the first-person narrator, when used to tell someone else’s story, functions as a “division of labor.” The experience of the hero is required for there to be a story, but the labor of the narrator / servant / sidekick is necessary to bring it to the page. Don Quixote is too fast and too busy living his crazy, interesting life to tell his own story; Sancho Panza is too dull to tell his own, and the same could be said of Johnson and Boswell, or the saints and their chroniclers. The main character “gets” experience and the narrator “spends” it writing the story.
Caroline is writing her own life, but Alex is the hero, the center of the novel, the only character who might literally live forever. Without Alex, Caroline has no plot. So this article has been helpful in making me think about who the novel is really about, and how I’m going to frame it from here on out.