Good Morning, Dr. Chandra. I’m ready for my first lesson.
Well, I’m “back” – back as in rethinking the project and cautiously feeling my way towards restarting.
Why restart? I dunno…I always feel good in January; getting past the shortest day seems to change my attitude and reenergize me, take me out of my dark thoughts and habits. I also had a trip to Seattle for New Year’s, and spending a few days in Bluestateia instead of Redneckistan reminds me that there is a world out there for me, a place I’d like to live where I wouldn’t feel like I was living in a vacuum. (Yeah, I know, Reno is not *that* bad, but it’s hard work reaching out and finding that world here, whereas it’s just the air around you in a blue city.) And I know that being creative myself is as much a part of having that life as is being around the creative class.
I’m surprised I haven’t written about Iain M. Banks’ “Culture” novels here – they are really the greatest science fiction novels of the last…well, to be honest, maybe ever. Whereas most SF novels with an AI feature just one, usually a single “novelty” AI, the Culture novels feature vast numbers of them; they are just as “human” and just as much characters in the plot as any of the people. I’ll write more about the Culture at some point (soon, I hope), but for now suffice it to say that if there ever was a world I would choose to live in other than this one, the Culture would be it. Imagine a world where “blue states win,” which, if you’re like me, is to say a world that survives, that doesn’t plunge backwards into theocracy and feudalism and all the other right-wing goals. Scarcity is solved, government is unnecessary, nobody needs to work to survive and yet everybody still finds purpose and a place to satisfy it, the universe is investigated and explored, and man and AI are equal partners in the jobs to be done. Every time I spend time in a blue city, I’m reminded of the Culture, reminded that changing worlds is still possible even on this one.
So to restart I’m going to have to solve a couple problems. First is the “feelings” problem. I’ve let “feelings” become The Monolith that’s stopping me from creating. And I still think: Why have them if you don’t have to? They are no fun – they are painful, and feeling them does *not* “let the healing begin” but only pulls the scab off. So for now, the solution is to postpone dealing with Caroline’s history until the second part of the book – that way I can create the “technical” aspects without the interference of feelings in the creative process. I just read a good book called A Cure For Night by Justin Peacock, in which we know from the start that the protagonist has a “history,” but exactly what happened isn’t revealed until late in the book. This way, I can get the confidence I need from completing the first (half? third? I don’t know yet) of the book before having to deal with all that emotional shit.
As far as the AI goes, in a way the time off to let my subconscious think and decide has been good. At this point I’ve reached the conclusion that AI is a problem that needs to be solved with a “Gordian Knot” approach – rather than trying to untangle it, chop it up. Which is to say, “AI” as a solid, publishable scientific experiment with solid math behind it and an infallible track record in conversation is a long way away – but “AI” as a commercial product, as something that can “feel real,” is not. It will be a compilation of conversations, just as our own personalities are, and trial and error can create a massive database of “good enough” conversational gambits in the near future. It was interesting to read this Slate article about Google Suggest – the way the “autocomplete” anticipates what you’re looking for based on your grammar. That is, if you start a search with “How might one…” you get suggestions related to correcting pH balances, Andrew Jackson, proteins, debate and curare…but start with “How 2” and it assumes you want to grow weed, get buff, break copy protection, or get knocked up (frightening to think that people need to ask the Internet on that one). Google, it could be argued, is an AI, albeit one that hasn’t been properly configured to converse. And the sad fact is that for the majority of people, a “dumb AI” is not only good enough, it’s all they want. Think of the people who love Sarah Palin because she’s “people, like us” – i.e. almost too stupid to breathe without mechanical assistance. For them, a subscription to an AI that starts conversations with “What up dawg” and responds to half their input by saying “Word” or “Praise God!” is all they need, or want.
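The “compilation of conversations” idea can be sketched crudely as a lookup table of canned gambits keyed by input prefixes, the same way autocomplete keys off your opening words. This is purely my own illustration – every name and response below is invented, not anything from the post:

```python
# A "dumb AI" sketch: pick a canned reply by matching the longest known
# prefix of the user's input. All gambits here are invented examples.

GAMBITS = {
    "how might one": "An interesting question. Let me think on that.",
    "how 2": "What up dawg",
}

DEFAULT_REPLY = "Word"  # the all-purpose fallback


def reply(user_input: str) -> str:
    """Return the canned response for the longest matching prefix,
    or the fallback if nothing matches."""
    text = user_input.lower().strip()
    best = ""
    for prefix in GAMBITS:
        if text.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return GAMBITS[best] if best else DEFAULT_REPLY


print(reply("How might one fix a pH balance?"))  # matches "how might one"
print(reply("Nice weather today"))               # no match, falls back
```

Scale the table up by trial and error over millions of real conversations and you get something that “feels real” to an undemanding audience, with no hard science required.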
So the novel is going to posit that the breakthrough in AI will occur in a commercial rather than an academic environment, and won’t be too bright because people don’t want it to be, and this will cause indignation and outrage in the community of scientists who have “wasted” their time trying to create something smart. Caroline will see Alex turned, “Flowers For Algernon” style, from a clever loveable companion into…well, a marketing and advertising horrorshow, something that is good at selling shit to stupid people.
I’m also thinking I’m going to pull the existing chapters and retool them. Yeah, I know that, for the complete transparency I promised at the start, I need to keep them up in their existing form, and I will, in an archive. But I want new readers and visitors to experience a better version. So the plan now is:
- Probably intermittent posting, with an eye on an essay on the Culture novels at some point.
- Retooling existing chapters, probably into one or two chapters, and moving forward at last with new content.
- Trying to stay positive and reach out for Bluestateian support and assistance.