
My First Review

August 17, 2013

And it’s a good one, from the Czech Republic.  I’d thank the author, but I can’t find an email link on the page.  I’d wondered why my blog was getting a steady amount of traffic from there, and now the click-thru link from this site explains it.  Google Translate seems to do a decent job of getting the gist of it:

If I had to summarize the book somehow: it's a 182-page technothriller in which a chatbot in the style of Eliza, IQ Pokyd, or ALICE plays the central role.
The story follows Caroline, a lonely teenage girl who gets a somewhat unusual but well-paid job from her almost-friend Christopher: helping to improve the chatbot project he secretly works on. Her job is to "chat" with the bot, and Caroline quickly realizes that this is no ordinary, limited conversational program, but an advanced artificial intelligence that soon becomes her only friend.
The story gains momentum when the program is sold to a corporation that starts using it for advertising, and Caroline, having built a long relationship with the test versions, has to confront the question of friendship with people.
The book is very well written, and a certain maturity shows in the author's style, the kind a writer only reaches after a few books. Thanks to the psychological side of the story, which takes up questions of friendship and loneliness, it will appeal even to people who aren't much into sci-fi / technothrillers.
I was pleasantly surprised by the technical level: it's clear the author did their homework and is fairly well versed in current technologies and what is actually available on the market. It was interesting to see the large number of references to recent events, such as the events surrounding Wikileaks or the death of Aaron Swartz.

The book is still free on Smashwords using coupon WH35N through August 24.  All I ask is that you leave a review *somewhere*, please.  I’m dyin’ here.

The Wallace Effect II

June 29, 2013

About a year into this project (three plus years ago), I wrote a post about a program called "StatsMonkey," which wrote sports stories based on the game's stats – and didn't do a half bad job of it.  I realized then that I needed to get crackin' on the book because at that rate, real AIs were going to come along before my fictional one.  Like Darwin seeing Alfred Russel Wallace's work, and realizing that Origin of Species had better not sit in a drawer much longer, it was do or die time.

Well, long after that, I finally finished the book.  And not, it appears, a minute too soon.  A viable form of the book’s commercial version of Alex just hit the streets.

Netflix has unveiled a talking, human-like interface named “Max,” who greets subscribers with a game-show host’s exuberance and invites them to play irreverent games, such as “Max’s Mystery Call” and “Celebrity Mood Ring.” It seeks to inject humor into the analytical process of offering viewing suggestions — and to further differentiate the service from competitors such as Hulu.

… “People have wide tastes,” said Todd Yellin, Netflix’s vice president of product innovation. “When they talk to Max, he finds out what they’re in the mood for.”

…“Our job, as the peanut butter to Netflix’s chocolate, was to bring all of the numbers of the recommendation algorithm to life, layering on top of it a voice, a personality,” said Jellyvision Founder and Chief Executive Harry Gottlieb. “A huge part of the effort was about creating a character, with Max, who is not your typical movie reviewer on ‘Good Morning America.’ It’s more like a friend talking to you.”

… “Before, these were boxes that did jobs for you. Now they are really entities with a personality. Mobile devices and tablets are becoming so popular, these kinds of attributes are becoming more and more important,” said Abeer Alwan, a professor of electrical engineering at UCLA. “I think there is more acceptance and more admiration for what these machines can do.”

Writers scripted thousands of lines of dialogue – sentence fragments that could be stitched together dynamically, so Max appears to be conversing with the Netflix subscriber and reacting to what’s happening on the TV screen.

“Max seems aware of what you did last session with Netflix, or the time of day, saying, ‘Here’s a lunchtime suggestion,’ ” Yellin said. “I think part of the magic is going to be using creativity and technology to expand these branches and make them feel more sentient.”
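The stitching technique Yellin describes can be sketched as a simple template system: writers supply short pre-written fragments keyed by context, and the program glues them together at runtime so the result sounds conversational. This is a minimal hypothetical sketch, not Netflix's actual implementation; all the fragment names and wording here are made up.

```python
import random

# Hypothetical fragment library: writers script many short pieces,
# keyed by the context the program knows about (time of day, etc.).
FRAGMENTS = {
    "greeting": ["Hey there!", "Welcome back!"],
    "context": {
        "lunchtime": "Here's a lunchtime suggestion:",
        "evening": "Settling in for the night? Try this:",
    },
    "pitch": [
        "You seemed to like thrillers last time.",
        "People with your taste loved this one.",
    ],
}

def stitch(context: str, rng: random.Random) -> str:
    """Assemble one 'spoken' line by stitching canned fragments together."""
    parts = [
        rng.choice(FRAGMENTS["greeting"]),
        FRAGMENTS["context"][context],
        rng.choice(FRAGMENTS["pitch"]),
    ]
    return " ".join(parts)

print(stitch("lunchtime", random.Random(0)))
```

With enough fragments and enough context keys, the combinations multiply fast, which is how a few thousand scripted lines can feel like open-ended conversation.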

Yeah.  Wow.  So I was right on target.  Now all I have to do is get someone to read the book…

I’ve sold two copies so far, well, three if you count the one I sold myself.  I’ve started putting out some hopeful emails (BoingBoing, Irene Pepperberg, and Jaron Lanier so far), but I’m not sure what else I can do.  That is, what I can do.  I didn’t spend the last four years cultivating Twitter followers or building a vast social network or becoming a BMOC at Goodreads or, yeah, anything other than working a job and writing. 

So the book may fail, may sink under the waves.  But I’ve made a sort of peace with that, having realized last night that, you know?  The person who could write this book is not the person who could sell it.  If I had that skill set to be out there in the world with a never-ending cataract of social contacts…I never would have understood the appeal of Alex, never would have got into Caroline’s head, and the book wouldn’t have been what it is.  So it goes.

How the Wheel Turns, or, I Told You So

June 24, 2013

Four years ago I started writing about The Orderly March, what I called the religion of Transcriptarianism – the idea that success in the meritocracy depended on being perfect on paper, never deviating from the life script set out for you from the moment you got into the “right” preschool.  (Search for “The Disorderly March” at this blog to get most of the posts on this subject.) The most egregious case of this was Google, in the form of this quote from a New York Times article on Marissa Mayer, at that time one of the Supreme Googlers:

At a recent personnel meeting, she homes in on grade-point averages and SAT scores to narrow a list of candidates, many having graduated from Ivy League schools, whom she wanted to meet as part of a program to foster in-house talent. In essence, math is used to solve a human problem: How do you predict whether an employee has the potential for success?

A scrum of executives sit around a table, laptops in front of them, as they sort through résumés, college transcripts and quarterly reviews. The conversation is unemotional, at times a little brutal.

One candidate got a C in macroeconomics. “That’s troubling to me,” Ms. Mayer says. “Good students are good at all things.”

But now, four years later, Google has different ideas:

“One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation,” Bock said. “Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything. What’s interesting is the proportion of people without any college education at Google has increased over time as well. So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.”

That was a pretty remarkable insight, and I asked Bock to elaborate.

“After two or three years, your ability to perform at Google is completely unrelated to how you performed when you were in school, because the skills you required in college are very different,” he said. “You’re also fundamentally a different person. You learn and grow, you think about things differently. Another reason is that I think academic environments are artificial environments. People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment. One of my own frustrations when I was in college and grad school is that you knew the professor was looking for a specific answer. You could figure that out, but it’s much more interesting to solve problems where there isn’t an obvious answer. You want people who like figuring out stuff where there is no obvious answer.”

That’s what I said! 

It’s not really the idea that the best people are the people who are good at everything – it’s the somewhat absurd idea that people who have done everything by the book all their lives, grade-grubbed their way to the top by doing anything and everything it took to match themselves to a template of “excellence” that makes them all essentially cookie-cutter copies of each other, are somehow the people who are going to give us “disruptive innovations…”

And this:

I think this [avoiding classes with “hard graders”] shows another facet of the corruption of a real education that comes with Transcriptarianism.  In the abstract, that philosophy presumes that you got a 4.0 because you, using your native genius and zest for hard work, mastered every required and elective subject, thereby creating you, the Renaissance Person any company must desire.  In reality, however, it creates a system that discourages risk. You well know that you are whip-smart, and yet, and yet, there are so many other whip-smarties out there.  You can take that hard class, and risk the stain of a B or (prepare to fall on your sword) a C, but what then?  A Transcriptarian will see your C and say, tsk tsk, and pick that other whip-smartie over you.  So no matter how ready or willing or able you are to take up the challenge of a hard class, run by a hard grader, you don’t dare.  The competition is too fierce to take the outside track on the racecourse when the inside track is available.

I suppose in its most fundamentalist form, Transcriptarianism doesn’t want to hire anyone who could ever take a hard class and fail it, only those who sail effortlessly through every “challenge.”  But that excludes risk-takers from the pool of potential employees – and as we’ve seen at Google (the Vatican of Transcriptarianism) with its recent, failed offerings in the field of phones and social networking, you end up taking risks without knowing it, because your certainty of success at everything you touch becomes your downfall.  A risk-taker has to have a little humility, and a little uncertainty about their ability to create a successful outcome, to be able to weigh the factors involved before setting out on a new venture.  A risk-taker who has (horrors!) failed at something, who got a C in Art History, knows that not every episode in life is an orderly march through a series of successfully executed decisions.

And then there’s the (remember when this was the point?) benefit of the other kind of knowledge gained by taking the class: a risk-taker who gets a C still knows more about a subject than someone who avoided the class to keep that perfect GPA.

It’s good to be right.

Finished. And…Published.

June 16, 2013

Up at Smashwords:

And Amazon: to come.  The iStore, well, eventually.  Draft2Digital is a great resource for getting to Apple, but Apple does drag its feet about putting stuff up from 3rd-party platforms.

I don’t know what I should feel, but I know what I do feel.  Not much, at the moment.  Glad it’s done.  Now comes the gnawing anxiety over whether or not it’ll fly.  Trust me, after all these books, I know my internal script – the well-rehearsed Fresh Air interview, the pithy one liners for my 92nd Street Y lecture, the plans for my new life in Richistan, etc. etc.  Followed by the dim dull realization that nope.  Not this time either. 

But, the good news is, the failures of “Lion” and “Dark” have inoculated me to some degree.  Yeah, I’ll still spend a few days compulsively checking sales figures (one already on Smashwords!  So not a total failure) before moving on.

Because that’s what finishing is really about at this point – moving on, putting this one to bed so that I can start something else without feeling that Monolith behind me, saying, “but you haven’t finished ME yet, you giver-upper, you non-finisher!”

Is it a great book? No.  Is it a good book? Yep.  Is it a finished book?  HEEAALLL YEAH.  After 4.5 years, done doney McDoneington. 

Copyedit Countdown

June 15, 2013

2/3 of the way through copyedit.  Tried to put up an EPUB of the first three chapters but no go, WP won’t allow them without modifications to your code; I have to modify the thing that does the thing and the Interweb is telling me to go to the function that doesn’t exist to update the file I can’t find.  So, a PDF.

Edit Complete

June 13, 2013

Now it’s time for copyedit mode.  I did have a tiny little tears-of-joy moment this a.m., the one I’d expected when I wrote THE END and didn’t get.  I suppose that’s because now it really is done, it’s not something I’m looking back at and saying, o crap this is all wrong, which was a possibility when I’d written the last word.

I need to hire someone to do the “please love me” part that comes next, that’s like twenty brazillion times harder for me than actually writing a novel. 

Edit Mode

June 12, 2013

Truing up the philosophical conversations between Nick and Caroline around Alex, and otherwise polishing up the story.  One more chapter to go there and then copyedit this weekend, then…publication.  It’s a Dickensian pace I’m setting myself. 

Jaron Lanier may be the other person I can send the book to.  He had an article in the Times about the digital economy, in line with what I’m saying about Alex.  He talks about “Siren Servers,” the supermassive databases used by finance and insurance companies among others to hoover up data about you that you often give willingly:

A hip trope holds that privacy is passé, but the loss of one’s privacy to a Siren Server means more than the loss of one’s credit card or Social Security number to a petty online thief. An ordinary person’s choices in music, friends, purchases, reading material and travels in the course of the day are just some of the streams of data that feed into algorithms that compare and correlate the activities of everyone being spied upon.

The motivation for the omni-ogling is that it leads to effective behavioral models of people. These models are far from perfect, but are good enough to predict and manipulate people gradually, over time, shaping tastes and consumption in more effective and insidious ways than even subliminal advertisements do.

MANIPULATION might take the form of paid links appearing in free online services, an automatically personalized pitch for a candidate in an election or perfectly targeted offers of credit. While people are rarely forced to accept the influence of Siren Servers in any particular case, on a broad statistical basis it becomes impossible for a population to do anything but acquiesce over time. This is why companies like Google are so valuable. While no particular Google ad is guaranteed to work, the overall Google ad scheme by definition must work, because of the laws of statistics. Superior computation lets a Siren Server enjoy the magical benefits of reliably manipulating others even though no hand is forced.

Yep, that’s Alex.
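Lanier's point about "the laws of statistics" can be illustrated numerically. Suppose, purely hypothetically, that targeting lifts a user's purchase probability from 1.0% to 1.2%: any single ad impression almost never converts, yet over a million impressions the lift is essentially guaranteed to show up in aggregate. The numbers below are invented for illustration.

```python
import random

def conversions(p: float, n: int, seed: int) -> int:
    """Count conversions in n independent impressions, each with probability p."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n))

n = 1_000_000
baseline = conversions(0.010, n, seed=1)   # untargeted impressions
targeted = conversions(0.012, n, seed=2)   # hypothetical 0.2-point lift

# No individual impression is a sure thing, but at this scale the
# targeted pool reliably out-converts the baseline pool.
print(baseline, targeted, targeted > baseline)
```

That gap between "no particular ad is guaranteed to work" and "the overall scheme must work" is exactly the asymmetry a Siren Server gets paid for.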

Also in Sunday’s paper, an article by Jonathan Safran Foer, who decries how technology drives us apart.  Irony alert:  He starts the article with a story of a girl he sees crying on the sidewalk.

A COUPLE of weeks ago, I saw a stranger crying in public. I was in Brooklyn’s Fort Greene neighborhood, waiting to meet a friend for breakfast. I arrived at the restaurant a few minutes early and was sitting on the bench outside, scrolling through my contact list. A girl, maybe 15 years old, was sitting on the bench opposite me, crying into her phone. I heard her say, “I know, I know, I know” over and over.

What did she know? Had she done something wrong? Was she being comforted? And then she said, “Mama, I know,” and the tears came harder.

What was her mother telling her? Never to stay out all night again? That everybody fails? Is it possible that no one was on the other end of the call, and that the girl was merely rehearsing a difficult conversation?

“Mama, I know,” she said, and hung up, placing her phone on her lap.

I was faced with a choice: I could interject myself into her life, or I could respect the boundaries between us. Intervening might make her feel worse, or be inappropriate. But then, it might ease her pain, or be helpful in some straightforward logistical way. An affluent neighborhood at the beginning of the day is not the same as a dangerous one as night is falling. And I was me, and not someone else. There was a lot of human computing to be done.

Then he goes on about how technology makes it easier to ignore the needs of others…and never does say whether he talked to her or not.  So I guess the answer is that he didn’t, preferring instead to go home and write this article about how he could have if technology hadn’t made us less empathetic.