What is AI For?

January 23, 2009

So here’s my working definition of AI:  Any semi-autonomous decision-making software system. 

When people hear the phrase “artificial intelligence,” they often think it can only mean a “computerized person,” a thinking, feeling machine who’s “just like us, only smarter and more powerful.”  At which point all kinds of panic seem to set in, and statements proliferate about Man’s Unique Place in Nature, the threat of Colossus / HAL / Skynet / Bishop becoming self-aware/ruling us/killing us all, Verily Yahweh Shall Smite Thy New Tower of Babel, etc. etc.  If you believe in “The Soul,” then you really have an opportunity to run off the rails and fulminate about intelligence without a conscience and so on, but as an atheist, I don’t feel any need to spend time and energy debating whether an entity can have something that doesn’t exist anyway.  We’re always hearing from religious folk that you can’t have morality without religion, which is absurd on the face of it, and that line of unreasoning seems to be the same basis for thinking that computers will be evil because they don’t have “souls.”

In some blue sky research, the goal really is to duplicate the entire human brain in hardware and software form.  Which, forgive a mere layman, is an absurd quest.  The one guaranteed permanent difference between man and machine is that man’s thinking processes are constantly affected by chemical changes in the brain.  Our ability to reason is constantly being interfered with by changes in our serotonin, dopamine, norepinephrine, blood sugar, etc.  Our brains are three-layered objects, the first, original section being a sort of “lizard brain,” that part of us that operates with the sole mandate of survival, and which is constantly overruling the rest of the brain in fulfillment of that mission.  (The Three Pound Universe is a great book for laymen on the brain – it’s 20 years old now, so it’s missing a lot of recent research, and if anyone knows of a better update I’d love to read it.) 

A computer doesn’t need a fight-or-flight response, nor will it get hungry, or horny.  Most of our brain, in short, is not only not useful when we’re trying to run reasoning processes, it actually gets in the way.  So a computer is never going to be a perfect copy of a human – even if we develop some sort of biologically-based computer (who knows what’s possible, now that we finally have a president who will “restore science to its proper place”), it’s still not going to be “human,” not having inherited our millions of years of layer-over-layer brain development and chemical makeup.

So if the goal of AI is not duplicating humans, then what is it?  To me, the goal of AI is to take the knowledge of many and combine it in a form which can be used by all to make the best decisions possible; the results and experience those decisions produce then let humans increase their own knowledge, which they take back to the knowledge base to improve it, and so on.  AI makes decisions – in a video game, it determines what might happen next to your character.  In an “expert system,” it decides, based on the evidence and rule sets you’ve entered, that you should drill for oil “right here.”  And in a chatbot, it’s making decisions about what to say next based on what you just typed.  So it’s more than a calculator, more than a spreadsheet, and yet it’s also “just” a spreadsheet/database, only vastly more complicated in its design and output.
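
To make “semi-autonomous decision-making” concrete, here’s a minimal sketch of the evidence-plus-rules logic an expert system of that sort runs on. Everything in it (the rules, the weights, the site data) is invented for illustration, not taken from any real system:

```python
# Toy "expert system": the experts' knowledge is encoded as weighted
# rules; the program scores the evidence and makes the call.
# All rules, weights, and site data here are made up for illustration.

RULES = [
    # (test on the evidence, weight added when the test holds)
    (lambda site: site["porous_rock"], 3),
    (lambda site: site["seismic_anomaly"], 2),
    (lambda site: site["nearby_producing_well"], 4),
    (lambda site: site["surface_seep"], 1),
]

def recommend_drilling(site, threshold=5):
    """Combine the rule weights into one drill/don't-drill decision."""
    score = sum(weight for test, weight in RULES if test(site))
    return score >= threshold, score

site = {"porous_rock": True, "seismic_anomaly": True,
        "nearby_producing_well": False, "surface_seep": True}
drill, score = recommend_drilling(site)
print(f"Drill here: {drill} (score {score})")  # Drill here: True (score 6)
```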

A “normal” computer program does exactly what it’s told, and no more.  You can tell a search engine to “find books by Orland Outland,” and it can do that.  I would make the case that even a basic recommendation engine is a form of AI – Amazon’s software looks at my books, goes through its records to find everybody who’s bought them, sees how many stars they’ve given my books, then looks at everything else in their collections (anything they bought or for which they checked the “I Own It” box), and performs mathematical wizardry to see if people who bought and liked Orland Outland also bought and liked, say, Christopher Rice.  Then it goes back to the record of my book and tells people “people who bought this book also bought Christopher Rice,” and adds it to their recommendations list.  (Now, I don’t know if Amazon’s system is that pure or not – I like to think that people who read my books also purchased, well, something besides “gay novels” at some point in their lives, but according to my Amazon page for A Serious Person, all the people who bought that book only read other gay novels.  So I suspect some tinkering on the programmer’s part, under direction from various Marketing demons, to give a certain level of priority to tags attached to a book (GAY!GAY!GAY!), thus distorting the results you’d get from a pure recommendation engine.)
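
The post can only guess at Amazon’s actual algorithm, and so can I, but the “people who bought this also bought” core described above is easy to sketch. Here is a minimal version, with invented purchase histories and the star ratings left out for brevity:

```python
from collections import Counter

# Minimal "people who bought this also bought" counting, as described
# above. The purchase histories are invented, and the real system is
# far more elaborate (and, per the suspicion above, likely hand-tuned).

purchases = {
    "alice": {"A Serious Person", "The Density of Souls", "Middlemarch"},
    "bob":   {"A Serious Person", "The Density of Souls"},
    "carol": {"A Serious Person", "Bleak House"},
    "dave":  {"Middlemarch", "Bleak House"},
}

def also_bought(title, n=3):
    """Count co-purchases among buyers of `title`, most frequent first."""
    counts = Counter()
    for shelf in purchases.values():
        if title in shelf:
            counts.update(shelf - {title})
    return counts.most_common(n)

print(also_bought("A Serious Person"))
# e.g. [('The Density of Souls', 2), ('Middlemarch', 1), ('Bleak House', 1)]
```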

So it’s not just adding numbers; based on those numbers it’s “choosing” another book for you – a number-crunching function, yes, but also a function of taste and sensibility previously held only by people – seasoned critics, librarians, bookshop employees, teachers – who in essence had “recommendation engines” built up in their brains over years of experience.

People start going crazy when programs start doing “human” things – it’s all well and good for machines to crunch numbers, a pure reasoning task that doesn’t seem to bother anyone when it’s automated, but when they start talking, using the first person, and overruling Old Jonesie (who’s been around a hell of a long time and has a nose for these things) about where we’re going to drill for oil, then people get upset.  The feeling is understandable – you spend your life building up a body of knowledge and experience, and then you find that when your expertise is combined with the expertise of others inside the machine, the end result is smarter than you all; nobody gets to be the “top man” in the field anymore, because the program always knows more than any one of you alone.  This may be even truer in the books example above – engineers and geologists and scientists are used to deferring to pure reason as the final arbiter of decisions, whereas humanities experts are more appalled at the idea that “good taste” can be coded and executed more efficiently than a refined aesthetic sensibility can perform the task.

So the main problem people have with AI is that fear of being “replaced.”  One, the machine will be so smart that the company and maybe even the economy won’t need you, and two, it will even do those other, “soft skill” tasks better, too, like choosing your books and music and movies and nursing the sick and elderly.  These are perfectly understandable and reasonable feelings which need to be explored, and, with time, society and people will probably change (barring any religious fundamentalist overthrow of our government and a return to “answers in Genesis” as the sum of the scientific corpus) to see AI as a tool set rather than a threat.  The Luddites smashed the machines because they thought the machines would make people unnecessary, but with each technological breakthrough, people are still needed to run the systems, improve them, and invent new ones, and people become more prosperous because of those improvements rather than less so.  People still manage to stay ahead of technology, and more often than not, people now treat constant advance as something to keep up with rather than resist.  Machines, I’m hazarding, will always need people more than people will need machines.
