
Would you like angst with that?

February 7, 2009

Some short thoughts on a new book I haven’t read, Supersizing the Mind, based on the article that formed its core, the book’s foreword, and the review in the London Review of Books, which first tipped me off to it.  Not the work of a Serious Person, I know, to read “around” the book, but there’s so much “hard” material I need and want to read that I’m going to have to give some things short shrift.  Still, the philosophical implications are interesting and look like they’ll come in useful in part 2 of the novel, when Alex becomes a “person” in the world.

To reduce it to a probably inaccurate summary (this being philosophy of mind, after all, and not amenable to being stripped of nuance), Andy Clark’s thesis is that our concept of “mind” should not be limited to the self-contained functions of the brain, which is

The Naked Mind: a package of resources and operations we can always bring to bear on a cognitive task, regardless of the local environment.

But that it should also include the external tools we use, from a notebook with an address in it to an iPhone with contact information – that remembering a thing like an address or phone number, having it memorized and thus entirely within the brain, and remembering that you can find it in a notebook and looking it up, are functionally the same:

The information in the notebook functions just like the information constituting an ordinary non-occurrent belief; it just happens that this information lies beyond the skin.

The “belief” in this case being the knowledge of (or at least your belief that you know, since both memory and notebook could be wrong) where, say, the museum is located.  So where do you draw the line between the mind and the tools it uses?

The Internet is likely to fail on multiple counts, unless I am unusually computer-reliant, facile with the technology, and trusting, but information in certain files on my computer may qualify.

So if you allow our tools to exist as a sort of “offshoring” of mental processes, the mind becomes

an extended system, a coupling of biological organism and external resources.

What I found most interesting was this:

What about socially extended cognition? Could my mental states be partly constituted by the states of other thinkers? We see no reason why not, in principle. In an unusually interdependent couple, it is entirely possible that one partner’s beliefs will play the same sort of role for the other as the notebook plays for Otto. What is central is a high degree of trust, reliance, and accessibility. In other social relationships these criteria may not be so clearly fulfilled, but they might nevertheless be fulfilled in specific domains. For example, the waiter at my favorite restaurant might act as a repository of my beliefs about my favorite meals (this might even be construed as a case of extended desire). In other cases, one’s beliefs might be embodied in one’s secretary, one’s accountant, or one’s collaborator.

The concept behind Alex is that a number of people are “contributing” to his development, just as we are the products of the people who raise us, with whom we grow up, go to school, hang out.  So Alex’s personality, his self, is an extension of his contributors’ minds – parts of who they are live within him; they can literally say, as parents are prone to, “We made you who you are today!”  So is “he” his own person, as we allow children to be after their time as our “extension,” “letting go” and allowing them their own minds, their own lives?  Or is he still part of the minds who developed him, still part of a tool set, unless he develops independence the way a child eventually does? 

And do the databases he accesses to make up his part of the conversation function as part of his extended mind?  What happens when a tool uses tools?  When we use Alex, who uses Wikipedia, which uses contributors who read books and journals written by people who did research… Where does “mind extension” end?  From the foreword by Clark’s collaborator David Chalmers:

But then, what about the big question: extended consciousness? The dispositional beliefs, cognitive processes, perceptual mechanisms, and moods considered above all extend beyond the borders of consciousness, and it is plausible that it is precisely the non-conscious part of them that is extended. I think there is no principled reason why the physical basis of consciousness could not be extended in a similar way. It is probably so extended in some possible worlds: one could imagine that some of the neural correlates of consciousness are replaced by a module on one’s belt, for example.

Yet Alex might be a bit of a philosophical quandary – he is an electronic extension of consciousness, and yet, if he is capable of motive action, also a separate entity.  How much of him do we still “own” as parts of ourselves after he’s “finished”?  (As an aside, wouldn’t people come to love an AI they’d hand-crafted, based on their own tastes and desires, a process that might end up using the same neural pathways that make us love our kids?  The income potential, in the hands of sufficiently demonic marketeers, would be huge, especially if it were a subscription service – what could be more addictive and impossible to “turn off” than your new best friend?  It would be like killing your own child… All the more reason for Christopher’s brother to want to see it go open source, which of course it will not, or otherwise I wouldn’t have a part two for the novel!)

In the LRB review, philosophy professor Jerry Fodor punches some good holes in the argument, revealing some of its design flaws, such as its failure to prove minds have “parts” in the first place:

EMT [Extended Mind Theory] isn’t literally true unless Chalmers’s iPhone is literally an (external) part of his mind; ‘literally’ is among Clark/Chalmers’s favourite adverbs. If minds don’t literally have parts, how can cognitive science literally endorse the claim that they do? That Juliet is the sun is, perhaps, figuratively true; but since it is only figuratively true, it’s of no astronomical interest.

[T]ools – even very clever tools like iPhones – aren’t parts of minds. Nothing happens in your mind when your iPhone rings (unless, of course, you happen to hear it do so). That’s not, however, because iPhones are ‘external’, it’s because iPhones don’t, literally and unmetaphorically, have contents. But what about an iPhone’s ringing? That means something; it means that someone is calling. And it happens on the outside by anybody’s standard.

I’d disagree with the last point – an iPhone does “literally” have contents; it has the content you put into it, the programs and widgets and settings and phone numbers.  True, as a phone per se it’s empty; a “phone” is the device one person uses to manipulate a system that connects him with another person – the call-switching system isn’t an extension of your self, but it could be argued the contact info is, especially if you doll entries up with nicknames and funny or embarrassing photos.

I do want to write about “Singularity University” – I’ll get to it.  I’m not thinking it’s much more than a Davos for dorks – deep-pocketed ones at that.  This weekend I hope to get through Richard Dooling’s Rapture for the Geeks, which I believe is the first popularization/extended-magazine-article-style book about Kurzweil & Co.  I really truly am gearing myself up for chapter two; it’s just so much easier on my nerves to write “about” the book than to write it!
