The Overlord Fallacy
Per WordPress moderators’ advice, I’ve switched from Word to Windows Live Writer – they’ve made changes that disable cut-and-paste, because some code Word inserts was messing with the site.
However, indented quoted text comes up shaded in gray and italicized, which to me makes it harder to read (I hate black on gray, it’s a dumb combo). Perversely, you can’t un-italicize it just by selecting the text and clicking “I,” or even by using the Font menu. Searching the help file for “indent” returns no results, and “quotes” just tells me how to add a block quote, not how to change this crappy look and feel.
Anyway, today’s thought: It’s interesting how often (which is to say, almost every time) articles about AI and/or robotics conclude with some offhand comment about how, once “intelligent” enough, our robot overlords will take over and rule us with an iron fist, if not kill us all. I should start collecting these to make a stronger case, but I think most people who read about the field know from experience that it’s true.
I think it says more about human nature than it does about AI. Basically, people who really believe what they say about “our robot overlords,” as opposed to those who are just parroting conventional wisdom, must have concluded that anyone or anything intelligent, given opportunity and power, would choose to “rule” others. No doubt the will to power could be coded into an AI – some rule set along the lines of “here’s what’s best for people, act accordingly” – but that’s the work of the programmer, not the program. It’s far more likely that human beings, acting with bad intent, will do what they often do with technology and use it as a tool to make their rule over others more “efficient,” than that a machine will independently “achieve consciousness” and immediately decide to start bossing us around, if not kill us all.
Then again, there may be more atavistic forces at work. If we subconsciously think of AI as a competitor species, we may be treating it the way our ancestors treated strange animals, always asking themselves: predator or prey? If it’s bigger than me and smarter than me, it probably wants to eat me. Just as we distrusted members of other tribes until gifts and handshakes were exchanged, it’ll take some time for AI to be “accepted” as a friendly species.