Oh no! Robots! Yay!


“This future man, whom the scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.” –Hannah Arendt, 1958

“Human beings are ashamed to have been born instead of made.” –Günther Anders, 1956

Now that we’ve branded every consumer good with a computer chip “smart,” the inevitable next step is for robots to start thinking big thoughts, turn us into their menials, and mind-meld into a higher form of life, or lifeyness. Or so we’re told by an (oddly enthusiastic) chorus of putatively rational doomsayers. Forget dirty bombs, climate change, and rogue microbes. AI is now the greatest existential threat to humanity.

Pardon me for yawning. The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

Rob Walker interviewed me about The Glass Cage for Yahoo Tech, and we touched on this topic:

You don’t spend much time on the idea that the march of artificial intelligence is “summoning the demon” that will destroy humanity, as Elon Musk recently worried aloud. And he’s not the only smart person to frame the issue in apocalyptic, sci-fi terms; it’s become an almost trendy fear. What do you make of that?

It’s probably overblown. All those apocalyptic AI fears are based on an assumption that computers will achieve consciousness, or at least some form of self-awareness. But we have yet to see any evidence of that happening, and because we don’t even know how our own minds achieve consciousness, we have no reliable idea of how to go about building self-aware machines.

There seem to be two theories about how computers will attain consciousness. The first is that computers will gain so much speed and so many connections that consciousness will somehow magically “emerge” from their operations. The second is that we’ll be able to replicate the neuronal structure of our own brains in software, creating an artificial mind.

Now, it’s possible that one of those approaches might work, but there’s no rational reason to assume they’ll work. They’re shots in the dark. Even if we’re able to construct a complete software model of a human brain — and that itself is far from a given — we can’t assume that it will actually function the way a brain functions. The mind may be more than a data-processing system, or at least more than one that can be transferred from biological components to manufactured ones.

The people who expect a “singularity” of machine consciousness to happen in the near future — whether it’s Elon Musk or Ray Kurzweil or whoever — are basing their arguments on faith, not reason. I’d argue that the real threat to humanity is our own misguided tendency to put the interests of technology ahead of the interests of people and other living things.

You can read the whole interview here.

Image: still from Andrei Tarkovsky’s Solaris.

5 thoughts on “Oh no! Robots! Yay!”

  1. Linux Guru

    Honestly, considering the anti-competitive business practices of Microsoft, Oracle, Intel, and Google, does anyone think AI capable of taking over the world will ever be developed? All the great minds left tech over a decade ago. No one takes it seriously anymore. Of course, this reminds me of J.F. Sebastian, the android designer who “makes” new friends:

  2. Brian

    The existential threat comes from mere capabilities, not full-blown consciousness (something only humans would spare a conscious thought on).

    Automation is just replacement. What happens when, through sheer imposition of superior complementary physical characteristics (speed, strength, agility, endurance, focus, etc.), the machines as machines (not pseudohuman constructs) command respect, recognition…deference?

    A rocket is fast, but can it play the violin? An aircraft carrier is strong, but can it graffiti a bridge? Factory robots are agile, but can they get up, walk out the door, and stop the rocket or carrier? Threatening machines combine existing extraordinary capabilities long observed in other, more single-purpose machines.

    Then: Relegation, not mere replacement. Physical threat, not mere economic irritant.

    The Terminator(s) did pretty well without consciousness.
