The human race’s “last invention,” wrote the British mathematician and original Singularitarian Jack Good in 1965, would be a machine that’s smarter than we are. This “ultra-intelligent machine,” by dint of its ability to create even smarter machines, would, “unquestionably,” ignite an “intelligence explosion” that would provide us with innumerable new inventions, improving our lives in unimaginable ways and, indeed, assuring our survival. We’d be able to kick back and enjoy the technological largesse, fat and happy and, one imagines, immortal. We’d never again go bald or forget where we put the car keys.
That’s assuming, Good threw in, as a quick aside, that “the machine is docile enough to tell us how to keep it under control.” If the machine turned out to be an ornery mofo, then the shit would hit the fan, existentially speaking. We’d end up pets, or renewable energy sources. But the dark scenario wasn’t one that Good considered likely. If we could develop an artificial intelligence and set it loose in the world, the future would be bright.
Nearly fifty years have gone by and, though we’re certainly fat, we’re not particularly happy and we’re not at all immortal. Keys are mislaid. Hair falls out. Graves are dug and filled.
Worst of all, we’ve lost our optimism about the benevolence of that ultra-intelligent machine that we still like to think we’re going to build. The Singularity is nearer than ever — 40 years out, right? — and the prospect of its arrival fills us not with joy but with dread. Given our record in such things, it’s hard for us to imagine that the ultra-intelligent machine we design is going to be polite and well-mannered and solicitous — an ultra-intelligent Mary Poppins. No, it’s going to be an ultra-intelligent Snidely Whiplash.
So it comes as a relief to hear that Cambridge University is setting up a Centre for the Study of Existential Risk, to be helmed by the distinguished philosopher Huw Price, the distinguished astrophysicist Martin Rees, and the distinguished programmer Jaan Tallinn, one of the developers of Kazaa and Skype. The CSER will be dedicated to examining and ameliorating “extinction-level risks to our species,” particularly those arising from an AI-fueled Singularity.
In a recent article, Price and Tallinn explained why we should be worried about an intelligence explosion:
We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition – the list goes on. Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect! …
It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs, such as artificial intelligences. … The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.
At this point in reading the article, my spirits actually began to brighten. An indifferent AI seemed less worrisome than a hostile one. After all, the universe is indifferent to us, and we’re doing okay. But then came the kicker:
just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.
This threw me back into a funk — and it made me even more eager to see the Centre for the Study of Existential Risk up and running. Until, that is, I began to think a little more about those gorillas. If they haven’t had any luck in influencing a species of superior intelligence with whom they share an evolutionary history (that would be us), isn’t it a little silly to think that we’ll have any chance of influencing an intelligence beyond our own, particularly if that intelligence is indifferent to us and even, from an evolutionary standpoint, alien to us? My mood blackened.
Writing in the shadow of the bomb, Jack Good began his 1965 paper with this sentence: “The survival of man depends on the early construction of an ultra-intelligent machine.” We no longer fear the present as much as Good did, but neither are we able to muster as much confidence in our ability to shape the future to our benefit. As computers have become more common, more familiar, we’ve lost our faith in them. They’ve turned from Existential Hope to Existential Risk. When we imagine our last invention — the end of human progress — we sense not our deliverance but our demise. That may actually say more about what’s changed in us than what’s changed about the future.