The last invention

The human race’s “last invention,” wrote the British mathematician and original Singularitarian Jack Good in 1965, would be a machine that’s smarter than we are. This “ultra-intelligent machine,” by dint of its ability to create even smarter machines, would, “unquestionably,” ignite an “intelligence explosion” that would provide us with innumerable new inventions, improving our lives in unimaginable ways and, indeed, assuring our survival. We’d be able to kick back and enjoy the technological largesse, fat and happy and, one imagines, immortal. We’d never again go bald or forget where we put the car keys.

That’s assuming, Good threw in, as a quick aside, that “the machine is docile enough to tell us how to keep it under control.” If the machine turned out to be an ornery mofo, then the shit would hit the fan, existentially speaking. We’d end up as pets, or as renewable energy sources. But Good didn’t consider the dark scenario likely. If we could develop an artificial intelligence and set it loose in the world, the future would be bright.

Nearly fifty years have gone by and, though we’re certainly fat, we’re not particularly happy and we’re not at all immortal. Keys are mislaid. Hair falls out. Graves are dug and filled.

Worst of all, we’ve lost our optimism about the benevolence of that ultra-intelligent machine that we still like to think we’re going to build. The Singularity is nearer than ever — 40 years out, right? — and the prospect of its arrival fills us not with joy but with dread. Given our record in such things, it’s hard for us to imagine that the ultra-intelligent machine we design is going to be polite and well-mannered and solicitous — an ultra-intelligent Mary Poppins. No, it’s going to be an ultra-intelligent Snidely Whiplash.

So it comes as a relief to hear that Cambridge University is setting up a Centre for the Study of Existential Risk, to be helmed by the distinguished philosopher Huw Price, the distinguished astrophysicist Martin Rees, and the distinguished programmer Jaan Tallinn, one of the developers of Kazaa and Skype. The CSER will be dedicated to examining and ameliorating “extinction-level risks to our species,” particularly those arising from an AI-fueled Singularity.

In a recent article, Price and Tallinn explained why we should be worried about an intelligence explosion:

We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition – the list goes on. Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect! …

It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs, such as artificial intelligences. … The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.

At this point in reading the article, my spirits actually began to brighten. An indifferent AI seemed less worrisome than a hostile one. After all, the universe is indifferent to us, and we’re doing okay. But then came the kicker:

just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

This threw me back into a funk — and it made me even more eager to see the Centre for the Study of Existential Risk up and running. Until, that is, I began to think a little more about those gorillas. If they haven’t had any luck in influencing a species of superior intelligence with whom they share an evolutionary history (that would be us), isn’t it a little silly to think that we’ll have any chance of influencing an intelligence beyond our own, particularly if that intelligence is indifferent to us and even, from an evolutionary standpoint, alien to us? My mood blackened.

Writing in the shadow of the bomb, Jack Good began his 1965 paper with this sentence: “The survival of man depends on the early construction of an ultra-intelligent machine.” We no longer fear the present as much as Good did, but neither are we able to muster as much confidence in our ability to shape the future to our benefit. As computers have become more common, more familiar, we’ve lost our faith in them. They’ve turned from Existential Hope to Existential Risk. When we imagine our last invention — the end of human progress — we sense not our deliverance but our demise. That may actually say more about what’s changed in us than what’s changed about the future.

11 thoughts on “The last invention”

  1. procrustes

    The danger is in thinking of intelligence as just some brute-force “mechanical” demonstration. Intelligence isn’t just facial recognition or winning at chess. We can have wisdom, practical know-how, technical knowledge, ethical judgement, political and social graces, artistic creativity, play, care, humour, and a bucket full of other virtues that make up intelligence.

    We wonder and question.

  2. Sriram Narayan

    Would a higher intelligence need to be conscious? I think so, and if so, these projections rest on the folly of assuming that, in due course, consciousness will emerge out of AI algorithms. But this doesn’t reduce the threat of human extinction. We are doing well on our own.

  3. Seth Finkelstein

    I think it’s more likely that we’ll build a war-machine that doesn’t deactivate and ends up wiping out the human race. As a species, we’ve already reached the point of being able to destroy civilization as we know it with nuclear weapons. I definitely agree with you about “That may actually say more about what’s changed in us than what’s changed about the future.”

    I mentioned Asimov’s thinking in the earlier thread, but again I recommend it. e.g. the introduction to “The Rest Of The Robots”.

    http://www.rogerclarke.com/SOS/Asimov.html

    “Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot – robots were created and destroyed their creator … I quickly grew tired of this dull hundred-times-told tale …. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge?”

  4. yvest

    “In the struggle with the English mechanistic dumbing down of the world, Hegel and Schopenhauer (along with Goethe) were unanimous—both of these hostile fraternal geniuses in philosophy, who moved away from each other towards opposite poles of the German spirit and in the process wronged each other, as only brothers can. What’s lacking in England, and what has always been missing, that’s something that semi-actor and rhetorician Carlyle understood well enough, the tasteless muddle-headed Carlyle, who tried to conceal under his passionate grimaces what he understood about himself, that is, what was lacking in Carlyle—a real power of spirituality, a real profundity of spiritual insight, in short, philosophy. It is characteristic of such an unphilosophical race that it clings strongly to Christianity. They need its discipline to develop their “moralizing” and humanizing. The Englishman is more gloomy, more sensual, stronger willed, and more brutal than the German—he is also for that very reason, as the more vulgar of the two, more pious than the German. He is even more in need of Christianity. For more refined nostrils this same English Christianity has still a lingering and truly English smell of spleen and alcoholic dissipation, against which it is used for good reasons as a medicinal remedy—that is, the more delicate poison against the coarser one. Among crude people, a subtler poisoning is, in fact, already progress, a step towards spiritualization. The crudity and peasant seriousness of the English are still most tolerably disguised or, stated more precisely, interpreted and given new meaning, by the language of Christian gestures and by prayers and singing psalms. And for those drunken and dissolute cattle who in earlier times learned to make moral grunts under the influence of Methodism and more recently once again as the “Salvation Army,” a twitch of repentance may really be, relatively speaking, the highest achievement of “humanity” to which they can be raised: that much we can, in all fairness, concede.”

    http://records.viu.ca/~johnstoi/nietzsche/beyondgoodandevil8.htm

  5. Chet

    I guess I don’t immediately understand what resources you envision artificial intelligences competing with humans for. They’re made of the eighth-most abundant element in the universe and need only power. I suspect any AI community that felt it had to compete with humans for anything would simply decamp to the Moon, or a Lagrange point, or any number of other places we just can’t go. The “Machine War” scenario is just laughably poorly thought out, and if that’s what’s on the plate for the Centre for the Study of Existential Risk, and not climate change or antibiotic resistance, then it just proves what a stupid endeavor the whole thing is.

  6. Petar

    “They’re made of the eighth-most abundant element in the universe and need only power.”

    That is not the point. The point is that they will be able to reproduce instantly, at will, and they will. Even if an extra copy of an AI requires hilariously few resources (say, one-millionth of what a human needs in space, energy, etc.), they will destroy us because we will get in their way.
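
    To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The one-millionth per-copy figure is Petar’s; the doubling-per-cycle replication rate and the world-population figure are illustrative assumptions, not anything from the thread.

        # Even if one AI copy consumes only a millionth of a human's
        # resource share, doubling replication swamps that tiny factor.
        HUMANS = 7_000_000_000          # rough world population (assumed)
        COPY_FRACTION = 1 / 1_000_000   # resources per copy, in "human shares"

        copies, doublings = 1, 0
        while copies * COPY_FRACTION < HUMANS:  # until copies out-consume humanity
            copies *= 2
            doublings += 1

        print(doublings, copies)   # 53 doublings, about 9e15 copies

    Some fifty doublings separate a single copy from out-consuming the whole species; making each copy a thousand times cheaper buys only about ten more doublings.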

  7. Casey

    1943, C. S. Lewis, The Abolition of Man: “Man’s conquest of Nature turns out, in the moment of its consummation, to be Nature’s conquest of Man. Every victory we seemed to win has led us, step by step, to this conclusion. All Nature’s apparent reverses have been but tactical withdrawals. We thought we were beating her back when she was luring us on. What looked to us like hands held up in surrender was really the opening of arms to enfold us for ever. If the fully planned and conditioned world comes into existence, Nature will be troubled no more by the restive species that rose in revolt against her so many millions of years ago, will be vexed no longer by its chatter of truth and mercy and beauty and happiness. Ferum victorem cepit: and if the eugenics are efficient enough there will be no second revolt, but all snug beneath the Conditioners, and the Conditioners beneath her, till the moon falls or the sun grows cold.”

  8. Robert Young

    Even a passive hyper-intelligent bunch of machines suffers from the same problem as financial services: monopolizing wealth unto themselves. Which is to say, distribution has been the problem since the beginning of time. With the rise of industrialization, wealth concentration rises too. The benefits of industrialization, especially since 2000, have accrued to the few. I’m old enough to remember stories in “Popular Science” and “Popular Mechanics” about how we would have so much more leisure time, thanks to automation. For the few, sure. For most, not so much.

  9. Mark Pontin

    Sriram Narayan wrote: ‘Would a higher intelligence need to be conscious? I think so ….’

    That’s anthropomorphic thinking for you. It ain’t necessarily so.

    Go look at modern neuroscience and it’s fairly clear that consciousness is nature’s hack or kluge to get around the practical binding problem. Briefly, there’s literally no way via electrochemical synaptic signalling alone that all the neuronal modules in a biological brain can be bound together and interact to create an organism that moves efficiently and effectively through its environment in real time. There’s an old Steve Martin movie, All of Me, where Martin and Lily Tomlin share the same body and try to go in two different directions — it’d be like that, only worse.

    Still, obviously, we do move effectively through our environment (mostly). So some binding mechanism is necessary for that, and the current best candidate may be 40 Hz thalamocortical oscillations, as proposed by Rodolfo Llinas, Francis Crick, and others —
    http://en.wikipedia.org/wiki/Recurrent_thalamo-cortical_resonance

    The theory is that these binding thalamocortical oscillations produce the illusion we call consciousness. But if not this particular mechanism, there has to be something. (Unless you want to return to the classical religious notion of animating souls.)

    Point is: this kind of hack or kluge to bind the whole system together is only required with biological neuronal wetware (a protein substrate), which is immensely slow. With the speeds possible for electronic computation on a silicon substrate — or, in fifty years’ time, with photonic computing (doable, for example, with arrays of lasers creating holograms that modulate holograms) — no such hack or kluge is necessary.

    In other words, computers don’t need consciousness any more than fishes need bicycles.

    Petar wrote: ‘The point is that they will be able to reproduce instantly at will and they will do that.’

    Again, that’s your anthropomorphic perspective. We are biologically programmed to reproduce. Intelligent machines don’t have to be so programmed.

    That said, one might notionally program a super-AI to solve some problem so serious that it decides it needs extra computational power to do so, and therefore rebuilds the matter of Earth (and ourselves) into more processors.

    There’s an old Stanislaw Lem story where such a machine does something like that, turning all the people on a planet into coasters that it then arranges according to some intricate scheme of its own across the planetary surface.
