
What we talk about when we talk about singularity

June 01, 2008

The Singularity, that much-anticipated moment, or nano-moment, when our once-tractable silicon servants rocket past us, intellectually speaking, in a blur not unlike the one you see when Scotty activates the Enterprise's warp drive on Star Trek, pausing only (we pray) to allow us to virtualize our mental circuitry and upload it into their capacious memory banks (watch for the 2035 launch of Amazon S4: Simple Soul Storage Service), thus achieving a sort of neutered, brain-in-a-jar immortality, yes, that Singularity, that Rapture of the Geeks, as it is known to snarky unbelievers, is the subject of a big stack of articles - all written by humans, alas, but worth reading nonetheless - in a new special issue of IEEE Spectrum.

Vernor Vinge, the original Singularitarian, lays out five scenarios that, in some combination, could give rise to "a posthuman epoch":

The AI Scenario: We create superhuman artificial intelligence (AI) in computers.

The IA Scenario: We enhance human intelligence through human-to-computer interfaces—that is, we achieve intelligence amplification (IA).

The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.

The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.

The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.

"Depending on our inventiveness - and our artifacts' inventiveness - there is the possibility," writes Vinge, "of a transformation comparable to the rise of human intelligence in the biological world. Even if the singularity does not happen, we are going to have to put up with singularity enthusiasms for a long time. Get used to it."

The special issue includes both enthusiasms and skepticisms, sometimes in the same article. Glenn Zorpette, the executive editor of IEEE Spectrum, takes it as a given that "as computers become stupendously powerful" in coming years "life really is going to get more interesting," but he pooh-poohs the suggestion, popularized by Ray Kurzweil, that human immortality will be a byproduct of the Singularity:

Why should a mere journalist question Kurzweil’s conclusion that some of us alive today will live indefinitely? Because we all know it’s wrong. We can sense it in the gaping, take-my-word-for-it extrapolations and the specious reasoning of those who subscribe to this form of the singularity argument. Then, too, there’s the flawed grasp of neuroscience, human physiology, and philosophy. Most of all, we note the willingness of these people to predict fabulous technological advances in a period so conveniently short it offers themselves hope of life everlasting. This has all gone on too long. The emperor isn’t wearing anything, for heaven’s sake.

(But at least he's buff, thanks to all those supplements.)

It may seem a waste of time to debate the contours of a world that, as Vinge says, will be "intrinsically unintelligible to the likes of us." But, hey, you have to do something to pass the time while waiting for Godot 2.0.

My favorite article is the practical-minded "Economics of the Singularity," in which George Mason University economist Robin Hanson sketches out the marketplace of the posthuman epoch. Hanson believes that the best chance for creating an advanced machine intelligence will be through simply "copying the brain":

This approach, known as whole brain emulation, starts with a real human brain, scanned in enough detail to see the exact location and type of each part of each neuron, such as dendrites, axons, and synapses. Then, using models of how each of these neuronal components turns input signals into output signals, you would construct a computer model of this specific brain. With accurate enough models and scans, the final simulation should have the same input-output behavior as the original brain. It would, in a sense, be the “uploaded mind” of whoever served as the template ...

Though it might cost many billions of dollars to build one such machine, the first copy might cost only millions and the millionth copy perhaps thousands or less. Mass production could then supply what has so far been the one factor of production that has remained critically scarce throughout human history: intelligent, highly trained labor.

Once that constraint is removed - and smarts become endlessly abundant - we'll see "the next radical jump in economic growth," where "the world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month." Three factors would spur the explosion in growth:
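Hanson's doubling-time claim is easy to put in perspective with a little arithmetic. The sketch below is hypothetical; only the two doubling times (roughly 15 years today, roughly a month in Hanson's posthuman regime) come from the article, and the thousandfold-growth target is just an illustrative figure:

```python
import math

def doublings_needed(growth_factor):
    """Number of doublings required to multiply output by growth_factor."""
    return math.log2(growth_factor)

def years_to_grow(growth_factor, doubling_time_years):
    """Years to reach growth_factor given a fixed doubling time."""
    return doublings_needed(growth_factor) * doubling_time_years

# A 1000-fold expansion of the world economy takes ~10 doublings (2**10 = 1024).
target = 1000

today = years_to_grow(target, 15)          # current regime: doubles every ~15 years
posthuman = years_to_grow(target, 1 / 12)  # Hanson's regime: doubles every ~month

print(f"{today:.0f} years now vs {posthuman:.1f} years post-singularity")
```

Under these assumptions, a thousandfold expansion that would take about a century and a half at today's rate happens in well under a year once the economy doubles monthly, which is why Hanson treats the shift as a discontinuity rather than an acceleration.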

First, we could create capable machines in much less time than it takes to breed, rear, and educate new human workers. Being able to make and retire machine workers as fast as needed could easily double or quadruple growth rates.

Second, the cost of computing has long been falling much faster than the economy has been growing. When the workforce is largely composed of computers, the cost of making workers will therefore fall at that faster rate, with all that this entails for economic growth.

Third, as the economy begins growing faster, computer usage and the resources devoted to developing computers will also grow faster. And because innovation is faster when more people use and study something, we should expect computer performance to improve even faster than in the past.

For humans forced to compete with the vast machine horde, the prospects would seem to be pretty dim:

The population of smart machines would explode even faster than the economy. So even though total wealth would increase very rapidly, wealth per machine would fall rapidly. If these smart machines are considered “people,” then most people would be machines, and per-person wealth and wages would quickly fall to machine-subsistence levels, which would be far below human-subsistence levels. Salaries would probably be just high enough to cover the rent on a tiny body, a few cubic centimeters of space, the odd spare part, a few watts of energy and heat dumping, and a Net connection.

The diminutive machine-people would cluster like insects in vast urban communities, "with many billions living in the volume of a current skyscraper, paying astronomical rents that would exclude most humans. As emulations of humans, these creatures would do the same sorts of things ... that humans have done for hundreds of thousands of years: form communities and coalitions, fall in love, gossip, argue, make art, commit crimes, get work done, innovate, and have fun."

Hanson doesn't speculate on what will be left for humans to do in this world, but I think the answer probably lies in the machine-people's desire to "have fun." Though lacking human bodies to go along with their human minds, the machine-people will, one assumes, have both phantom limbs and phantom desires. As a result, we can expect that the online porn industry will expand exponentially to the point where it employs, in one capacity or another, all remaining human beings. It won't exactly be heaven on earth, but it sure beats being a brain in a database.

Comments

The diminutive machine-people would cluster like insects in vast urban communities, "with many billions living in the volume of a current skyscraper, paying astronomical rents that would exclude most humans. As emulations of humans, these creatures would do the same sorts of things ... that humans have done for hundreds of thousands of years: form communities and coalitions, fall in love, gossip, argue, make art, commit crimes, get work done, innovate, and have fun."

Like mid-town Manhattan then? :-)

Posted by: Thomas [TypeKey Profile Page] at June 1, 2008 06:22 PM

This talk about singularity reminds me of Von Neumann's claim in the '50s that we would not only be able to predict the weather but to control it. It was just a matter of number crunching. Except the atmosphere turned out to be much more complex than he realized.

I have a feeling that most arguments about Singularity make the same mistake: they assimilate the human brain to a huge electronic circuit and assume Singularity is just about number crunching.

Except that humans - as well as most animals - have a consciousness (others would call it a soul). And nobody knows where it comes from or what it really is. So at the end of the day, is the brain only a big electric circuit, or are complex chemical reactions also a key element of intelligence?

And speaking of intelligence: computers might have seen their power greatly increase over the last decades, but their intelligence is still 0. Not very low. Null. Nada. Zilch. Computers still just read a script (put a zero or a one in a slot, read the value of a slot, and jump to a different part of the script).

Posted by: Laurent [TypeKey Profile Page] at June 1, 2008 09:48 PM

There is no way to make a machine as kludged up as the human brain. We need to make these things want to hurt and torture one second, then make passionate pleas for peace the next. A singularity machine needs to be as kludged as we are, and it's not clear that any amount of science will figure out such a flawed process. How about if we just watch BSG and get drunk instead?

Posted by: MarcFarley [TypeKey Profile Page] at June 1, 2008 11:34 PM

"And speaking of intelligence, computers might have seen their power greatly increase over the last decades, their intelligence is still 0. Not very low. Null. Nada. Zilch. Computers still read a script (put a zero or a one in a slot, read the value of a slot and jump to a different part of the script)"

Yes yes, Laurent, I knew you were going to say that.

Posted by: Conal [TypeKey Profile Page] at June 2, 2008 06:42 AM

Ah, but in BSG the AIs *start* electro-mechanical, then choose to abandon their machine bodies to become biological humans.

Posted by: Thomas [TypeKey Profile Page] at June 2, 2008 07:40 AM

Every time I read anything about artificial intelligence, I’m reminded of Marvin Minsky’s comment that “AI has been brain-dead since the 1970’s”.

I think his words are too kind.

Posted by: Greg Quinn [TypeKey Profile Page] at June 2, 2008 11:11 AM

Oh, it's a-comin', all right. Only, it's not quite...

Let me back up a bit. What few seem to "get", and most who do "get it" don't talk about, is that the whole "field" of "singularity thinking" is a put-on. And not a very original one.

Arthur C. Clarke had it on good authority that a certain, smart, early hominid reasoned carefully and concluded that if only he could find a sufficiently high tree, he'd be able to touch the moon. That, at least, was an honest mistake on the part of that hominid.

In the 1960s, a different set of false extrapolations drawn on anecdotal reports had a significant political effect. LSD, meditation, free love, communal living and anything else sufficiently Utopian and sufficiently frowned upon by The Man was thrown into a rhetorical soup which, it was promised, would convey transcendence upon all and sundry who partook. Why, maybe if we just concentrate real hard, we can levitate the Pentagon (wink, wink). That concoction was brewed by a diverse crew of self-promoters, tripped-out true believers, agitators (of all political affiliations, including The Man his-self), Don Juans (chicks dug it), and subversives (a means to an end for breaking hegemony). Oh, wait, I left out the entrepreneurs who have milked that cash cow ever since (it's like that Beatles song which, I seem to recall, went "All You Need Is Cred").

Fast forward through the style changes, "crack epidemic", Reagan on drugs, and "greed is good" and suddenly the "Yippie / Yuppie Debates" aren't bringing in quite the revenue they once did on the lecture circuit.

And yet, this left the dominant hegemony missing something. After all, that thoroughly neutered 60s babble had at least one real and lasting effect: it politically de-activated large swaths of believers (and to this day). It turned out to be not very subversive at all. It turned out to steer believers away from the protests and, not infrequently, into financial and criminal ruin.

Nauseatingly, you could even then still find the same patter among the Silicon Valley old-blood elites, complete with knowing winks. It became a way of saying, to inferiors, "I don't care to listen to reason, I'm acting by fiat. You need to get some Zen, some Tao, and do penance at a concert by the graying Grateful Dead after which I'm confident you'll appreciate the Wisdom of My Will." And among themselves, it meant, "Hey, who's holding? I haven't gotten high for a while?"

Still, the sparkle was gone. It was disreputable to promote transcendence through drugs and tie-dye and jam sessions. The problem for the Man was how to keep the hypnotic babble but change the markers. Find something else to talk about besides LSD, so to speak.

Hello "singularity"! And how perfect. How perfectly neutering a transcendence-promising babble! It was an innovation:

Here, too, was a vague but exhausting "vision" of an unimaginable near future where everything is strange yet familiar. Whatever problems we may have today don't matter -- don't bother working on them. Work on that future of miracles instead. Don't waste your time questioning your employer or working to influence the government (except as its actions pertain to your stock option values), go work on that new Intel-inside universal remote control, instead, for it is in that direction that you will find immortality.

Today, "singluarity thinking" has devolved a bit into a parlor game. In a room of people yakking about it the game is to distinguish the con-artists, the dupes, and the knowing-wink-play-alongs. It's great sport and, meanwhile, it still gets regular slashdot space, now IEEE space, sells books, sells lectures, attracts potential trading partners to dinner ("Kurtz will be there!"), and dazzles the kids.

A singularity is coming, but it isn't quite a technological one. It's the singularity first posited by the philosophers of the Firesign Theater. It's the moment -- and perhaps it will come at a TED conference or maybe an O'Reilly Foo -- it's the moment that happens shortly after everyone realizes there are no dupes left in the room, just pretenders, and one of them dares to announce "I think we're all Bozos on this bus."

-t

Posted by: Tom Lord [TypeKey Profile Page] at June 2, 2008 11:26 AM

The timeframe of enthusiasts like Kurzweil is probably unrealistic, but it is hard to imagine a future in which biomedical technology does not fundamentally alter human life. The incremental progress is all around us. As the monkey's-arm research covered in last week's NYTimes demonstrates, science is already tapping into the circuitry of the brain. Add to that the cochlear implant, which sends stimuli directly INTO the human nervous system. When this research matures, we'll have a brain with wires going both in and out, which, with a little brain food every day, can experience a simulated reality (Grand Theft Auto, anyone?) into eternity!


Posted by: Alizmo [TypeKey Profile Page] at June 2, 2008 06:51 PM

I also think that comparing this sort of futurism to utopianisms of the past is not fair. Those beliefs were based on sociology and philosophy--fundamentally soft, subjective stuff. But however wide-eyed the speculations of the singularity folks are, they are nonetheless extrapolations based on existing science.

Posted by: Alizmo [TypeKey Profile Page] at June 2, 2008 07:01 PM

Tom, I usually scan the comments on the last few entries on Nick's blog about once a week just to see if you have commented on any, because when you do it's always an interesting and insightful read. Thanks, and keep it up.

Posted by: Sergey Schetinin [TypeKey Profile Page] at June 2, 2008 10:17 PM

Great topic. I went and read several of the IEEE articles. We are clearly missing something fundamental in our understanding of intelligence.

Q: Why do AI researchers want to start with modeling humans when we haven't figured out much simpler animals? I think the answer is that we aren't able to come up with real problems that could validate the fundamentals of intelligence - whatever those fundamentals may be - and, disappointed with that, we conveniently choose the human comparison.

My view is that we won't need to model something as complex as a human brain to first see artificial intelligence in action - there just has to be something simpler. I just find it amazing that this 'something simpler' has evaded us for so long.

As far as being able to create smart machines, let's take an analogy from atomic physics: even though we know the fundamentals of quantum mechanics, there is no formulation that can predict that gold will be yellow given its nuclear and electronic composition - and that's because of sheer complexity.

So I'd say: treat human-level intelligence in a machine as a science-fiction fantasy that may never come true, because the brain is too complex - but the underlying fundamentals, we'll surely understand them someday.

Posted by: Jim Mason [TypeKey Profile Page] at June 3, 2008 12:21 AM

The actuality of the singularity, in regard to the development of a non-human brain, appears to be in question.

Laurent has it right, of course. If reproducing brain power (never mind improving it) is completely out of reach, why are we wasting time with this notion? There can be no question that all this brain stretching comes to a grinding halt as soon as one realizes that the brain is just dead matter without soul or spirit!

So what’s the next joke, soul machines teaching their human robot counterparts about intelligent design?

Alan

Posted by: alan [TypeKey Profile Page] at June 3, 2008 09:57 AM

It astonishes me that anyone can rattle on blithely about any sort of bio-mechanical singularity occurring within any foreseeable future without bothering to address the salient fact that the coming era of human history is going to be defined by resource scarcity: scarcity of non-renewable energy resources, scarcity of materials. Scarcity, in other words, of the stuff on which a singularity would be irreducibly dependent. (And let's add, as in some sense another form of scarcity, or at least another ineluctable form of cost, rapidly escalating climate instability.)

Singularity is a fantasy predicated on the idea that computing and nano-scales (machine or biological) somehow (magically) render the notion of cost obsolete. It's the sort of fantasy that seems to emerge any time a sufficiently revolutionary new set of technologies emerges, which makes it predictable, but under the circumstances (a global environment rapidly and perhaps irreversibly becoming more hostile to human communities) it seems both pathetic and dangerous.

And deeply, deeply intellectually dishonest. If you can only get to utopia by refusing to reckon cost, what's your utopia worth?

Posted by: Michael [TypeKey Profile Page] at June 3, 2008 11:34 AM

You write: Hanson doesn't speculate on what will be left for humans to do in this world, but I think the answer probably lies in the machine-people's desire to "have fun."

How do we know this isn't already the case?

Posted by: alexfiles [TypeKey Profile Page] at June 3, 2008 07:22 PM

Rough Type is:

Written and published by
Nicholas Carr
