David Golumbia, author of The Cultural Logic of Computation, describes how the seemingly immaculate materialism of the Singularitarians masks a dualistic view of the mind and the body that would make Descartes proud:
There is a radical, deeply unscientific Cartesianism in singulatarians: they believe mind is special stuff, different from body, despite their apparent overt commitment to a fully materialistic, scientific conception of the world.
This neo-Cartesian conception of the mind predates the Singularitarians, of course. It’s wrapped up in a view of the brain as a computing machine whose logic and data can be abstracted from its physical manifestation. The computer scientist Danny Hillis voiced this view, in stark terms, back in 1992 in an interview with the Whole Earth Review. He said of human beings:
We’re a symbiotic relationship between two essentially different kinds of things. We’re the metabolic thing, which is the monkey that walks around, and we’re the intelligent thing, which is a set of ideas and culture. And those two things have coevolved together, because they helped each other. But they’re fundamentally different things. What’s valuable about us, what’s good about humans, is the idea thing. It’s not the animal thing.
The human consists of that which can be digitized and that which cannot, the logic (or mind) on the one hand and the metabolic machinery (or body) on the other, and these are fundamentally, essentially different things. Mind has no particular dependency on body, at least no more than the program has on the particular computer on which it runs.
What’s striking about the neo-Cartesian, or digital dualist, view is how it manages the neat trick of incorporating both extreme humanism and extreme misanthropy. Since what’s “good” about us is what’s not “the animal thing,” we are given a superior position to all the other animals with whom we share the earth, they being the mere “monkeys that walk around.” This sense of our unique specialness is combined with a deeply misanthropic hatred for the human body, which, by linking us back to mere animals, prevents us from fulfilling the immortal destiny of pure intelligence. “If I can go into a new body and last for 10,000 years,” said Hillis, “I would do it in an instant.” This view is, needless to say, very close to certain religious conceptions of the body and the soul, though what it lacks is any attempt to put a brake on hubris.
In his critique of the Singularitarian dualism, Golumbia draws a useful distinction between “intelligence” and “mind”:
The use of the term “intelligence” in the fields of AI/Cognitive Science as coterminous with “mind” has always been a red herring. The problems with AI have never been about intelligence: it is obviously the case that machines have become much more intelligent than we are, if we define “intelligence” in the most usual ways: ability to do mathematics, or to access specific pieces of information, or to process complex logical constructions. But they do not have minds–or at least not human minds, or anything much like them. We don’t even have a good, total description of what “mind” is, although both philosophy and some forms of Buddhist thought have good approximations available. Despite singulatarian insistence, we certainly don’t know how to describe “mind” outside of/separately from our bodies.
This is why the Singularitarian program is ultimately fated to fail: the mind is as much the monkey that walks around as it is the “intelligence” that can be abstracted and processed digitally. That gives Golumbia little comfort, however, because he sees the potential for an enormous amount of destruction in the unfettered pursuit of the Singularitarians’ warped humanistic/misanthropic goal—even if that goal is never reached.
Many of the most advanced technologists in corporate America for some reason adhere to this deeply unscientific piece of [dualist] dogma, and pursue unbridled technological progress and the automation of everything because they ‘know’ (following Kurzweil) that it is leading to transcendence — instead of believing the evidence of their own eyes, that it is leading someplace very dark indeed, especially when we reject out of hand — as nearly all Googlers do — that anybody but technologists should decide where technology goes.
Whether or not Golumbia’s darkest fears are realized, he raises an uncomfortable question: What does it mean for a society to thoughtlessly grant power to those who see the human body as an impediment to transcendence and who believe that what’s good about us is what can be replicated by inanimate computers?
UPDATE: On a related note, see Colin McGinn’s review of Kurzweil’s latest book, particularly the discussion of the dangers of thinking that the brain is, literally, an information processor.
Photo by ePsos.