The unbearable unlightness of AI


There is a continuing assumption — a faith, really — that at some future moment, perhaps only a decade or two away, perhaps even nearer than that, artificial intelligence will, by means yet unknown, achieve consciousness. A window will open on the computer’s black box, and light will stream in. The universe will take a new turn, as the inanimate becomes, for a second time, animate.

George Lakoff, the linguist who cowrote Metaphors We Live By, says it ain’t going to happen. In a fascinating article by Michael Chorost, Lakoff argues not only that language, being essentially metaphorical, is inextricably bound up in our bodily existence, but that cognition and consciousness, too, flow from our experience as creatures on the earth. Recent neuroscience experiments seem to back Lakoff up. They suggest that even our most abstract thoughts involve the mental simulation of physical experiences.

Writes Chorost:

In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity in texture-selective somatosensory cortex, and that indeed is exactly what we found.”

The evidence points to a new theory about the source of consciousness:

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there.

That, as Chorost notes, “raises problems for artificial intelligence”:

Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Then again, even the algorithmic thinking of computers has a physical substrate. There is no software without hardware. The problem is that computers, unlike animals, have no sensory experience of their own existence. They are, or at least appear to be, radically dualist in their operation, their software oblivious to their hardware. If a computer could think metaphorically, what kind of metaphors would it come up with? It’s hard to imagine they’d be anything recognizable to humans.

Image: “Camera Obscura Test 2” by Jon Lewis.

4 thoughts on “The unbearable unlightness of AI”

  1. Scott Holloway

    I agree with Lakoff. I personally believe that a machine achieving consciousness will never happen. (But my assertion is founded in the belief that there is more to us than our five senses will ever be able to detect.) Sure, scientists and engineers will produce increasingly “smart” machines, devices, information retrieval systems, etc., that simulate, emulate, or do things just like humans. But in the end—as far as I’m concerned—they are all still machines following a set of instructions.

    I can’t help but think, “What is the need for trying to get an inanimate object to achieve consciousness, anyway?” Trying to understand our consciousness is one thing, but trying to make an inanimate object conscious? Don’t we have enough people exercising their free will in the world and causing havoc (on one level or another) while doing so? Forgive me for being pessimistic, but I just don’t see it.

  2. shagggz

    “The problem is that computers, unlike animals, have no sensory experience of their own existence.”

    I’m puzzled as to why arrangements such as this are apparently taken to be the final, immutable limitation inherent to all “computers” and “bodies,” as if one could not be linked up to the other. If a photographic camera is hooked up to some sort of processing unit, why does that photographic data feed not qualify as “sensory experience,” or at least as a sensory modality from which “experience” can emerge through the crossing and integration of multiple sensory modalities, much as it does in humans?

    If one wishes to invoke a mystical carbon chauvinism in his response to this question, then we could just as easily invoke some sort of synthetic photopigment system, similar to our rods and cones, and integrate this more familiar sort of data feed into the broader data processing architecture.

    The poverty of imagination here is regrettable.

  3. Michael Stiefel

    Unless you believe there is something non-physical about life, the process of evolution must have bootstrapped “itself from computing to knowing, from bit-shuffling to caring.”

    If an evolutionary process can do it, there is no reason why such a result cannot be duplicated. Whether it can be duplicated by a non-evolutionary process remains to be seen.

    When machines can relate to whatever sensory inputs they might have, they will be capable of consciousness. That does not mean their consciousness will be the same as ours, nor will this happen anytime soon.

  4. Faza (TCM)

    I don’t believe we’ll be seeing a human-like AI anytime soon, and even any kind of machine intelligence seems a rather distant prospect. The reason is childishly simple: computers have no reason to become intelligent.

    We can consider intelligence as consisting in a combination of three things: experience, judgement and imagination. I ascribe special meanings to these terms here, so I’ll do a quick rundown:

    1. Experience is the easy one: it is the sum total of all accumulated data that can be processed. We get our starting data by living and learning, but feeding data into computers is much easier. So far, so good.

    2. Judgement means ascribing values to experiences (data). Any living organism has this built-in – out of evolutionary necessity. Organisms unable to distinguish between beneficial and harmful experiences won’t last very long. However, how do we impart judgement to machines whose existence is in no way dependent on their exercise of judgement? This is the first big hurdle.

    3. Imagination is the ability to extrapolate from experiences and judgement of those experiences – manifest in the ability to visualise different outcomes, or to predict outcomes based on actions and past experience. To the best of our knowledge, it is the scope of our imaginations that sets us apart from other animals. Our imagination is so well developed, we can envision systems radically detached from our immediate surroundings – these can be anything from religion to advanced mathematics. It should be obvious from the above that one does not develop an imagination without a varied set of experiences and a developed sense of judgement.

    The bottom line is that our intelligence evolved as a tool to overcome the difficulties of living – a burden computers are blessed not to have. Our language reflects our existence and our reaction to it. We are able to communicate (and drive one another towards greater mental complexity) because we have a shared set of experiences and a broadly similar value set (the majority of which is based on our biology and environment).

    Computers have none of these things, and it isn’t at all clear how we could make them sufficiently human-like to establish a plane of communication. Nor is it clear how we could impart to a machine a value system that would serve as a reference point for its imaginative capacities.

    Ceterum: the Singularity is little more than Digital Age Alchemy.
