Will we compile?

[Image: the ORDVAC computer]

Getting machines to understand, and speak, the language used by people — natural language processing — has long been a central goal of artificial intelligence research. In a provocative new interview at Edge, Stephen Wolfram turns that goal on its head. The real challenge, he suggests, is getting people to understand, and speak, the language used by machines. In a future world in which we rely on computers to fulfill our desires, we’re going to need to be able to express those desires in a way that computers can understand.

We’re amazed that Siri can answer our questions. But, as Wolfram points out, Siri’s ability to make sense of human language is profoundly constrained. You can’t have a deep or subtle conversation with a computer using human language. “It works pretty well when you’re holding up your phone and asking one question,” he says. “It’s a pretty successful way to communicate, to use natural language. When you want to say something longer and more complicated, it doesn’t work very well.” The problem is not just a consequence of the limits of natural language processing. It’s a consequence of the limits of natural language. We think of human language as all-encompassing (because it encompasses the whole of our conscious thought), but the language we humans speak is particular to our history. It has, as Wolfram puts it, “evolved to describe what we typically encounter in the world.” It’s absurd to assume that our language would do a good job of describing the way computers encounter the world.

If we’re going to depend on computers to fulfill our purposes, we’re going to need a shared language. We’re going to need to describe our purposes, our desires, in a code that can run successfully through a machine. Most of those who advocate teaching programming skills to the masses argue that learning to code will expand our job prospects. Wolfram’s view is more interesting. He argues that we need to learn to code in order to expand our ontological prospects.

In adopting a new language, a machine language, to describe our purposes, we will also, necessarily, change those purposes. That is the price of computer automation. “What do the humans do” in a world where “things can get done automatically”? Wolfram asks. The answer, of course, is that we compose the instructions that the machines follow in fulfilling our wishes. Will it compile? is the iron law of programming. Either the machine can follow the instructions written for it, or it can’t. Will we compile? would seem to be the great ontological question that lies ahead of us in our automated future. Have we formulated our purposes in such a way that machines can carry them out?
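To make the iron law concrete, here is a minimal sketch of my own, not from Wolfram's interview: a tiny C program the machine accepts, with an ill-formed alternative left in a comment. Uncomment the second function and the answer to Will it compile? flips from yes to no.

    /* A minimal sketch (illustrative, not from the interview): the
       "iron law" in miniature. The first function states a purpose in
       terms the machine accepts; the commented-out one states a wish
       the compiler cannot parse. */

    #include <stdio.h>

    int doubled(int n) {
        return n * 2;              /* well-formed: this compiles */
    }

    /*
    int wish(int n) {
        return make n bigger;      // ill-formed: uncommenting this
    }                              // makes the whole build fail
    */

    int main(void) {
        printf("%d\n", doubled(21));   /* prints 42 */
        return 0;
    }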

Computers can’t choose our goals for us, Wolfram correctly observes. “Goals are a human construct.” Determining our purposes will remain a human activity, beyond the reach of automation. But will it really matter? If we are required to formulate our goals in a language a machine can understand, is not the machine determining, or at least circumscribing, our purposes? Can you assume another’s language without also assuming its system of meaning and its system of being?

The question isn’t a new one. “I must create a system, or be enslaved by another man’s,” wrote William Blake two hundred years ago. Poets and other thoughtful persons have always struggled to express themselves, to formulate and fulfill their purposes, within and against the constraints of language. Up to now, the struggle has been with a language that evolved to express human purposes — to express human being. The ontological crisis changes, and deepens, when we are required to express ourselves in a language developed to suit the workings of a computer. Suddenly, we face a new question: Is the compilable life worth living?

Image: U.S. Army Photo