The real New Economy?

“The very idea of a functional, effective, affordable product as a sufficient basis for economic exchange is dying,” writes Harvard Business School professor Shoshana Zuboff in an incisive, disquieting essay in Frankfurter Allgemeine Zeitung. We’re seeing the rise, she argues, of “a wholly new genus of capitalism, a systemic coherent new logic of accumulation that I call surveillance capitalism.”

Capitalism has been hijacked by a lucrative surveillance project that subverts the “normal” evolutionary mechanisms associated with its historical success and corrupts the unity of supply and demand that has for centuries, however imperfectly, tethered capitalism to the genuine needs of its populations and societies, thus enabling the fruitful expansion of market democracy.

The product, which once formed the foundation and the boundary of the customer-company relation, becomes an excuse for the surreptitious collection of behavioral data. The product becomes the loss leader for surveillance. The money’s in the data.

Zuboff limns what she sees as the path of modern capitalism: “once profits from products and services, then profits from [financial] speculation, and now profits from surveillance.”

This latest mutation may help explain why the explosion of the digital has failed, so far, to decisively impact economic growth, as so many of its capabilities are diverted into a fundamentally parasitic form of profit.

Zuboff’s is an ominous vision of a drift toward “a disfigured capitalism” that, facilitated by the public’s “ignorance, learned helplessness, inattention, inconvenience, [and] habituation,” ends in “an overthrow of the people’s sovereignty.” Is she overstating the case? Maybe. Maybe not. At the very least, she tells us a truth we seem eager to avoid: the most valuable things in the internet of things are the things formerly known as people.

The people’s campaign


The funny thing about gatekeepers is that you never appreciate their value until they’re gone.

For David Bowie, belatedly


From Of a Fire on the Moon by Norman Mailer:

Looking into any portion of the interior of a rocket was like looking into the abdominal cavity of a submarine or a whale. Green metal walls, green and blue tanks, pipes and proliferations of pipes, black blocks of electrical boxes and gray blocks of such boxes gave an offering of those zones of silence which reside at the center of machines, a hint of that ancient dark beneath the hatch in the hold of the bow — such zones of silence came over him.

Whatever pretensions we wrap them in, they are all escape modules, variations on an old theme. Tragic. Comic. Pathetic. Heroic.

Image: Still from “The Man Who Fell to Earth.”

The new politics

Imagine that you lived in a highly segregated neighborhood, segregated according to political and cultural sensibility, and it was campaign season, and all your neighbors had political signs out in front of their houses, and all the signs were identical, and you, too, had the same sign out in front of your house, and whenever you looked at the sign you felt good about yourself, because you knew you were doing your part, you knew you were taking a stand, you knew it was the right stand, and you knew your voice was being heard. #NeverLand.

Will we compile?


Getting machines to understand, and speak, the language used by people — natural language processing — has long been a central goal of artificial intelligence research. In a provocative new interview at Edge, Stephen Wolfram turns that goal on its head. The real challenge, he suggests, is getting people to understand, and speak, the language used by machines. In a future world in which we rely on computers to fulfill our desires, we’re going to need to be able to express those desires in a way that computers can understand.

We’re amazed that Siri can answer our questions. But, as Wolfram points out, Siri’s ability to make sense of human language is profoundly constrained. You can’t have a deep or subtle conversation with a computer using human language. “It works pretty well when you’re holding up your phone and asking one question,” he says. “It’s a pretty successful way to communicate, to use natural language. When you want to say something longer and more complicated, it doesn’t work very well.” The problem is not just a consequence of the limits of natural language processing. It’s a consequence of the limits of natural language.

We think of human language as all-encompassing (because it encompasses the whole of our conscious thought), but the language we humans speak is particular to our history. It has, as Wolfram puts it, “evolved to describe what we typically encounter in the world.” It’s absurd to assume that our language would do a good job of describing the way computers encounter the world.

If we’re going to depend on computers to fulfill our purposes, we’re going to need a shared language. We’re going to need to describe our purposes, our desires, in a code that can run successfully through a machine. Most of those who advocate teaching programming skills to the masses argue that learning to code will expand our job prospects. Wolfram’s view is more interesting. He argues that we need to learn to code in order to expand our ontological prospects.

In adopting a new language, a machine language, to describe our purposes, we will also, necessarily, change those purposes. That is the price of computer automation. “What do the humans do” in a world where “things can get done automatically?” Wolfram asks. The answer, of course, is that we compose the instructions for the machines to follow to fulfill our wishes. Will it compile? is the iron law of programming. Either the machine can follow the instructions written for it, or it can’t. Will we compile? would seem to be the great ontological question that lies ahead of us in our automated future. Have we formulated our purposes in such a way that machines can carry them out?
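
The iron law can be made concrete with a minimal sketch in Go (mine, not Wolfram’s; the break-reminder scenario and the 50-minute figure are invented for illustration). The fuzzy human purpose “remind me to rest every so often” has no compilable form; to get past the compiler, it must be circumscribed into exact, machine-legible terms:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The purpose as a human states it: "remind me to rest every so often."
	// The line below would not compile; the machine cannot parse a vague wish:
	//   time.Sleep("a little while")
	// To compile, "every so often" must be legislated into an exact duration.
	interval := 50 * time.Minute

	for {
		time.Sleep(interval)
		fmt.Println("Take a break.")
	}
}
```

What compiles is not quite what was wanted: the looseness of “every so often” is gone, replaced by a number the machine can count. That, in miniature, is the circumscription of purpose at issue here.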

Computers can’t choose our goals for us, Wolfram correctly observes. “Goals are a human construct.” Determining our purposes will remain a human activity, beyond the reach of automation. But will it really matter? If we are required to formulate our goals in a language a machine can understand, is not the machine determining, or at least circumscribing, our purposes? Can you assume another’s language without also assuming its system of meaning and its system of being?

The question isn’t a new one. “I must create a system, or be enslaved by another man’s,” wrote William Blake two hundred years ago. Poets and other thoughtful persons have always struggled to express themselves, to formulate and fulfill their purposes, within and against the constraints of language. Up to now, the struggle has been with a language that evolved to express human purposes — to express human being. The ontological crisis changes, and deepens, when we are required to express ourselves in a language developed to suit the workings of a computer. Suddenly, we face a new question: Is the compilable life worth living?

Image: U.S. Army Photo

The internet of things to stick up your butt


I apologize for that headline, but you can only be so delicate in discussing a rectal thermometer with a Bluetooth transmitter. Along with a speedy network connection, the Vicks SmartTemp Wireless Thermometer comes with a smartphone app that allows you to track your or your child’s body-temperature stream, share the data with Apple Health and other commercial services, and upload the readings to the cloud for safekeeping and corporate scanning. I sense the device has a rich symbolic meaning, but I find that I’d prefer not to know what it is.

And, in my defense, let the record show that I resisted the temptation to make a joke about backdoors.