I have a review of When We Are No More: How Digital Memory Will Shape Our Future, Abby Smith Rumsey’s meditation on the fragility of cultural memory, in the Washington Post. It begins:
In the spring of 1997, the Library of Congress opened an ambitious exhibit featuring several hundred of the most historically significant items in its collection. One of the more striking of the artifacts was the “rough draught” of the Declaration of Independence. Over Thomas Jefferson’s original, neatly penned script ran edits by John Adams, Benjamin Franklin and other Founding Fathers. Words were crossed out, inserted and changed, the revisions providing a visual record of debate and compromise. A boon to historians, the four-page manuscript provides even the casual viewer with a keen sense of the drama of a nation being born.
Imagine if the Declaration were composed today. It would almost certainly be written on a computer screen rather than with ink and paper, and the edits would be made electronically, through email exchanges or a file shared on the Internet. If we were lucky, a modern-day Jefferson would turn on the word processor’s track-changes function and print copies of the document as it progressed. We’d at least know who wrote what, even if the generic computer type lacked the expressiveness of handwriting. More likely, the digital file would come to be erased or rendered unreadable by changes in technical standards. We’d have the words, but the document itself would have little resonance. . . .
Given our current obsession with the possibility of an economic or even existential robot apocalypse, the news this week that Google is backing away from its aggressive robotics program has received surprisingly little attention. I’m wondering if the company’s retreat might be a signal that, for the moment, we’ve hit peak robot.
Google, according to press reports, is eagerly seeking a buyer for Boston Dynamics, the most vaunted of the robotics companies that it purchased in a wild buying spree a couple of years back. Boston Dynamics is famous for making telegenic humanoid and animaloid robots. I'm not sure what practical use the creatures have been put to, but on YouTube they're superstars:
There’s a certain S&M quality to the Boston Dynamics videos that makes them particularly compelling. Hitting ambulatory robots with hockey sticks and long poles appears to be deeply satisfying.
A strain of sadomasochism also seems to have run through the relationship between the West Coast Googlers and the East Coast Boston Dynamics crew. “The ethos they have and the ethos we have weren’t super-compatible,” Astro Teller, the head of Google’s X lab, told the Wall Street Journal. “They are some of the most talented roboticists in the world, but in order to be here … you have to sign up for our way of doing things.” Ouch.
The problem, though, seems to go a lot deeper than a clash of personalities. Google’s parent company, Alphabet, has dissolved its standalone robotics division, called Replicant, and moved its robotics engineers into X in hopes of “defining some specific real-world problems in which robotics could help,” according to a company spokesperson. That’s hardly a ringing long-term endorsement. The company’s enthusiasm over the practical applications of robots, particularly those with legs, appears to be much diminished.
It may be that we’re about to enter a robot winter, similar to the AI winters of the past, in which a bubble of optimism about technological progress bursts, leaving everyone disenchanted and grumpy. Progress slows until some new breakthrough ignites a burst of new interest and innovation, and sunniness returns. “The core issue we are dealing with here is the realization that making robots that actually do things in the real world is much more difficult than what we had envisioned,” the distinguished French roboticist Jean-Christophe Baillie told IEEE Spectrum. “I tend to believe that we cannot brute force our way to solve the complex problems of interaction with the environment or, even more difficult, with people.”
I’m not saying robots are dead meat. I am saying that an adjustment in expectations may be in order, particularly when it comes to robots operating autonomously or semiautonomously in the real world. Progress in many areas of robotics will probably go slower than we’ve been led to believe. I really hope, though, that Boston Dynamics doesn’t go under. Those videos are great.
“The very idea of a functional, effective, affordable product as a sufficient basis for economic exchange is dying,” writes Harvard Business School professor Shoshana Zuboff in an incisive, disquieting essay in Frankfurter Allgemeine Zeitung. We’re seeing the rise, she argues, of “a wholly new genus of capitalism, a systemic coherent new logic of accumulation that I call surveillance capitalism.”
Capitalism has been hijacked by a lucrative surveillance project that subverts the “normal” evolutionary mechanisms associated with its historical success and corrupts the unity of supply and demand that has for centuries, however imperfectly, tethered capitalism to the genuine needs of its populations and societies, thus enabling the fruitful expansion of market democracy.
The product, which once formed the foundation and the boundary of the customer-company relation, becomes an excuse for the surreptitious collection of behavioral data. The product becomes the loss leader for surveillance. The money’s in the data.
Zuboff limns what she sees as the path of modern capitalism: “once profits from products and services, then profits from [financial] speculation, and now profits from surveillance.”
This latest mutation may help explain why the explosion of the digital has failed, so far, to decisively impact economic growth, as so many of its capabilities are diverted into a fundamentally parasitic form of profit.
Zuboff’s is an ominous vision of a drift toward “a disfigured capitalism” that, facilitated by the public’s “ignorance, learned helplessness, inattention, inconvenience, [and] habituation,” ends in “an overthrow of the people’s sovereignty.” Is she overstating the case? Maybe. Maybe not. At the very least, she tells us a truth we seem eager to avoid: the most valuable things in the internet of things are the things formerly known as people.
Looking into any portion of the interior of a rocket was like looking into the abdominal cavity of a submarine or a whale. Green metal walls, green and blue tanks, pipes and proliferations of pipes, black blocks of electrical boxes and gray blocks of such boxes gave an offering of those zones of silence which reside at the center of machines, a hint of that ancient dark beneath the hatch in the hold of the bow — such zones of silence came over him.
Whatever pretensions we wrap them in, they are all escape modules, variations on an old theme. Tragic. Comic. Pathetic. Heroic.
Imagine that you lived in a highly segregated neighborhood, segregated according to political and cultural sensibility, and it was campaign season, and all your neighbors had political signs out in front of their houses, and all the signs were identical, and you, too, had the same sign out in front of your house, and whenever you looked at the sign you felt good about yourself, because you knew you were doing your part, you knew you were taking a stand, you knew it was the right stand, and you knew your voice was being heard. #NeverLand.
Getting machines to understand, and speak, the language used by people — natural language processing — has long been a central goal of artificial intelligence research. In a provocative new interview at Edge, Stephen Wolfram turns that goal on its head. The real challenge, he suggests, is getting people to understand, and speak, the language used by machines. In a future world in which we rely on computers to fulfill our desires, we’re going to need to be able to express those desires in a way that computers can understand.
We’re amazed that Siri can answer our questions. But, as Wolfram points out, Siri’s ability to make sense of human language is profoundly constrained. You can’t have a deep or subtle conversation with a computer using human language. “It works pretty well when you’re holding up your phone and asking one question,” he says. “It’s a pretty successful way to communicate, to use natural language. When you want to say something longer and more complicated, it doesn’t work very well.” The problem is not just a consequence of the limits of natural language processing. It’s a consequence of the limits of natural language. We think of human language as all-encompassing (because it encompasses the whole of our conscious thought), but the language we humans speak is particular to our history. It has, as Wolfram puts it, “evolved to describe what we typically encounter in the world.” It’s absurd to assume that our language would do a good job of describing the way computers encounter the world.
If we’re going to depend on computers to fulfill our purposes, we’re going to need a shared language. We’re going to need to describe our purposes, our desires, in a code that can run successfully through a machine. Most of those who advocate teaching programming skills to the masses argue that learning to code will expand our job prospects. Wolfram’s view is more interesting. He argues that we need to learn to code in order to expand our ontological prospects.
In adopting a new language, a machine language, to describe our purposes, we will also, necessarily, change those purposes. That is the price of computer automation. “What do the humans do” in a world where “things can get done automatically?” Wolfram asks. The answer, of course, is that we compose the instructions for the machines to follow to fulfill our wishes. Will it compile? is the iron law of programming. Either the machine can follow the instructions written for it, or it can’t. Will we compile? would seem to be the great ontological question that lies ahead of us in our automated future. Have we formulated our purposes in such a way that machines can carry them out?
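Wolfram's "Will it compile?" can be taken quite literally. As a toy sketch (purely illustrative, using Python as the stand-in machine language, with a hypothetical helper name), a machine either accepts a string of instructions or rejects it; there is no middle ground:

```python
def will_it_compile(source: str) -> bool:
    """Return True if the machine can follow these instructions at all."""
    try:
        # Python's built-in compile() either produces executable code
        # from the source text or raises a SyntaxError.
        compile(source, "<wish>", "exec")
        return True
    except SyntaxError:
        return False

# A wish stated in natural language is rejected outright.
print(will_it_compile("make me happy"))      # False
# The same wish, reformulated in the machine's terms, is accepted.
print(will_it_compile("happiness = True"))   # True
```

The binary verdict is the point: to get past it, the human wish has to be restated in the machine's own terms, which is exactly the reformulation of purposes Wolfram describes.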
Computers can’t choose our goals for us, Wolfram correctly observes. “Goals are a human construct.” Determining our purposes will remain a human activity, beyond the reach of automation. But will it really matter? If we are required to formulate our goals in a language a machine can understand, is not the machine determining, or at least circumscribing, our purposes? Can you assume another’s language without also assuming its system of meaning and its system of being?
The question isn’t a new one. “I must create a system, or be enslaved by another man’s,” wrote William Blake two hundred years ago. Poets and other thoughtful persons have always struggled to express themselves, to formulate and fulfill their purposes, within and against the constraints of language. Up to now, the struggle has been with a language that evolved to express human purposes — to express human being. The ontological crisis changes, and deepens, when we are required to express ourselves in a language developed to suit the workings of a computer. Suddenly, we face a new question: Is the compilable life worth living?