Last Saturday, I had the pleasure of addressing the annual convention of the Media Ecology Association in Denver. The title of my talk was “Media Takes Command: An Inquiry into the Consequences of Automation.” Here is what I said, along with the slides that accompanied the remarks.
As I was trying to figure out what to talk about this afternoon, I found myself flipping through a copy of Neil Postman’s Amusing Ourselves to Death — the twentieth anniversary edition. I started thinking about one of the promotional blurbs printed at the front of the book. A reviewer for the Christian Science Monitor had written that Postman “starts where Marshall McLuhan left off, constructing his arguments with the resources of a scholar and the wit of a raconteur.”
I can’t lay claim to either the resources of a scholar or the wit of a raconteur, but at least I can follow Postman’s lead in starting where McLuhan left off. In fact, I’d like to start literally where he left off, with the final line of his most influential work, the 1964 book Understanding Media:
“Panic about automation as a threat of uniformity on a world scale is the projection into the future of mechanical standardization and specialism, which are now past.”
That’s not one of McLuhan’s better sentences. But it does include a couple of ideas that seem pertinent to our current situation.
First is the suggestion that electronic automation marks a break, an epochal break, from the history of industrialization and mechanization. It’s always dangerous to try to pin McLuhan down — he’s a slippery guy — but in this final sentence, as elsewhere in the book, he presents automation as a liberating force that will free us from the narrow roles imposed by the division and specialization of labor under industrial production. Thanks to our newfound ability to hand routine jobs over to electronic circuits and computer software, we’ll become whole beings again, fully and creatively engaged in life.
I think McLuhan is wrong here. And I’ll return to this idea a little later.
Second is McLuhan’s reference to “panic about automation.” That sounds prophetic at the moment. We are today, after all, in the midst of a panic about automation, with many smart people sounding dire warnings about robots stealing all our jobs or even supplanting us as the major-domos of Planet Earth. But McLuhan wasn’t being prophetic. The panic about automation he was talking about was the one going on all around him in 1964. Ever since 1950, when Norbert Wiener had published his foreboding book The Human Use of Human Beings, people had been growing more worried about the threat that computers and robots posed to workers’ jobs and even to human existence. A technological apocalypse seemed to be in the offing at the dawn of the sixties, and it was that very immediate fear that McLuhan was seeking to counter.
What really interests me, though, is the way that McLuhan presents automation as a form of media. The last chapter of Understanding Media is titled “Automation: Learning a Living,” and the theme of electronic automation runs throughout the book. Placing automation in the realm of media might seem odd, but it strikes me as appropriate and illuminating. It helps us understand both the progress of media and the likely consequences of computer automation.
To explain why that’s so, let me walk you through a brief history of media. I want to stress that this is a history of media, not the history of media. Media has plenty of histories, and mine is just one of them, and a fairly idiosyncratic one at that.
I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.
Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.
For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.
Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.
So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.
In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving. Google plans to remove the steering wheel from its adorable new autonomous car, along with the pedals and other controls. The map at that point will take over all of the work of the doer — not just the navigational tasks but the perceptual and motor tasks as well. Media takes command.
Over the course of a few months in 2010 and 2011, we got a dramatic lesson in the way media, in the form of software programming, is taking command. In 2010 came the announcement that Google had built a car that could drive itself successfully through highway traffic, a feat unimaginable just a few years earlier. Software, we saw, had gained the ability to navigate the physical world, the world of things. Then, in early 2011, IBM’s Watson beat two former champions in the intellectually challenging game show Jeopardy!, another astonishing feat. Software, we saw, had also gained the ability to navigate the abstract world, the world of thoughts and symbols and words.
To understand how far computer programming can go in taking over human work, it helps to survey the different skills that people employ in getting stuff done. I would suggest that our skills can be divided into four categories. There are the motor skills we use in performing manual work. There are the analytical skills we use in diagnosing situations and phenomena and making judgments about them. There are the creative skills we use in doing artistic work, creating new objects and forms. And finally there are the communication or interpersonal skills we use in exerting influence over others, persuading them to act or think in certain ways. We combine these four skills, these four modes of doing, in various ways to carry out work, whether in wage-paying jobs or in our personal lives.
Software has made great progress in each of these modes of doing. As the self-driving cars demonstrate, robots are getting much better at acting autonomously in the world. Another good example here is the robotic lettuce picker. It was long thought that robots would have a tough time taking over the harvesting of tender fruits and vegetables. That work requires both a delicate touch and a subtle perceptual ability to distinguish between a crop and a weed. But, thanks to advances in sensors, machine vision, and related technologies, robots are now capable of picking lettuce and other fragile crops.
Computers are getting better at thinking about the world, too. Software programs can now read and interpret mammograms and other medical images, highlighting suspicious areas. And software is adept at digging evidence out of piles of documents in legal cases, work that once employed large numbers of junior lawyers and paralegals. In many other fields as well, we’re seeing data-processing machines perform analyses, make diagnoses, and recommend courses of action.
Robots and computers don’t make particularly good artists, and probably never will, but they can nevertheless perform a good deal of creative work. The Flux software program, originally developed in Google’s labs, aims at taking on the work of designing buildings and even doing city planning — the kinds of jobs that skilled architects and urban designers have long been engaged in. In journalism, computers are already churning out simple news stories, such as play-by-play recaps of sports matches. An artistic creation doesn’t have to be beautiful to be useful.
Finally, there’s the work of influence and persuasion. And here, too, we see media taking command. If you have Fitbit or some other personal training app on your phone, you probably receive instruction, encouragement, and even little badges of achievement as you carry out your exercise routines. We also look to apps to remind us to eat healthy meals, take our medications and vitamins on schedule, and get appropriate amounts of sleep. In the recommendation engines used by companies like Amazon and Netflix to suggest books and movies, we see another form of automated persuasion. And dating software will suggest potential mates or at least enticing candidates for one-night stands.
I’m not implying that computers have rendered us obsolete, or that they’ll render us obsolete in the foreseeable future. Those fears are overblown. In each mode of doing, there are plenty of things that smart, experienced people can do that lie far beyond the capabilities of computers and their programmers. What I am suggesting is that software is making rapid and sometimes profound advances in performing difficult work in each of these areas. Even if a computer doesn’t steal your job, it’s going to change your job — and, along the way, your life.
Most of us misperceive the effects of automation. We assume that a particular task or job can be turned over to software without otherwise changing how we go about our work. Human-factors researchers refer to this view as the substitution myth. What they’ve discovered is that when a computer enters a process, even in a small role, it changes the process. It changes the roles people play, the way people go about their work, and even people’s attitudes and perceptions. Raja Parasuraman, a leading scholar of automation — he died, sadly, earlier this year — explained the substitution myth succinctly in an article: “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers.” That’s always been true, it’s worth noting, of media. The people who create the media rarely anticipate the effects the media will have.
In some cases, the unanticipated effects of automation can be disastrous. People who rely on computers to do work for them often fall victim to a condition known as automation complacency. They’re so confident that the computer will perform flawlessly that they stop paying attention. Then, when the technology fails or something unexpected happens, they screw up. In 1995, the crew of the cruise ship Royal Majesty watched passively for hours as a flawed computer navigation system sent the ship miles off course. It ultimately ran aground on a sandbar near Nantucket Island. In 2009, Air France Flight 447 crashed into the Atlantic after its autopilot system shut off in bad weather and the pilots, taken by surprise, made fatal mistakes.
People using computers also fall victim to what’s called automation bias. They’re quick to place their complete faith in the data and instructions coming from their computers, even when the data and instructions are incomplete or flat-out wrong. In 2013, a nurse at a San Francisco pediatric hospital gave a boy 38 antibiotic pills because that was what the computer ordered — even though the patient should have been given only a single pill. (The boy survived the overdose, thankfully.)
We see automation bias in our own routine use of apps and online information sources. Think of the faith we place in the Google search engine, for instance. Google uses a certain set of criteria in ranking sources of information, placing particular emphasis on popularity, recency, and personal relevance. That works perfectly well in many cases, but the Google criteria are hardly the only ones that might be used in searching for information. We might often be better served by seeing a diversity of viewpoints, or by looking at sources that have stood the test of time, or finding information that runs counter to our personal preferences and prejudices. And yet, because Google serves up its answers with such speed and certainty, we default to Google and take its judgments as gospel.
These examples hint at a deeper truth — what researchers term the automation paradox. Automated systems often end up having the opposite effect from what was intended. Software designed to reduce human error may, by triggering complacency and bias, make human error more likely. Software designed to “free us” from routine chores may end up turning us into passive computer operators, diminishing our agency and autonomy and making our work less interesting and fulfilling.
The best way to understand the automation paradox is by looking at the Yerkes-Dodson Law. Back in the early years of the last century, the Harvard psychologist Robert Yerkes and his student John Dodson set out to understand the way animals learn new skills. They did an experiment in which they trained mice to go down a particular passageway in a box, giving the animals a shock whenever they headed down the wrong passageway. The scientists assumed that the mice would learn more quickly as the intensity of the shock increased. But they found that wasn’t the case. The mice performed poorly both when they received a very light shock and when they received a very strong shock. They performed best when they received a moderate shock. Yerkes and Dodson concluded that, when it comes to mastering a difficult task, too little stimulus or arousal can make animals lethargic and complacent. They’re not inspired to learn. But too much stimulus or arousal can make animals so anxious, so stressed, that they panic and fail to learn anything at all. Animals perform and learn best when they are stimulated enough to be deeply engaged in their work but not so much that they feel overwhelmed.
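The relationship Yerkes and Dodson observed is usually pictured as an inverted U: performance rises with arousal up to a moderate peak, then falls as arousal keeps climbing. A minimal sketch of that shape, in which the Gaussian form and every parameter value are my own illustrative assumptions rather than anything from the original experiments:

```python
import math

def performance(arousal, optimum=0.5, width=0.25):
    """Illustrative inverted-U curve: performance peaks when arousal
    is near a moderate optimum and falls off on either side.
    The Gaussian shape and parameter values are arbitrary choices
    made only to visualize the Yerkes-Dodson relationship."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Arousal on a 0-to-1 scale: understimulated, moderate, overstimulated.
low = performance(0.1)       # bored, complacent
moderate = performance(0.5)  # engaged: the peak of the curve
high = performance(0.9)      # panicked, overwhelmed

assert moderate > low and moderate > high  # the inverted U
```

The two failure modes in the talk map onto the two tails of this curve: automation that removes friction pushes us down the left slope, and a sudden failure throws us onto the right one.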
The Yerkes-Dodson Law applies to people as well as rodents. If we don’t have enough to do, our attention drifts and we tune out. If we’re overwhelmed, we panic, become discombobulated, and make mistakes. Unfortunately, computer automation, as it’s typically designed, tends to push us in the wrong directions. Because software is usually designed to relieve us of effort, to remove the “friction” from our work and our lives, we end up suffering from a lack of stimulation. Our engagement in what we’re doing fades, and we perform lazily and learn little. But when something goes wrong — when the technology fails, for instance — we’re suddenly pushed all the way over into the debilitating state of overstimulation. Not only do we have to re-engage with the work, and reorient ourselves to the situation, but we also have to consult computer screens and punch in data. We’re overwhelmed.
What we rarely experience is the state of optimum stimulus and engagement — the state in which we perform best, learn best, and feel best.
The dream that the technologies of automation will liberate us from work, the dream expressed by McLuhan, is a seductive one. Karl Marx, in the middle of the nineteenth century, wrote of how new production technologies could have “the wonderful power of shortening and fructifying human labor.” He foresaw a time when he would be able “to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind.” But Marx did not believe that the emancipatory potential of technology was inherent in the technology itself. The emancipatory power would be released only through political, economic, and social changes. Technology would always serve its master.
Since then, the dream of technological liberation has come to be disassociated from its political, economic, and social context. Technology itself, particularly in the form of automation, has come to be seen as our would-be liberator. Writers as diverse as Oscar Wilde and John Maynard Keynes, as well as McLuhan, have expressed a confidence in the power of machinery to return us to an Eden of leisure and plenty. Today, we routinely hear similar predictions coming from Silicon Valley. Last year, in a series of tweets, the entrepreneur and venture capitalist Marc Andreessen expressed his vision of the coming utopia of perfect automation:
“Human nature expresses itself fully, for the first time in history. Without physical need constraints, we will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure. Rather than nothing to do, we would have everything to do: curiosity, artistic and scientific creativity, new forms of status seeking. Imagine six, or 10, billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be.”
Flower power would appear to be alive and well in San Francisco, if only as a mask for other, bruter forms of power.
Andreessen seems oblivious to the fact that he is expressing a Marxian dream, only one in which technology itself renders history irrelevant. So what he gives us is a simplistic, self-serving fairy tale: if software takes over all labor, surely we humans will be lifted into a paradise of what McLuhan termed “self-employment and artistic autonomy.”
But the dream is not the reality. What we see emerging as media takes command is a cycle of dependency. As we come to rely on software to do things for us, we are relieved of the difficult work and pressing challenges necessary for skill-building. We experience deskilling, which makes us even more dependent on the software. And the cycle takes another turn. Because the media of software is invisible to us, a black box that both knows and does without disclosing its workings to us, we also become subject to manipulation as we become more dependent on the technology. Economic and cultural power accrues to the programmers, to the people who control the media.
Last year, I published a book on automation called The Glass Cage. When I came up with the title, I wasn’t thinking of Max Weber’s famous metaphor of the “iron cage” of industrialization. But more and more I see a continuity between the iron cage of mechanized industry and the glass cage of computer automation. In The Protestant Ethic and the “Spirit” of Capitalism, published in 1905, Weber argued that labor was once seen as a means to a higher calling — a way to rise above selfish concerns, serve God and, ultimately, gain entrance to heaven. But under industrial capitalism labor was drained of its spirituality. It became merely a way to manufacture earthly goods and to earn money to buy earthly goods. With automation, we see a similar reversal. What is viewed, romantically, as a path to liberation becomes instead a path to dependency. As is so often the case, popular culture provides us with the most powerful images of our situation: Charlie Chaplin cheerfully pushing and pulling a lever in Modern Times, and the plump and passive humans of the future staring into their colorful screens in Wall-E.
My first slide, you may remember, showed part of a photograph of a psychological researcher putting a mouse into a maze. Here is the entire photograph. But it’s not of a psychologist putting a mouse into a maze. Rather, it’s of Claude Shannon, the father of information science, putting a robotic mouse into a maze to demonstrate the efficacy of the automaton’s programming. When I first came across this picture, I found it amusing. But its shadows have come to haunt me. In this photograph — of an information scientist placing a robotic animal into a maze to test its programming — we may have found the perfect visual metaphor for our time.