In the kingdom of the bored, the one-armed bandit is king


It still feels a little shameful to admit to the fact, but what engages us more and more is not the content but the mechanism. Kenneth Goldsmith, in a Los Angeles Review of Books essay, writes of a recent day when he felt an urge to listen to some music by the American composer Morton Feldman:

I dug into my MP3 drive, found my Feldman folder and opened it up. Amongst the various folders in the directory was one labeled “The Complete Works of Morton Feldman.” I was surprised to see it there; I didn’t remember downloading it. Curious, I looked at its date — 2009 — and realized that I must’ve grabbed it during the heyday of MP3 sharity blogs. I opened it to find 79 albums as zipped files. I unzipped three of them, listened to part of one, and closed the folder. I haven’t opened it since.

The pleasure of listening to music was not as great as he anticipated. He found more pleasure in manipulating music files.

Our role as librarians and archivists has outpaced our role as cultural consumers. Engaging with media in a traditional sense is often the last thing we do. … In the digital ecosystem, the apparatuses surrounding the artifact are more engaging than the artifact itself. Management (acquisition, distribution, archiving, filing, redundancy) is the cultural artifact’s new content. … In an unanticipated twist to John Perry Barlow’s 1994 prediction that in the digital age we’d be able to enjoy wine without the bottles, we’ve now come to prefer the bottles to the wine.

It’s as though we find ourselves, suddenly, in a vast library, an infinite library, a library of Borgesian proportions, and we discover that what’s of most interest to us is not the books on the shelves but the intricacies of the Dewey Decimal System.

Goldsmith’s experience reminded me of a passage in Simon Reynolds’s Retromania. Reynolds describes what happened after he got his first iPod and started experimenting with the Shuffle function:

Shuffle offered a reprieve from the problem of choice. Like everybody, at first I was captivated by it and, like everybody, had all those experiences with mysterious recurrences of artists and uncanny sequencings. The downside of shuffle soon revealed itself, though. I became fascinated with the mechanism itself, and soon was always wanting to know what was coming up next. It was irresistible to click onto the next random selection. … Soon I was listening to just the first fifteen seconds of every track; then, not listening at all. … Really, the logical culmination would have been for me to remove the headphones and just look at the track display.

What is the great innovation of SoundCloud, the popular music-streaming service? It has little to do with music and everything to do with the visual enrichment of the track display:

[Screenshot: a SoundCloud track display]

Who needs to listen to the song when one can watch the song unspool colorfully on the screen through all its sonic peaks and valleys, triggering the display of comments as it goes? Whatever lies on the other side of the interface seems less and less consequential. The interface is the thing. The interface is the content.

Abundance breeds boredom. When there’s no end of choices, each choice feels disappointing. Listening to or watching one thing means you’re not listening to or watching all the other things you might be listening to or watching. Reynolds quotes a telling line from Karla Starr’s 2008 article “When Every Song Ever Recorded Fits on Your MP3 Player, Will You Listen to Any of Them?” Confessed Starr: “I find myself getting bored even in the middle of songs simply because I can.”

And so, bored by the content, bored by the art, bored by the experience, we become obsessed with the interface. We seek to master the mechanism’s intricate, fascinating functions: downloading and uploading, archiving and cataloging, monitoring readouts, watching time counts, streaming and pausing and skipping, clicking buttons marked with hearts or uplifted thumbs. We become culture’s technicians. We become bureaucrats of experience.

Managing the complexities of the interface provides an illusion of agency while alleviating the agony of choice. In the end, as Reynolds puts it, fiddling with the mechanism “relieves you of the burden of desire itself” — a burden that grows ever more burdensome as options proliferate. And so you find that you’re no longer a music fan; you’re a jukebox aficionado.

As the manufacturers of digital slot machines have discovered, a well-designed interface induces obsession. It’s not the winnings, or the losses, that keep the players feeding money into the slots; it’s the joy of operating a highly responsive machine. In her book Addiction by Design: Machine Gambling in Las Vegas, Natasha Dow Schüll tells of meeting a video-poker player named Mollie in a casino:

When I ask Mollie if she is hoping for a big win, she gives a short laugh and a dismissive wave of her hand. … “Today when I win — and I do win, from time to time — I just put it back in the machines. The thing people never understand is that I’m not playing to win.”

Why, then, does she play? “To keep playing — to stay in that machine zone where nothing else matters.”

I ask Mollie to describe the machine zone. She looks out the window at the colorful movement of lights, her fingers playing on the tabletop between us. “It’s like being in the eye of a storm, is how I’d describe it. Your vision is clear on the machine in front of you but the whole world is spinning around you, and you can’t really hear anything. You aren’t really there — you’re with the machine and that’s all you’re with.”

In a world dense with stuff, a captivating interface is the perfect consumer good. It packages the very act of consumption as a product. We consume our consuming.

The machine zone is where we spend much of our time these days. It extends well beyond the traditional diversions of media and entertainment and gaming. The machine zone surrounds us. You go for a walk, and you find that what inspires you is not the scenery or the fresh air or the physical pleasure of the exercise, but rather the mounting step count on your smartphone’s exercise app. “If I go just a little farther,” you tell yourself, glancing yet again at the interface, “the app will reward me with a badge.” The mechanism is more than beguiling. The mechanism knows you, and it cares about you. You give it your attention, and it tells you that your attention has not been wasted.

How to write a book when you’re paid by the page


When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.
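To see why, run the numbers. Below is a back-of-the-envelope sketch in Python; every figure in it is invented for illustration. Because the monthly fund is fixed, each fake page dilutes the per-page rate for everybody, me included, and my total take can never exceed the pool itself.

```python
def payout_per_page(pool, total_pages_read):
    """A KDP-style royalty pool: a fixed monthly fund is divided by
    every page read of every enrolled book, so the per-page rate floats."""
    return pool / total_pages_read

# All numbers invented for illustration.
POOL = 11_000_000             # monthly fund, in dollars
HONEST_PAGES = 2_000_000_000  # pages read of everyone else's books
TURKER_COST = 0.001           # dollars I pay per fake page "read"

for gamed in (1_000_000, 100_000_000, 10_000_000_000):
    rate = payout_per_page(POOL, HONEST_PAGES + gamed)
    profit = rate * gamed - TURKER_COST * gamed
    print(f"fake pages {gamed:>14,}: rate ${rate:.5f}, profit ${profit:,.0f}")

# rate * gamed can never exceed POOL, and every fake page drags the
# rate down toward (and eventually below) what the Turkers cost me.
```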

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed. That page-normalization deal is going to kill you. I mean, Walt Whitman might do okay. But Mary Oliver? Totally hosed. So that manuscript of dense, trimetric verse you’ve been fussing over for the last twenty years? Shred it.

Now, turning to prose, where the prospects are brighter, it’s pretty clear that the key is to keep the reader engaged without challenging the reader in any way. To maximize earnings, you need to ensure that the reader moves through your pages at a good, crisp, unbroken clip. You want shallow immersion. Any kind of complication or complexity that slows a reader down is going to take an immediate bite out of your wallet. What you most want to avoid is anything that encourages the reader to go back and re-read a passage. Remember: you only get paid the first time a page gets read. If you inspire the reader to read any of your pages more than once, you’re basically burning cash.

So: You want fairly simple characters — no Russian names, no introverts — with transparent motivations, and you want them to proceed quickly through a plot that takes lots of unexpected turns without ever being at all baffling or disorienting. And you don’t want to write too well or try to get too “literary.” You don’t want the reader to savor your words. You want the reader to gulp your words down like bar nuts. Hemingwayesque is probably okay. But Faulknerian is a no-go. Really, you’d do best to follow Suzanne Collins’s lead. Lusty teenagers killing each other in workmanlike prose: that’s the ticket. Jackie Collins would also work pretty well as a model. In fact, you really can’t go wrong mimicking any writer with the last name of Collins. Even Billy, if you’re still dead set on the poetry thing.

My first instinct, to be frank, was to write a seventeen-volume series called Tales of Ripe Naughtiness consisting entirely of sex scenes packed together like sardines in oil. But the more I thought about it, the more I realized that sex is tricky in a pay-by-the-page world. Moments of passion need to be handled delicately. You want a certain degree of titillation to keep the reader tapping away at the screen, but you need to be careful not to overdo it. You don’t — how should I put this? — you don’t want the reader to linger on any given page. There’s no money in that. The bodice can’t be ripped. The bodice has to be unstitched, thread by thread, over the course of, say, forty-five pages. And then, when the bosom heaves, you want to cut immediately to some other characters in some other setting. Patients playing dominoes in a nursing home, perhaps. But don’t stay there for more than three paragraphs: too depressing.

I just realized that I’m giving my entire strategy away. I need to learn to curb my sharing instinct. I’ll end by saying that I’m starting to see real possibilities in this idea of getting paid by the page for a book. I confess that in the past I’ve had my doubts, but now I’m convinced that Amazon has had the best interests of writers and readers at heart all along.

Music is the oil in the human machine


In announcing the free version of its music streaming service — that’s free as in ads — Google also discloses something revealing about the way it views music:

At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to.

This marks a continuation of Google’s promotion of what it terms “activity-based” music. Last year, soon after it acquired Songza, a company that specializes in “curating” playlists to suit particular moods and activities, Google rejiggered its music service to emphasize its practicality:

If you’re a Google Play Music subscriber, next time you open the app you’ll be prompted to play music for a time of day, mood or activity. Choose an activity to get options for several music stations to make whatever you’re doing even better — whether it’s a station for a morning workout, songs to relieve stress during traffic, or the right mix for cooking with friends. Each station has been handcrafted — song by song — by our team of music experts (dozens of DJs, musicians, music critics and ethnomusicologists) to give you the exact right song for the moment.

This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated. (As I’ve argued before, the fuzziness of human experience is anathema to Silicon Valley. Before you can code it, you have to formalize it.) Here’s a sampling of the discrete activities — “jobs” might be the more accurate term — that Google lets you choose from in ordering up units of music:

Barbecuing
Being Romantic
Breaking Up
Coding
Cooking
Daydreaming
Drinking
Driving
Entering Beast Mode
Entertaining
Falling in Love
Family Time
Getting Cosy
Getting Married
Girls Night Out
Having Friends Over
Having Fun at Work
Pregaming
Raising Your Kids
Relaxing
Sleeping
Studying
Waking Up
Working
Working Out
Yoga

Once you accept that music is an input, a factor of production, you’ll naturally seek to minimize the cost and effort required to acquire the input. And since music is “context” rather than “core,” to borrow Geoffrey Moore’s famous categorization of business inputs, simple economics would dictate that you outsource the supply of music rather than invest personal resources — time, money, attention, passion — in supplying it yourself. You should, as Google suggests, look to a “team of music experts” to “craft” your musical inputs, “song by song,” so “you don’t have to.” To choose your own songs, or even to develop the personal taste in music required to choose your own songs, would be wasted labor, a distraction from the series of essential jobs that give structure and value to your days.

Art is an industrial lubricant that, by reducing the friction from activities, makes for more productive lives.

Image: Marilyn Peddle.

When triumphalists fail, they fail triumphantly


Progress turns everyone into a nostalgist sooner or later. You just have to wait for your own particular trigger to come along — the new thing that threatens the old thing you love.

David Weinberger has a new article in The Atlantic called “The Internet That Was (and Still Could Be).” It’s a tortured and ultimately dishonest piece that calls to mind some lines from a great old Buzzcocks tune:

About the future I only can reminisce
For what I’ve had is what I’ll never get
And although this may sound strange
My future and my past are presently disarranged
And I’m surfing on a wave of nostalgia
For an age yet to come.

Weinberger, coauthor of The Cluetrain Manifesto and author of Small Pieces Loosely Joined, has long argued that the “architecture” of the internet provides not only a metaphor but an actual working model for a more perfect society. The net was created with data-communication protocols that enabled “packets of information [to be moved] around without any central management or control,” and that technical architecture, he contends, not only facilitates but promotes democratic values such as “open access to information” and “the permission-free ability to read and to post.” Spanning civil and commercial interests, the net is “an open market of ideas and businesses” that provides “a framework for bottom-up collaboration among equals.”

More than that, though, Weinberger saw a deterministic power in the networking technology. The “open” technical protocols of “the One True Architecture,” as he puts it, were fated to become society’s protocols. He offered an “argument from architecture” positing that the technology’s political and social values would by necessity become the values of its users:

The Internet’s architecture reflects certain values.

Our use of the Net, based on that architecture, strongly encourages the adoption of those values.

Therefore, the Internet tends to transform us and our institutions in ways that reflect those values.

But the actual development of the web frustrated that utopian dream. The Triumphalists were, Weinberger now admits, naive and even delusional:

It is not enough for the Internet to succeed. It must succeed inevitably. Or so many of us Internet Triumphalists in the mid-1990s thought. For, if the march of the Internet’s new values were not unstoppable, then it would surely be stopped by our age-old inclinations and power structures. The Net, as we called it then, would become just another venue for the familiar patterns of marginalization, exclusion, oppression, and ignorance.

Now I’m afraid the argument for inevitability that kept me, and others, hopeful for 20 years no longer holds.

What the Triumphalists mistook for the one true architecture was merely a foundation, it turns out, and that foundation could support many different kinds of media structures with many different “values.” And so the net gave rise to, for instance, private content distribution networks, or CDNs, which, despite the underlying democratic protocols for information exchange, allowed big companies to distribute their informational wares with greater speed and reliability than the rest of us could afford. On the net, as elsewhere in society, some equals turned out to be more equal than others. “The architecture itself has been distorted by the needs of commercial content creators and their enabling pals,” Weinberger laments. “Paradise has been well and truly paved.” So much for inevitability.

And then there’s Facebook, the vast city-state, the virtual Singapore, that sprawls atop the net’s foundation like Smaug on the dwarves’ treasure.

Facebook is not ours. It’s theirs for us to use. … If the new prototype of the Internet is not the Blogosphere but Facebook, then the argument that’s maintained me for 20 years has fallen apart. If users don’t come into contact with the Internet’s architecture, that architecture can’t shape them. If they instead deal almost exclusively with Facebook, then the conclusion of the Argument from Architecture ought to be that Facebook is shaping the values of its users. And Facebook’s values are not much like the Net’s.

Weinberger, like the other Triumphalists, has invested much intellectual and emotional capital in the net over the years. And now he arrives at his moment of crisis: the dreaded moment when he has to write off all that investment and declare bankruptcy.

And yet.

And yet?

At the moment of accounting, Weinberger loses his intellectual nerve. Rather than offer a critique of the net as it is, he gives in to nostalgia for the net as it was and should be. He crawls back into the empty bank vault and searches in the dust for the bright, untarnished penny that will redeem everything — or at least buy him a little more time. “The Internet’s architecture still shows through many of the big corporate apps that are the Internet’s new pavement,” he writes. And: “The Internet’s architecture shines through the Facebook layer, as it does through virtually all Internet applications.” And: “Those lessons of the Internet’s architecture shine through the layers built on top of it.” The glimmers! The glimmers! And then the reversal: “The pavement is well penetrated by the Internet. Maybe ‘pavement’ isn’t an apt metaphor at all. I’m sorry I brought it up.”

Well, that’s convenient. In the asphalt’s mirror, the Triumphalist’s failure is revealed to have an aura of triumph. What’s disappointing here is not Weinberger’s gobbledygook; it’s the self-justifying nature of the gobbledygook. He’s covering something up, and what he’s covering up is his own role in subverting the values he cherishes. As Weinberger makes clear, his work, dating back to The Cluetrain Manifesto, has argued that the “openness” of the net’s protocols would inevitably dissolve traditional sources of economic and political power. Everyone on the net, whether an individual or a corporation, would inevitably act as equals. Rather than pursuing their own interests, they would act as the technology demanded. By suggesting that the net’s democratic future was a fait accompli, a technological necessity, Weinberger abetted the kind of commercialization of the web that he now rues. The Triumphalists served as the flagmen for the paving crew.

Given the opportunity to examine the role that technological triumphalism played in the development of the net, Weinberger instead resurrects that triumphalism in a ghostly form.  When he claims that the “values” of an open architecture remain alive, if latent, in the closed architecture of Facebook, he’s giving himself a free pass. He concludes his piece with what can only be described as a kind of cynical sunniness: “We can try to teach the young’uns how the Internet works and remind them of its glory so that it can be as if they were present at the Revelation.” If the Triumphalists hadn’t been blinded by the Revelation, perhaps things would have worked out differently.

Media takes command

Last Saturday, I had the pleasure of addressing the annual convention of the Media Ecology Association in Denver. The title of my talk was “Media Takes Command: An Inquiry into the Consequences of Automation.” Here is what I said, along with the slides that accompanied the remarks.

[Slide 1]

As I was trying to figure out what to talk about this afternoon, I found myself flipping through a copy of Neil Postman’s Amusing Ourselves to Death — the twentieth anniversary edition. I started thinking about one of the promotional blurbs printed at the front of the book. A reviewer for the Christian Science Monitor had written that Postman “starts where Marshall McLuhan left off, constructing his arguments with the resources of a scholar and the wit of a raconteur.”

[Slide 2]

I can’t lay claim to either the resources of a scholar or the wit of a raconteur, but at least I can follow Postman’s lead in starting where McLuhan left off. In fact, I’d like to start literally where he left off, with the final line of his most influential work, the 1964 book Understanding Media:

“Panic about automation as a threat of uniformity on a world scale is the projection into the future of mechanical standardization and specialism, which are now past.”

That’s not one of McLuhan’s better sentences. But it does include a couple of ideas that seem pertinent to our current situation.

[Slide 3]

First is the suggestion that electronic automation marks a break, an epochal break, from the history of industrialization and mechanization. It’s always dangerous to try to pin McLuhan down — he’s a slippery guy — but in this final sentence, as elsewhere in the book, he presents automation as a liberating force that will free us from the narrow roles imposed by the division and specialization of labor under industrial production. Thanks to our newfound ability to hand routine jobs over to electronic circuits and computer software, we’ll become whole beings again, fully and creatively engaged in life.

I think McLuhan is wrong here. And I’ll return to this idea a little later.

[Slide 4]

Second is McLuhan’s reference to “panic about automation.” That sounds prophetic at the moment. We are today, after all, in the midst of a panic about automation, with many smart people sounding dire warnings about robots stealing all our jobs or even supplanting us as the major-domos of Planet Earth. But McLuhan wasn’t being prophetic. The panic about automation he was talking about was the one going on all around him in 1964. Ever since 1950, when Norbert Wiener had published his foreboding book The Human Use of Human Beings, people had been growing more worried about the threat that computers and robots posed to workers’ jobs and even to human existence. A technological apocalypse seemed to be in the offing at the dawn of the sixties, and it was that very immediate fear that McLuhan was seeking to counter.

What really interests me, though, is the way that McLuhan presents automation as a form of media. The last chapter of Understanding Media is titled “Automation: Learning a Living,” and the theme of electronic automation runs throughout the book. Placing automation in the realm of media might seem odd, but it strikes me as appropriate and illuminating. It helps us understand both the progress of media and the likely consequences of computer automation.

[Slide 5]

To explain why that’s so, let me walk you through a brief history of media. I want to stress that this is a history of media, not the history of media. Media has plenty of histories, and mine is just one of them, and a fairly idiosyncratic one at that.

I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.

[Slide 6]

Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.

For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.

[Slide 7]

Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.

So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.

[Slide 8]

In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving. Google plans to remove the steering wheel from its adorable new autonomous car, along with the pedals and other controls. The map at that point will take over all of the work of the doer — not just the navigational tasks but the perceptual and motor tasks as well. Media takes command.

[Slide 9]

Over the course of a few months in 2010 and 2011, we got a dramatic lesson in the way media, in the form of software programming, is taking command. In 2010 came the announcement that Google had built a car that could drive itself successfully through highway traffic, a feat unimaginable just a few years earlier. Software, we saw, had gained the ability to navigate the physical world, the world of things. Then, in early 2011, IBM’s Watson beat two reigning champions in the intellectually challenging game show Jeopardy, another astonishing feat. Software, we saw, had also gained the ability to navigate the abstract world, the world of thoughts and symbols and words.

[Slide 10]

To understand how far computer programming can go in taking over human work, it helps to survey the different skills that people employ in getting stuff done. I would suggest that our skills can be divided into four categories. There are the motor skills we use in performing manual work. There are the analytical skills we use in diagnosing situations and phenomena and making judgments about them. There are the creative skills we use in doing artistic work, creating new objects and forms. And finally there are the communication or interpersonal skills we use in exerting influence over others, persuading them to act or think in certain ways. We combine these four skills, these four modes of doing, in various ways to carry out work, whether in wage-paying jobs or in our personal lives.

[Slide 11]

Software has made great progress in each of these modes of doing. As the self-driving cars demonstrate, robots are getting much better at acting autonomously in the world. Another good example here is the robotic lettuce picker. It was long thought that robots would have a tough time taking over the harvesting of tender fruits and vegetables. That work requires both a delicate touch and a subtle perceptual ability to distinguish between a crop and a weed. But, thanks to advances in sensors, machine vision, and related technologies, robots are now capable of picking lettuce and other fragile crops.

[Slide 12]

Computers are getting better at thinking about the world, too. Software programs can now read and interpret mammograms and other medical images, highlighting suspicious areas. And software is adept at digging evidence out of piles of documents in legal cases, work that once employed large numbers of junior lawyers and paralegals. In many other fields as well, we’re seeing data-processing machines perform analyses, make diagnoses, and recommend courses of action.

[Slide 13]

Robots and computers don’t make particularly good artists, and probably never will, but they can nevertheless perform a good deal of creative work. The Flux software program, originally developed in Google’s labs, aims at taking on the work of designing buildings and even doing city planning — the kinds of jobs that skilled architects and urban designers have long been engaged in. In journalism, computers are already churning out simple news stories, such as play-by-play recaps of sports matches. An artistic creation doesn’t have to be beautiful to be useful.

[Slide 14]

Finally, there’s the work of influence and persuasion. And here, too, we see media taking command. If you have a Fitbit or some other personal training app on your phone, you probably receive instruction, encouragement, and even little badges of achievement as you carry out your exercise routines. We also look to apps to remind us to eat healthy meals, take our medications and vitamins on schedule, and get appropriate amounts of sleep. In the recommendation engines used by companies like Amazon and Netflix to suggest books and movies, we see another form of automated persuasion. And dating software will suggest potential mates or at least enticing candidates for one-night stands.

I’m not implying that computers have rendered us obsolete, or that they’ll render us obsolete in the foreseeable future. Those fears are overblown. In each mode of doing, there are plenty of things that smart, experienced people can do that lie far beyond the capabilities of computers and their programmers. What I am suggesting is that software is making rapid and sometimes profound advances in performing difficult work in each of these areas. Even if a computer doesn’t steal your job, it’s going to change your job — and, along the way, your life.

[Slide 15]

Most of us misperceive the effects of automation. We assume that a particular task or job can be turned over to software without otherwise changing how we go about our work. Human-factors researchers refer to this view as the substitution myth. What they’ve discovered is that when a computer enters a process, even in a small role, it changes the process. It changes the roles people play, the way people go about their work, and even people’s attitudes and perceptions. Raja Parasuraman, a leading scholar of automation — he died, sadly, earlier this year — explained the substitution myth succinctly in an article: “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers.” That’s always been true, it’s worth noting, of media. The people who create the media rarely anticipate the effects the media will have.

[Slide 16]

In some cases, the unanticipated effects of automation can be disastrous. People who rely on computers to do work for them often fall victim to a condition known as automation complacency. They’re so confident that the computer will perform flawlessly that they stop paying attention. Then, when the technology fails or something unexpected happens, they screw up. In 1995, the crew of the cruise ship Royal Majesty watched passively for hours as a flawed computer navigation system sent the ship miles off course. It ultimately ran aground on a sandbar near Nantucket Island. In 2009, Air France 447 crashed into the Atlantic after its autopilot system shut off in bad weather and the pilots, taken by surprise, made fatal mistakes.

[Slide 17]

People using computers also fall victim to what’s called automation bias. They’re quick to place their complete faith in the data and instructions coming from their computers, even when the data and instructions are incomplete or flat-out wrong. In 2013, a nurse at a San Francisco pediatric hospital gave a boy 38 antibiotic pills because that was what the computer ordered — even though the patient should have been given only a single pill. (The boy survived the overdose, thankfully.)

We see automation bias in our own routine use of apps and online information sources. Think of the faith we place in the Google search engine, for instance. Google uses a certain set of criteria in ranking sources of information, placing particular emphasis on popularity, recency, and personal relevance. That works perfectly well in many cases, but the Google criteria are hardly the only ones that might be used in searching for information. We might often be better served by seeing a diversity of viewpoints, or by looking at sources that have stood the test of time, or finding information that runs counter to our personal preferences and prejudices. And yet, because Google serves up its answers with such speed and certainty, we default to Google and take its judgments as gospel.
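The point is easy to make concrete. Here is a toy ranking function; the documents, signals, and weights are all invented, and nothing about it reflects Google’s actual algorithm. Change the weights and the “best” answer changes with them.

```python
# A toy ranker. Each document carries five invented signal scores.
docs = [
    # (title, popularity, recency, personal_relevance, longevity, diversity)
    ("Viral hot take",      0.9, 0.9, 0.8, 0.1, 0.2),
    ("Classic reference",   0.5, 0.1, 0.4, 0.9, 0.5),
    ("Contrarian analysis", 0.3, 0.6, 0.2, 0.4, 0.9),
]

def rank(docs, weights):
    """Order documents by a weighted sum of their signals."""
    return sorted(docs,
                  key=lambda d: sum(w * s for w, s in zip(weights, d[1:])),
                  reverse=True)

google_ish = (0.4, 0.3, 0.3, 0.0, 0.0)   # popularity, recency, relevance
alternative = (0.0, 0.0, 0.0, 0.5, 0.5)  # test of time, diverse viewpoints

print([d[0] for d in rank(docs, google_ish)])   # the hot take wins
print([d[0] for d in rank(docs, alternative)])  # the classic wins
```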

[Slide 18]

These examples hint at a deeper truth — what researchers term the automation paradox. Automated systems often end up having the opposite effect from what was intended. Software designed to reduce human error may, by triggering complacency and bias, make human error more likely. Software designed to “free us” from routine chores may end up turning us into passive computer operators, diminishing our agency and autonomy and making our work less interesting and fulfilling.

The best way to understand the automation paradox is by looking at the Yerkes-Dodson Law. Back in the early years of the last century, the Harvard psychologist Robert Yerkes and his student John Dodson set out to understand the way animals learn new skills. They did an experiment in which they trained mice to go down a particular passageway in a box. They gave the mice a shock whenever the animals headed down the wrong passageway. The scientists assumed that the mice would learn more quickly as the intensity of the shock increased. But they found that wasn’t the case. The mice performed poorly both when they received a very light shock and when they received a very strong shock. They performed best when they received a moderate shock. Yerkes and Dodson concluded that, when it comes to mastering a difficult task, too little stimulus or arousal can make animals lethargic and complacent. They’re not inspired to learn. But too much stimulus or arousal can make animals so anxious, so stressed, that they panic and fail to learn anything at all. Animals perform and learn best when they are stimulated enough to be deeply engaged in their work but not so much that they feel overwhelmed.
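The law is usually drawn as an inverted U, with performance peaking at moderate arousal and the peak sliding toward lower arousal as the task gets harder. A minimal sketch of that shape, with made-up constants:

```python
import math

def performance(arousal, difficulty):
    """Toy inverted-U model of the Yerkes-Dodson law (constants invented).

    Performance peaks at a moderate arousal level; harder tasks shift
    the optimum toward lower arousal."""
    optimum = 0.7 - 0.3 * difficulty   # harder task -> lower optimal arousal
    width = 0.25                       # tolerance around the optimum
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Too little stimulus (complacency) and too much (panic) both hurt:
for a in (0.1, 0.5, 0.9):
    print(f"arousal={a:.1f} -> performance={performance(a, difficulty=0.5):.2f}")
```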

The Yerkes-Dodson Law applies to people as well as rodents. If we don’t have enough to do, our attention drifts and we tune out. If we’re overwhelmed, we panic, become discombobulated, and make mistakes. Unfortunately, computer automation, as it’s typically designed, tends to push us in the wrong directions. Because software is usually designed to relieve us of effort, to remove the “friction” from our work and our lives, we end up suffering from a lack of stimulation. Our engagement in what we’re doing fades, and we perform lazily and learn little. But when something goes wrong — when the technology fails, for instance — we’re suddenly pushed all the way over into the debilitating state of overstimulation. Not only do we have to re-engage with the work, and reorient ourselves to the situation, but we also have to consult computer screens and punch in data. We’re overwhelmed.

What we rarely experience is the state of optimum stimulus and engagement — the state in which we perform best, learn best, and feel best.

[Slide 20]

The dream that the technologies of automation will liberate us from work, the dream expressed by McLuhan, is a seductive one. Karl Marx, in the middle of the nineteenth century, wrote of how new production technologies could have “the wonderful power of shortening and fructifying human labor.” He foresaw a time when he would be able “to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind.” But Marx did not believe that the emancipatory potential of technology was inherent in the technology itself. The emancipatory power would be released only through political, economic, and social changes. Technology would always serve its master.

Since then, the dream of technological liberation has come to be disassociated from its political, economic, and social context. Technology itself, particularly in the form of automation, has come to be seen as our would-be liberator. Writers as diverse as Oscar Wilde and John Maynard Keynes, as well as McLuhan, have expressed a confidence in the power of machinery to return us to an Eden of leisure and plenty. Today, we routinely hear similar predictions coming from Silicon Valley. Last year, in a series of tweets, the entrepreneur and venture capitalist Marc Andreessen expressed his vision of the coming utopia of perfect automation:

“Human nature expresses itself fully, for the first time in history. Without physical need constraints, we will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure. Rather than nothing to do, we would have everything to do: curiosity, artistic and scientific creativity, new forms of status seeking. Imagine six, or 10, billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be.”

Flower power would appear to be alive and well in San Francisco, if only as a mask for other, bruter forms of power.

Andreessen is oblivious to the fact that he is expressing a Marxian dream, one in which technology alone renders history irrelevant. So what he gives us is a simplistic, self-serving fairy tale. If software takes over all labor, surely we humans will be lifted into a paradise of what McLuhan termed “self-employment and artistic autonomy.”

[Slide 21]

But the dream is not the reality. What we see emerging as media takes command is a cycle of dependency. As we come to rely on software to do things for us, we are relieved of the difficult work and pressing challenges necessary for skill-building. We experience deskilling, which makes us even more dependent on the software. And the cycle takes another turn. Because the media of software is invisible to us, a black box that both knows and does without disclosing its workings to us, we also become subject to manipulation as we become more dependent on the technology. Economic and cultural power accrues to the programmers, to the people who control the media.

[Slide 22]

Last year, I published a book on automation called The Glass Cage. When I came up with the title, I wasn’t thinking of Max Weber’s famous metaphor of the “iron cage” of industrialization. But more and more I see a continuity between the iron cage of mechanized industry and the glass cage of computer automation. In The Protestant Ethic and the “Spirit” of Capitalism, published in 1905, Weber argued that labor was once seen as a means to a higher calling — a way to rise above selfish concerns, serve God and, ultimately, gain entrance to heaven. But under industrial capitalism labor was drained of its spirituality. It became merely a way to manufacture earthly goods and to earn money to buy earthly goods. With automation, we see a similar reversal. What is viewed, romantically, as a path to liberation becomes instead a path to dependency. As is so often the case, popular culture provides us with the most powerful images of our situation: Charlie Chaplin cheerfully pushing and pulling a lever in Modern Times, and the plump and passive humans of the future staring into their colorful screens in Wall-E.

[Slide 24]

My first slide, you may remember, showed part of a photograph of a psychological researcher putting a mouse into a maze. Here is the entire photograph. But it’s not of a psychologist putting a mouse into a maze. Rather, it’s of Claude Shannon, the father of information science, putting a robotic mouse into a maze to demonstrate the efficacy of the automaton’s programming. When I first came across this picture, I found it amusing. But its shadows have come to haunt me. In this photograph — of an information scientist placing a robotic animal into a maze to test its programming — we may have found the perfect visual metaphor for our time.

Thank you.

The seconds are just packed


This post is the final installment in Rough Type’s Realtime Chronicles, which began here in 2009. An earlier version of this post appeared at Edge.org.

“Everything is going too fast and not fast enough,” laments Warren Oates, playing a decaying gearhead called G.T.O., in Monte Hellman’s 1971 masterpiece Two-Lane Blacktop. I can relate. The faster the clock spins, the more I feel as if I’m stuck in a slo-mo GIF loop.

It’s weird. We humans have been shown to have remarkably accurate internal clocks. Take away our wristwatches and our cell phones, dim the LEDs on all our appliances and gizmos, and we can still make pretty good estimates about the passage of minutes and hours. Our brains have adapted well to mechanical time-keeping devices. But our time-tracking faculty goes out of whack easily. Our perception of time is subjective; it changes, as we all know, with circumstances. When things are happening quickly around us, delays that would otherwise seem brief begin to feel interminable. Seconds stretch out. Minutes go on forever. “Our sense of time,” observed William James in his 1890 Principles of Psychology, “seems subject to the law of contrast.”

In a 2009 article in the Philosophical Transactions of the Royal Society, the French psychologists Sylvie Droit-Volet and Sandrine Gil described what they call the paradox of time: “although humans are able to accurately estimate time as if they possess a specific mechanism that allows them to measure time,” they wrote, “their representations of time are easily distorted by the context.” They describe how our sense of time changes with our emotional state. When we’re agitated or anxious, for instance, time seems to crawl; we lose patience. Our social milieu, too, influences the way we experience time. Studies suggest, write Droit-Volet and Gil, “that individuals match their time with that of others.” The “activity rhythm” of those around us alters our own perception of the passing of time.

“A compression of time characterizes the life of the century now closing,” wrote James Gleick in his 1999 book Faster. Such compression characterized, as well, the preceding century. “The dreamy quiet old days are over and gone forever,” lamented William Smith in 1886; “for men now live, think and work at express speed.” I suspect it would take no more than a minute of googling to discover a quotation from one of the ancients bemoaning the horrific speed of contemporary life. The past has always had the advantage of seeming, and probably being, less hurried than the present.

Still, something has changed in the last few years. Given what we know about the variability of our time sense, it seems clear that information and communication technologies would have a particularly strong effect on our perception of time. After all, those technologies often determine the pace of the events we experience, the speed with which we’re presented with new information and stimuli, and even the rhythm of our interactions with others. That’s been true for a long time — the newspaper, the telephone, and the television all quickened the speed of life — but the influence must be all the stronger now that we carry powerful and extraordinarily fast computers around with us all day long. Our gadgets train us to expect near-instantaneous responses to our actions, and we quickly get frustrated and annoyed at even brief delays.

I know from my own experience with computers that my perception of time has been changed by technology. If I go from using a fast computer or web connection to using even a slightly slower one, processes that take just a few seconds longer — waking the machine from sleep, launching an application, opening a web page — seem almost intolerably slow. Never before have I been so aware of, and annoyed by, the passage of mere seconds.

Research on web users makes it clear that this is a general phenomenon. Back in 2006, a famous study of online retailing found that a large percentage of shoppers would abandon a merchant’s site if its pages took four seconds or longer to load. In the few years since, the so-called Four Second Rule has been repealed and replaced by the Quarter of a Second Rule. Studies by companies like Google and Microsoft now find that it takes a delay of just 250 milliseconds in page loading for people to start abandoning a site. “Two hundred fifty milliseconds, either slower or faster, is close to the magic number now for competitive advantage on the Web,” a top Microsoft engineer said in 2012.  To put that into perspective, it takes about the same amount of time for you to blink an eye.
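That threshold is easy to test against. A quick sketch (the URL is a stand-in; any timing will vary with network, server, and the vagaries of the moment):

```python
import time
import urllib.request

THRESHOLD = 0.250  # seconds: the "Quarter of a Second Rule"

def load_time(url):
    """Time a single fetch of a page, request to last byte."""
    start = time.perf_counter()
    urllib.request.urlopen(url).read()
    return time.perf_counter() - start

elapsed = load_time("https://example.com")  # stand-in URL
verdict = "fine" if elapsed <= THRESHOLD else "losing visitors"
print(f"{elapsed * 1000:.0f} ms -> {verdict}")
```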

A recent study of online video viewing provides more evidence of how advances in media and networking technology reduce the patience of human beings. The researchers, affiliated with the networking firm Akamai Technologies, studied a huge database that documented 23 million video views by nearly seven million people. They found that people start abandoning a video in droves after a two-second delay. That won’t come as a surprise to anyone who has had to wait for a YouTube clip to begin after clicking the Start button. (The only surprise was that 10 percent of people were willing to wait a full fifty seconds for a video to begin. Almost a whole minute! I’m guessing they spent the time checking their Facebook feed.) More interesting is the study’s finding of a causal link between higher connection speeds and higher abandonment rates: the faster a viewer’s connection, the less delay the viewer was willing to tolerate before giving up.

Every time a network gets quicker, we become antsier. “Every millisecond matters,” says a Google engineer.

As we experience faster flows of information online, we become, in other words, less patient people. But impatience is not just a network effect. The phenomenon is amplified by the constant buzz of Facebook, Twitter, Snapchat, texting, and social networking in general. Society’s “activity rhythm” has never been so harried. Impatience is a contagion spread from gadget to gadget.

All of this has obvious importance to anyone involved in online media or in running data centers. But it also has implications for how all of us think, socialize, and in general live. If we assume that networks will continue to get faster — a pretty safe bet — then we can also conclude that we’ll become more and more impatient, more and more intolerant of even milliseconds of delay between action and response. As a result, we’ll be less likely to experience anything that requires us to wait, that doesn’t provide us with instant gratification. That has cultural as well as personal consequences. The greatest of works — in art, science, politics, whatever — tend to take time and patience both to create and to appreciate. The deepest experiences can’t be measured in fractions of seconds.

It’s not clear whether a technology-induced loss of patience persists even when we’re not using the technology. But I would hypothesize (based on what I see in myself and in others) that our sense of time is indeed changing in a lasting way. Digital technologies are training us to be more conscious of and more antagonistic toward delays of all sorts — and perhaps more intolerant of moments of time that pass without the arrival of new messages or other stimuli. Call it the patience deficit. Because our experience of time is so important to our experience of life, it strikes me that these kinds of technology- and media-induced changes in our perceptions can have particularly broad consequences. How long are you willing to wait for a new thing? How many empty seconds can you endure?

Image: detail of Edward Hopper’s “Four Lane Road.”

A reasonable part of the house


There was, in most homes, a small, boxy machine affixed to the wall, usually in the kitchen, and this machine was called a telephone. —Wikipedia, 2030

The home telephone had a good hundred-year run. Its days are numbered now. Its name, truncated to just phone, will live on, attached anachronistically to the diminutive general-purpose computers we carry around with us. (We really should have called them teles rather than phones.) But the object itself? It’s headed for history’s landfill, one layer up from the PalmPilot and the pager.

A remarkable thing about the telephone, in retrospect, is that it was a shared device. It was familial rather than personal. That entailed some complications.

In his monumental study of the forms of human interlocution, published posthumously in 1992 as the two-volume Lectures on Conversation, the sociologist Harvey Sacks explained how the arrival of the home telephone introduced a whole new role in conversation: that of the answerer. There was the caller, there was the called, and then there was the answerer, who might or might not also be the called. The caller would never know for sure who would answer the phone — it might be the called’s mom or dad rather than the called — and what kind of pre-conversational rigamarole might need to be endured, what pleasantries might need to be exchanged, what verbal gauntlet might need to be run, before the called would actually take the line. As for the answerer, he or she would not know, upon picking up the phone, whether he or she would also be playing the role of the called or would merely serve as the answerer, a kind of functionary or go-between. Each ringing of the telephone set off little waves of subterranean tension in the household: expectation, apprehension, maybe even some resentment.

“Hello?”

“Is Amy there?”

“Who’s calling?”

Sacks:

In non-professional settings by and large, it’s from among the possible calleds that answerers are selected; answerer being now a merely potential resting state, where you’ve made preparations for turning out to be the called right off when you say “Hello.” Answerers can become calleds, or they can become non-calleds-but-talked-to, or they can remain answerers, in the sense of not being talked to themselves, and also having what turn out to be obligations incumbent on being an answerer-not-called; obligations like getting the called or taking a message for the called.

As I said: complications. And also: an intimate entwining of familial interests.
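Sacks’s taxonomy is precise enough to diagram. Here, whimsically, is the answerer’s branching fate as a little Python sketch, with the role names lifted from the passage above and the transition logic my own simplification:

```python
from enum import Enum, auto

class Role(Enum):
    """Sacks's conversational roles around a shared household phone."""
    CALLER = auto()
    ANSWERER = auto()                  # picked up; fate not yet settled
    CALLED = auto()                    # the person the caller wanted
    NON_CALLED_BUT_TALKED_TO = auto()
    ANSWERER_NOT_CALLED = auto()       # must fetch the called or take a message

def resolve(answerer_is_called: bool, caller_talks_to_answerer: bool) -> Role:
    """The answerer's 'resting state' resolves after the opening 'Hello?'."""
    if answerer_is_called:
        return Role.CALLED
    if caller_talks_to_answerer:
        return Role.NON_CALLED_BUT_TALKED_TO
    return Role.ANSWERER_NOT_CALLED

print(resolve(answerer_is_called=False, caller_talks_to_answerer=False))
# -> Role.ANSWERER_NOT_CALLED: the least happy position in the exchange
```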

The answerer, upon realizing that he is not the called, Sacks continues, occupies “the least happy position” in the exchange.

Having done the picking up of the phone, they have been turned into someone at the mercy of the treatment that the caller will give them: What kind of jobs are they going to impose? Are they even going to talk to them? A lot of family world is implicated in the way those little things come out, an enormous amount of conflict turning on being always the answerer and never the called, and battles over who is to pick up the phone.

“I’ll get it!”

But what exactly will you get?

And so here we have this strange device, this technology, and it suddenly appears in the midst of the home, in the midst of the family, crouching there with all sorts of inscrutable purposes and intents. And yet — and this is the most remarkable thing of all — it doesn’t take long for it to be accommodated, to come to feel as though it’s a natural part of the home. Rather than remaking the world, Sacks argues, the telephone was subsumed into the world. The familial and social dynamics that the telephone revealed, with each ring, each uncradling of the receiver, are ones that were always already there.

Here’s an object introduced into the world 75 years ago. And it’s a technical thing which has a variety of aspects to it. It works only with voices, and because of economic considerations people share it … Now what happens is, like any other natural object, a culture secretes itself onto it in its well-shaped ways. It turns this technical apparatus which allows for conversation, into something in which the ways that conversation works are more or less brought to bear …

What we’re studying, then, is making the phone a reasonable part of the house. … We can read the world out of the phone conversation as well as we can read it out of anything else we’re doing. That’s a funny kind of thing, in which each new object becomes the occasion for seeing again what we can see anywhere; seeing people’s nastinesses or goodnesses and all the rest, when they do this initially technical job of talking over the phone. This technical apparatus is, then, being made at home with the rest of our world. And that’s a thing that’s routinely being done, and it’s the source for the failures of technocratic dreams that if only we introduced some fantastic new communication machine the world will be transformed. Where what happens is that the object is made at home in the world that has whatever organization it already has.

“Who is it?”

“It’s me.”

_____
Image: detail of a Bell System advertisement, circa 1960.