Listen to me

Kiewit Computation Center 1966

I’ll be giving a couple of talks in the Northeast next week. Both are free and open to the public. On Monday, I’ll be in Hanover, New Hampshire, to give a lecture at my alma mater, Dartmouth College. (Details here.) And on Wednesday I’ll be in Buffalo to speak at Medaille College. (Details here.) If you’re in either area, please come by.

That fuzzy picture up there, incidentally, is of Dartmouth’s Kiewit Computation Center, where, in the late 1970s, I first touched a digital computer — more precisely, a terminal connected to the school’s mainframe time-sharing system. Kiewit was torn down in 2000.

Photo: Dartmouth College.

Frederick Taylor and the quantified self

The faithful gathered in San Francisco earlier this month for the Quantified Self 2013 Global Conference, an annual conclave of “self-trackers and tool-makers.” Founded by long-time technology writers Gary Wolf and Kevin Kelly, the Quantified Self, or QS, movement aims to bring the new apparatus of big data to the old pursuit of self-actualization, using sensors, wearables, apps, and the cloud to monitor and optimize bodily functions and design a more perfect self. “Instead of interrogating their inner worlds through talking and writing,” Wolf explains, trackers are seeking “self-knowledge through numbers.” He continues: “Behind the allure of the quantified self is a guess that many of our problems come from simply lacking the instruments to understand who we are.”

“Allure” may be an overstatement. A small band of enthusiasts is gung-ho for QS. But the masses, so far, have shown little interest in self-tracking, rarely going beyond the basic pedometer level of monitoring fitness regimes. Like meticulous calorie counting, self-tracking is hard to sustain. It gets boring quickly, and the numbers are more likely to breed anxiety than contentment. There’s a reason the body keeps its vagaries out of the conscious mind.

But, as management researcher H. James Wilson reports in the Wall Street Journal, there is one area where self-tracking is beginning to be pursued with vigor: business operations. Some companies are outfitting employees with wearable computers and other self-tracking gadgets in order to “gather subtle data about how they move and act — and then use that information to help them do their jobs better.” There is, for example, the Hitachi Business Microscope, which office workers wear on a lanyard around their neck. “The device is packed with sensors that monitor things like how workers move and speak, as well as environmental factors like light and temperature. So, it can track where workers travel in an office, and recognize whom they’re talking to by communicating with other people’s badges. It can also measure how well they’re talking to them — by recording things like how often they make hand gestures and nod, and the energy level in their voice.” Other companies are developing Google Glass-style “smart glasses” to accomplish similar things.
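To make the flavor of this data concrete, here is a minimal, purely hypothetical sketch. Hitachi publishes no public API for the Business Microscope, so every field and function name below is invented; the point is only to show what a badge's interaction records, and a crude "who talks to whom" roll-up of them, might look like.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class BadgeReading:
    """One hypothetical sample from a wearable sensor badge (all fields invented)."""
    wearer_id: str        # badge worn by this person
    partner_id: str       # nearby badge, i.e. the presumed conversation partner
    seconds: int          # duration of the face-to-face interaction
    gestures: int         # hand gestures counted during the interaction
    voice_energy: float   # rough 0.0-1.0 proxy for vocal animation

def interaction_summary(readings):
    """Total face-to-face seconds per pair of badges: a crude 'who talks to whom' map."""
    totals = defaultdict(int)
    for r in readings:
        pair = tuple(sorted((r.wearer_id, r.partner_id)))
        totals[pair] += r.seconds
    return dict(totals)

readings = [
    BadgeReading("A17", "B22", 540, 12, 0.71),
    BadgeReading("B22", "A17", 560, 3, 0.42),
    BadgeReading("A17", "C05", 120, 1, 0.15),
]
print(interaction_summary(readings))
# {('A17', 'B22'): 1100, ('A17', 'C05'): 120}
```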

A little more than a century ago, Frederick Winslow Taylor introduced “scientific management” to American factories. By meticulously tracking and measuring the physical movements of manufacturing workers as they went through their tasks, Taylor counseled, companies could determine the “one best way” to do any job and then enforce that protocol on all other workers. Through the systematic collection of data, industry could be optimized, operated as a perfectly calibrated machine. “In the past the man has been first,” declared Taylor; “in the future the system must be first.”

The goals and mechanics of the Quantified Self movement, when applied in business settings, not only bring back the ethic of Taylorism but extend its reach into the white-collar workforce, carrying the dream of perfect optimization into the intimate realm of personal affiliation and conversation among colleagues. Taylor’s system, remember, also aided the mechanization of factory work: once you had turned the jobs of human workers into numbers, it turned out, you also had a good template for replacing those workers with machines. The new Taylorism seems likely to accomplish something similar for knowledge work, providing the specs for software applications that can take over the jobs of even highly educated professionals.

One can imagine other ways QS might be productively applied in the commercial realm. Automobile insurers already give policyholders an incentive to install tracking sensors in their cars to monitor their driving habits. It seems only logical for health and life insurers to provide similar incentives to policyholders who wear body sensors. Premiums could then be adjusted based on, say, a person’s cholesterol or blood sugar levels, or food intake, or even the areas they travel in or the people they associate with — anything that correlates with risk of illness or death. (Rough Type readers will remember that this is a goal that Yahoo director Max Levchin is actively pursuing.)
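As a back-of-the-envelope illustration of how such an adjustment might be computed (this is no insurer's actual model; every threshold and weight below is invented), consider:

```python
def adjusted_premium(base_premium, ldl_cholesterol, fasting_glucose, daily_steps):
    """Hypothetical sketch: scale a base monthly premium by crude risk multipliers
    derived from tracked biometrics. Every cutoff and weight here is invented."""
    multiplier = 1.0
    if ldl_cholesterol > 160:    # mg/dL, loosely "high"
        multiplier += 0.15
    if fasting_glucose > 125:    # mg/dL, loosely diabetic range
        multiplier += 0.20
    if daily_steps < 4000:       # sedentary
        multiplier += 0.10
    elif daily_steps > 10000:    # active, so a small discount
        multiplier -= 0.05
    return round(base_premium * multiplier, 2)

# A sedentary policyholder with high LDL pays 25% more than the base rate.
print(adjusted_premium(200.00, ldl_cholesterol=175, fasting_glucose=98, daily_steps=3200))  # 250.0
```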

The transformation of QS from tool of liberation to tool of control follows a well-established pattern in the recent history of networked computers. Back in the mainframe age, computers were essentially control mechanisms, aimed at monitoring and enforcing rules on people and processes. In the PC era, computers also came to be used to liberate people, freeing them from corporate oversight and control. The tension between central control and personal liberation continues to define the application of computer power. We originally thought that the internet would tilt the balance further away from control and toward liberation. That now seems to be a misjudgment. By extending the collection of data to intimate spheres of personal activity and then centralizing the storage and processing of that data, the net actually seems to be shifting the balance back toward the control function. The system takes precedence.

Automation and the decay of talent

The new wave of computer automation has provoked much concern and debate about job losses and the future of employment. Less discussed is how the computer is shaping the way people work and act, both on the job and in their personal lives. As the computer becomes a universal tool for getting things done, what happens to the diverse talents that people used to develop by engaging directly with the world in all its intricacy and complexity? In “The Great Forgetting,” an essay in the new issue of The Atlantic (the online version of the article bears the title “All Can Be Lost”), I look at some of the unexpected consequences of computer automation, particularly the way that software, as currently designed, tends to steal from us the opportunity to develop rich, distinctive, and hard-earned skills. Psychologists, human-factors experts, and other researchers are discovering that the price we pay for the ease and convenience of automation is a narrowing of human possibility.

Here’s an excerpt:

Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments — complacency and bias — that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.

Examples of complacency and bias have been well documented in high-risk situations — on flight decks and battlefields, in factory control rooms — but recent studies suggest that the problems can bedevil anyone working with a computer. Many radiologists today use analytical software to highlight suspicious areas on mammograms. Usually, the highlights aid in the discovery of disease. But they can also have the opposite effect. Biased by the software’s suggestions, radiologists may give cursory attention to the areas of an image that haven’t been highlighted, sometimes overlooking an early-stage tumor. Most of us have experienced complacency when at a computer. In using e-mail or word-processing software, we become less proficient proofreaders when we know that a spell-checker is at work.

The way computers can weaken awareness and attentiveness points to a deeper problem. Automation turns us from actors into observers. That shift may make our lives easier, but it can also inhibit the development of expertise. Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind — when they generate them — than when they simply read them. The effect, it has since become clear, influences learning in many different circumstances. When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity. It assembles a rich store of information and organizes that knowledge in a way that allows you to tap into it instantaneously.

Whether it’s Serena Williams on a tennis court or Magnus Carlsen at a chessboard, an expert can spot patterns, evaluate signals, and react to changing circumstances with speed and precision that can seem uncanny. What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.

This is one of the themes that I’ll be exploring in my next book, The Glass Cage: Automation and Us.

Photo: NASA.

The pleasures of merely circulating

Rob Horning, one of the most thoughtful writers on the online experience, considers how his writing and thinking have changed as he has shifted his time from blogging to tweeting:

Now, when I hit upon an article that starts me thinking, I excerpt a sentence of it on Twitter and start firing off aphoristic tweets. I don’t worry about ordering my thoughts into a sequential argument, or revising my first impressions much. I don’t try to build toward a conclusion; rather I try to draw conclusions that seem to require no build-up, no particular justification to be superficially plausible. And then, more often than not, I will monitor what sort of reaction these statements get to assess their accuracy, their resonance. At best, my process of deliberation and further reading on the subject gets replaced by immediate Twitter conversations with other people. At worst, tweeting pre-empts my doing any further thinking, since I am satisfied with merely charting the response.

One of his recent tweets reads: “making things circulate seems far more important than letting things ‘settle’ within me.” Frisson and dolor, a Catherine wheel of vanity, servitude to vanishing ink: the Twitter intellectual is a strange new species.

The future’s so bright I gotta wear Glass

“It’s coming,” said Google Xer Mary Lou Jepsen last week. “I don’t think it’s stoppable.” She’s referring, of course, to Glass, Google’s much anticipated head-mountable. “I’ve thought for many years that a laptop is an extension of my mind,” she continued. “Why not have it closer to my mind?” Hmm. Next time I see Spock, I’m going to have to ask him if that’s logical. In the meantime, I will sleep with my Air under my pillow, just in case.

“You become addicted to the speed of it,” Jepsen confessed. Like all junkies, she craves more. Glass is just the “Model T” of wearables. In the churning bowels of the company’s secret lab, she let on, new and even zippier generations of mind-melding computers are already taking shape. “I’m now running a super-secret, stealth part of Google X that I can’t tell you anything about today. I’m really sorry. Maybe next year. Probably next year.” Jepsen said that she and her team are only sleeping three hours a night. That’s how important their work is.

Michael Sacasas sees Jepsen’s words as yet another manifestation of what he terms the Borg Complex — the quasi-religious belief that computer technology is an inexorable force carrying us to a better world. Only losers would be so foolish as to resist. Earlier this year, Eric Schmidt gave the starkest expression of this view. Also speaking of Glass, he said: “Our goal is to make the world better. We’ll take the criticism along the way, but criticisms are inevitably from people who are afraid of change or who have not figured out that there will be an adaptation of society to it.” Inevitably. Schmidt, in his benighted fashion, wants to imbue adaptation, a fundamentally amoral process, with a moral glow. To adapt is to improve, history and biology be damned.

There is no greater arrogance than the arrogance of those who assume their intentions justify their actions.

I believe in yesterday

The following review of Retromania by Simon Reynolds originally appeared in The New Republic in 2011.

“Who wants yesterday’s papers?” sang Mick Jagger in 1967. “Who wants yesterday’s girl?” The answer, in the Swinging 60s, was obvious: “Nobody in the world.” That was then. Now we seem to want nothing more than to read yesterday’s papers and carry on with yesterday’s girl. Popular culture has become obsessed with the past — with recycling it, rehashing it, replaying it. Though we live in a fast-forward age, we can’t take our finger off the rewind button.

Nowhere is the past’s grip so tight as in the world of music, as the rock critic Simon Reynolds meticulously documents in Retromania. Over the last two decades, he argues, the “exploratory impulse” that once powered pop music forward has shifted its focus from Now to Then. Fans and musicians alike have turned into archeologists. The evidence is everywhere. There are the reunion tours and the reissues, the box sets and the tribute albums. There are the R&B museums, the rock halls of fame, the punk libraries. There are the collectors of vinyl and cassettes and — God help us — eight-tracks. There are the remixes, the mash-ups, the samples. There are the “curated” playlists. When pop shakes its moneymaker today, what rises is the dust of the archive.

Nostalgia is nothing new. It has been a refrain of art and literature at least since Homer set Odysseus on Calypso’s island and had him yearn to turn back time. And popular music has always had a strong revivalist streak, particularly in Reynolds’s native Britain. But retromania is not just about nostalgia. It goes deeper than the tie-dyed dreams of Baby Boomers or the gray-flecked mohawks of Gen X punks. Whereas nostalgia is rooted in a sense of the past as past, retromania stems from a sense of the past as present. Yesterday’s music, in all its forms, has become the atmosphere of contemporary culture. We live, Reynolds remarks, in “a simultaneity of pop time that abolishes history while nibbling away at the present’s own sense of itself as an era with a distinct identity and feel.”

One reason is the sheer quantity of pop music that has accumulated over the past half century. Whether it is rock, funk, country, or electronica, we have heard it all before. Even the edgiest musicians have little choice but to produce pastiche. Greatly amplifying the effect is the recent shift to producing and distributing songs as digital files. When kids had to fork out cash for records or CDs, they had to make hard choices about what they listened to and what they let pass by. Usually, they would choose the new over the old, which served to keep the past at bay. Now, thanks to freely traded MP3s and all-you-can-eat music services such as Spotify, there is no need to make choices. Pretty much any song ever recorded is just a click away. With the economic barrier removed, the old floods in, swamping the new.

Reynolds argues that the glut of tunes has not just changed what we listen to; it has also changed how we listen. The rapt fan who knew every hook, lyric, and lead by heart has been replaced by the fickle dabbler who cannot stop hitting Next. Reynolds presents himself as a case in point, and his experience will sound familiar to anyone with a hard drive packed with music files. He was initially “captivated” by the ability to use a computer to navigate an ocean of tunes. But in short order he found himself more interested in “the mechanism” than the music: “Soon I was listening to just the first fifteen seconds of every track; then, not listening at all.” The logical culmination, he writes, “would have been for me to remove the headphones and just look at the track display.”

Given a choice between more and less, we all choose more, even if it means a loss of sensory and emotional engagement. Though we don’t like to admit it, the digital music revolution has merely confirmed what we have always known: we cherish what is scarce, and what is abundant we view as disposable. Reynolds quotes another music writer, Karla Starr: “I find myself getting bored even in the middle of songs simply because I can.”

As all time is compressed into the present moment, our recycling becomes ever more compulsive. We begin to plunder not just bygone eras but also the immediate past. Over the course of the last decade, writes Reynolds, “the interval between something happening and its being revisited seemed to shrink insidiously.” Not only did we have 1960s revivals and 70s revivals and 80s revivals, but we even began to see revivals of musical fashions from the 90s, such as shoegaze and Britpop. It sometimes seems that things go out of fashion so quickly these days because we cannot wait for them to come back into fashion. Displaying enthusiasm for something new is socially risky, particularly in an ironical time. It is safer to wait for it to come around again, preferably bearing the “vintage” label.

For musicians themselves, the danger is that their art becomes disconnected from the present — “timeless” in a bad sense. The eras of greatest ferment and creativity in popular music, such as the mid-60s and the late 70s, were times of social discontent, when the young rejected the past and its stifling traditions. Providing the soundtrack for rebellion, rock musicians felt compelled to slay their fathers rather than pay tribute to them. Even if their lyrics were about getting laid or getting high — as they frequently were — their songs were filled with political force. Those not busy being born, as Dylan put it shortly after taking an axe to his folkie roots, are busy dying.

Now, youth culture is largely apolitical, and pop’s soundtrack is just a soundtrack. Those not busy being born are busy listening to their iPods. Whether it’s Fleet Foxes or Friendly Fires, Black Keys or Beach House, today’s bands are less likely to battle the past than to luxuriate in it. This is not to say they aren’t good bands. As Reynolds is careful to note, there is plenty of fine pop music being made today, in an ear-boggling array of styles. But drained of its subversive energies, none of it matters much. It just streams by.

Retromania is an important and often compelling work, but it is also a sprawling one. Its aesthetic is more Sandinista! than “Hey Ya!” But Reynolds is sharp, and he knows his stuff. Even when his narrative gets lost in the details, the details remain interesting. (I didn’t know, for instance, that the rave scene of the early 90s had its origins in the trad-jazz fad that preceded Beatlemania in England.) Reynolds might also be accused of being something of a retromaniac himself. After all, in worrying about the enervating influence of the past, he echoes the complaints of earlier cultural critics. “Our age is retrospective,” grumbled Emerson in 1836. “Why should we grope among the dry bones of the past, or put the living generation into masquerade out of its faded wardrobe?” Longing for a less nostalgic time is itself a form of nostalgia.

But Reynolds makes a convincing case that today’s retromania is different in degree and in kind from anything we’ve experienced before. And it is not just an affliction of the mainstream. It has also warped the perspective of the avant-garde, dulling culture’s cutting edge. It’s one thing for old folks to look backwards. It’s another thing — and a far more lamentable one — for young people to feed on the past. Somebody needs to figure out a new way to smash a guitar.

Photo from Eight Track Museum.

Ambient Reality

People are forever buttonholing me on the street and saying, “Nick, what comes after realtime?” It’s a good question, and I happen to know the answer: Ambient Reality. Ambient Reality is the ultimate disruption, as it alters the actual fabric of the universe. We begin living in the prenow. Things happen before they happen. “Between the desire / And the spasm,” wrote T. S. Eliot, “Falls the Shadow.” In Ambient Reality, the Shadow goes away. Spasm precedes desire. In fact, it’s all spasm. We enter what I call Uninterrupted Spasm State, or USS.

In “How the Internet of Things Changes Everything,” a new and seemingly machine-written article in Foreign Affairs, two McKinsey consultants write of “the interplay” between “the most disruptive technologies of the coming decade: the mobile Internet and the Internet of Things.” The “mobile-ready Internet of Things,” as they term it, will have “a profound, widespread, and transformative impact on how we live and work.” For instance, “by combining a digital camera in a wearable device with image-recognition software, a shopper can automatically be fed comparative pricing information based on the image of a product captured by the camera.” That’s something to look forward to, but the McKinseyites are missing the big picture. They underestimate the profundity, the ubiquity, and the transformativeness of the coming disruption. In Ambient Reality, there is no such thing as “a shopper.” Indeed, the concept of “shopping” becomes anachronistic. Goods are delivered before the urge to buy them manifests itself in the conscious mind. Demand is ambient, as are pricing comparisons. They become streams in the cloud.

eBay strategist John Sheldon gets closer to the truth when he describes, in a new Wired piece, the concept of “ambient commerce”:

Imagine setting up a rule in Nike+, he says, to have the app order you a new pair of shoes after you run 300 miles. … Now consider an even more advanced scenario. A shirt has a sensor that detects moisture. And you find yourself stuck out in the rain without an umbrella. Not too many minutes after the downpour starts, a car pulls up alongside you. A courier steps out and hands you an umbrella — or possibly a rain jacket, depending on what rules you set up ahead of time for such a situation.

I ask you: Are there no bounds to the dreams of our innovators?

Comments Wired’s Marcus Wohlsen, “Though it might be hard to believe, the logistics of delivering that umbrella are likely more complex than the math behind detecting the water.” That is indeed hard to believe.

But even these scenarios fail to capture the full power of Ambient Reality. They assume some agency is required on the part of the consumer. One has to “set up a rule” about the lifespan of one’s sneakers. One has to pre-program a choice between umbrella and rain jacket. In Ambient Reality, no such agency is required. Personal decisions are made prenow, by communications among software-infused things. The sensors in your feet and in your sneakers are in constant communication not only with each other but with the cloud. When a new pair of sneakers is required, the new pair is automatically printed on your 3-D printer at home. The style of the sneakers is chosen algorithmically based on your past behavior as well as contemporaneous neural monitoring. Choice is ambient. As for that “courier” who “steps out and hands you an umbrella” after the onset of precipitation, that’s just plain retrograde. The required consumer good will be delivered before the rain starts by an unmanned drone delivery aircraft. The idea that humans will be involved in delivery chores is ridiculous. In Ambient Reality, human effort will be restricted to self-actualization—in other words, ambient consumption. That’s the essence of USS.
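For the literal-minded, here is roughly what an “ambient” trigger of the Nike+ sort reduces to in code. This is a toy sketch: the rule names, thresholds, and products are all invented, and nothing like this API is actually exposed by Nike+ or eBay.

```python
from dataclasses import dataclass

@dataclass
class AmbientRule:
    """A toy 'ambient commerce' trigger: when a tracked metric crosses its
    threshold, an order fires without the owner lifting a finger."""
    metric: str
    threshold: float
    product: str
    triggered: bool = False

    def evaluate(self, readings):
        value = readings.get(self.metric, 0.0)
        if not self.triggered and value >= self.threshold:
            self.triggered = True
            return f"ORDER: {self.product} (because {self.metric} = {value})"
        return None

rules = [
    AmbientRule(metric="miles_run", threshold=300, product="running shoes, size 10"),
    AmbientRule(metric="shirt_moisture", threshold=0.8, product="umbrella, drone-delivered"),
]

sensor_readings = {"miles_run": 302.4, "shirt_moisture": 0.15}
for rule in rules:
    order = rule.evaluate(sensor_readings)
    if order:
        print(order)
# ORDER: running shoes, size 10 (because miles_run = 302.4)
```

Note the one design decision that Ambient Reality promises to abolish: someone, at some point, still had to write the rule.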

I hardly need mention that, once the shower has passed, the drone will retrieve the umbrella in order to deliver it to another person facing an imminent rain event. All assets will be shared to optimize utilization. Think how rarely you use your umbrella today: that’s a sign of how broken society is.

We are on the verge, says Wohlsen, of “a utopian future in which running out of toilet paper at the wrong time will never, ever happen again.” That’s very true, but the never-run-out-of-toilet-paper utopia is actually a transitional utopia. In the ultimate utopia of Ambient Reality, there will be no need for toilet paper. But I’ll leave that for a future post.

This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here. A full listing of posts can be found here.