The manipulators

In “The Manipulators,” a new essay in the Los Angeles Review of Books, I explore two much-discussed documents published earlier this year: “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks” by Adam Kramer et al. and “Judgment in Case C-131/12: Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González” by the Court of Justice of the European Union. The latter, I argue, helps us make sense of the former. Both challenge us to think afresh about the past and the future of the net.

Here’s how the piece begins:

Since the launch of Netscape and Yahoo twenty years ago, the development of the internet has been a story of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram; in 2011, Snapchat; in 2012, Coursera; in 2013, Google Glass. It has been a carnival ride, and we, the public, have been the giddy passengers.

This year something changed. The big news about the net came not in the form of buzzy startups or cool gadgets but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

Read on.

Image: “Marionettes” by Mario De Carli.

Students and their devices

“The practical effects of my decision to allow technology use in class grew worse over time,” writes Clay Shirky in explaining why he’s decided to ban laptops, smartphones, and tablets from the classes he teaches at NYU. “The level of distraction in my classes seemed to grow, even though it was the same professor and largely the same set of topics, taught to a group of students selected using roughly the same criteria every year. The change seemed to correlate more with the rising ubiquity and utility of the devices themselves, rather than any change in me, the students, or the rest of the classroom encounter.”

When students put away their devices, Shirky continues, “it’s as if someone has let fresh air into the room. The conversation brightens, [and] there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.”

It’s been more than ten years now since Cornell’s Helene Hembrooke and Geri Gay published their famous “The Laptop and the Lecture” study, which documented how laptop use reduces students’ retention of material presented in class.* Since then, the evidence of the cognitive toll that distractions, interruptions, and multitasking inflict on memory and learning has only grown. I surveyed a lot of the evidence in my 2010 book The Shallows, and Shirky details several of the more recent studies. The evidence fits with what educational psychologists have long known: when a person’s cognitive load — the amount of information streaming into working memory — rises beyond a certain, quite low threshold, learning suffers. There’s nothing counterintuitive about this. We’ve all experienced cognitive overload and its debilitating effects.

Earlier this year, Dan Rockmore, a computer scientist at Dartmouth, wrote of his decision to ban laptops and other personal computing devices from his classes:

I banned laptops in the classroom after it became common practice to carry them to school. When I created my “electronic etiquette policy” (as I call it in my syllabus), I was acting on a gut feeling based on personal experience. I’d always figured that, for the kinds of computer-science and math classes that I generally teach, which can have a significant theoretical component, any advantage that might be gained by having a machine at the ready, or available for the primary goal of taking notes, was negligible at best. We still haven’t made it easy to type notation-laden sentences, so the potential benefits were low. Meanwhile, the temptation for distraction was high. I know that I have a hard time staying on task when the option to check out at any momentary lull is available; I assumed that this must be true for my students, as well.

As Rockmore followed the research on classroom technology use, he found that the empirical evidence backed up his instincts.

No one would call Shirky or Rockmore a Luddite or a nostalgist or a technophobe. They are thoughtful, analytical scholars and teachers who have great enthusiasm and respect for computers and the internet. So their critiques of classroom computer use are especially important. Shirky, in particular, has always had a strong inclination to leave decisions about computer and phone use up to his students. He wouldn’t have changed his mind without good reason.

Still, even as the evidence grows, there are many teachers who, for a variety of reasons, continue to oppose any restrictions on classroom computer use — and who sometimes criticize colleagues who do ban gadgets as blinkered or backward-looking. At this point, some of the pro-gadget arguments are starting to sound strained. Alexander Reid, an English professor at the University at Buffalo, draws a fairly silly parallel between computers and books:

Can we imagine a liberal arts degree where one of the goals is to graduate students who can work collaboratively with information/media technologies and networks? Of course we can. It’s called English. It’s just that the information/media technologies and networks take the form of books and other print media. Is a book a distraction? Of course. Ever try to talk to someone who is reading a book? What would you think of a student sitting in a classroom reading a magazine, doodling in a notebook or doing a crossword puzzle? However, we insist that students bring their books to class and strongly encourage them to write.

Others worry that putting limits on gadget use, even if justified pedagogically, should be rejected as paternalistic. Rebecca Schuman, who teaches at Pierre Laclede Honors College, makes this case:

My colleagues and I joke sometimes that we teach “13th-graders,” but really, if I confiscate laptops at the door, am I not creating a 13th-grade classroom? Despite their bottle-rocket butt pranks and their 10-foot beer bongs, college students are old enough to vote and go to war. They should be old enough to decide for themselves whether they want to pay attention in class — and to face the consequences if they do not.

A related point, also made by Schuman, is that teachers, not computers, are ultimately to blame if students get distracted in class:

You want students to close their machines and pay attention? Put them in a smaller seminar where their presence actually registers and matters, and be engaging enough — or, in my case, ask enough questions cold — that students aren’t tempted to stick their faces in their machines in the first place.

The problem with blaming the teacher, or the student, or the class format — the problem with treating the technology as a neutral object — is that it ignores the way software and social media are painstakingly designed to exploit the mind’s natural inclination toward distractedness. Shirky makes this point well, and I’ll quote him here at some length:

Laptops, tablets and phones — the devices on which the struggle between focus and distraction is played out daily — are making the problem progressively worse. Any designer of software as a service has an incentive to be as ingratiating as they can be, in order to compete with other such services. “Look what a good job I’m doing! Look how much value I’m delivering!”

This problem is especially acute with social media, because . . . social information is immediately and emotionally engaging. Both the form and the content of a Facebook update are almost irresistibly distracting, especially compared with the hard slog of coursework. (“Your former lover tagged a photo you are in” vs. “The Crimean War was the first conflict significantly affected by use of the telegraph.” Spot the difference?)

Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)

The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while rapping their knuckles with a ruler at unpredictable intervals.

A teacher has an obligation not only to teach but to create, or at least try to create, a classroom atmosphere that is conducive to the work of learning. Ignoring technology’s influence on that atmosphere doesn’t do students any favors. Here’s some of what Anne Curzan, a University of Michigan English professor, tells her students when she explains why she doesn’t want them to use computers in class:

Now I know that one could argue that it is your choice about whether you want to use this hour and 20 minutes to engage actively with the material at hand, or whether you would like to multitask. You’re not bothering anyone (one could argue) as you quietly do your email or check Facebook. Here’s the problem with that theory: From what we can tell, you are actually damaging the learning environment for others, even if you’re being quiet about it. A study published in 2013 found that not only did the multitasking student in a classroom do worse on a postclass test on the material, so did the peers who could see the computer. In other words, the off-task laptop use distracted not just the laptop user but also the group of students behind the laptop user. (And I get it, believe me. I was once in a lecture where the woman in front of me was shoe shopping, and I found myself thinking at one point, “No, not the pink ones!” I don’t remember all that much else about the lecture.)

Our attention is governed not just by our will but by our environment. That’s how we’re built.

I suspect that the debate over classroom computer use has become a perennial one and that it will blossom anew every September. That’s good, as it’s an issue that deserves ongoing debate. But there is a point on which perhaps everyone can agree, and from that point of agreement might emerge constructive action. It’s a point about design, and Shirky gets at it in his article:

The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose.

Computers and software can be designed in many different ways, and the design decisions will always reflect the interests of the designers (or their employers). Beyond the laptops-or-no-laptops debate lies a broader and more important discussion about how computer technology has come to be designed — and why.

*This post, and the other posts cited within it, concern the use of personal computing devices in classes in which those devices have not been formally incorporated as teaching aids. There are, of course, plenty of classes in which computers are built into the teaching plan. It’s worth pointing out, though, that in the “Laptop and the Lecture” study, students who used their laptops to look at sites relevant to the class actually did even worse on tests of retention than did students who used their computers to look at irrelevant sites.

Image: “Viewmaster” by Geof Wilson.

Speak, algorithm

Lost in yesterday’s coverage of the Apple Watch was a small software feature that, when demonstrated on the stage of the Flint Center, earned brief but vigorous applause from the audience. It was the watch’s ability to scan incoming messages and suggest possible responses. The Verge’s live-blogging crew were wowed.

The example Apple presented was pretty rudimentary. The incoming message included the question “Are you going with Love Shack or Wild Thing?” To which the watch suggested three possible answers: Love Shack, Wild Thing, Not Sure. Big whoop. In terms of natural language processing, that’s like Watson with a lobotomy.
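
Apple didn’t say how the suggestions are generated, but a reply picker at roughly this level of sophistication needs little more than a pattern match. Here is a minimal sketch in Python; the function name, the regular expression, and the canned fallback replies are my own inventions, not Apple’s:

```python
import re

# Toy heuristic: if the message poses a title-cased "X or Y?" question,
# offer X, Y, and a noncommittal option; otherwise fall back to canned replies.
OPTION_PATTERN = re.compile(
    r"([A-Z][\w']*(?: [A-Z][\w']*)*)\s+or\s+([A-Z][\w']*(?: [A-Z][\w']*)*)\s*\?"
)

def suggest_replies(message: str) -> list[str]:
    match = OPTION_PATTERN.search(message)
    if match:
        return [match.group(1), match.group(2), "Not Sure"]
    return ["OK", "Sounds good", "Can't talk now"]

print(suggest_replies("Are you going with Love Shack or Wild Thing?"))
# ['Love Shack', 'Wild Thing', 'Not Sure']
```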

But it was just a taste of a much more sophisticated “predictive text” capability, called QuickType, that Apple has built into the latest version of its smartphone operating system. “iOS 8 predicts what you’ll say next,” explains the company. “No matter whom you’re saying it to.”

Now you can write entire sentences with a few taps. Because as you type, you’ll see choices of words or phrases you’d probably type next, based on your past conversations and writing style. iOS 8 takes into account the casual style you might use in messages and the more formal language you probably use in Mail. It also adjusts based on the person you’re communicating with, because your choice of words is likely more laid back with your spouse than with your boss.
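
Apple hasn’t published QuickType’s internals, and the production system is surely far more elaborate, but the basic idea the company describes, suggesting likely next words learned from your past messages and adjusted for the recipient, can be sketched as a simple per-contact bigram model. Everything below, from the class name to the sample messages, is illustrative rather than Apple’s actual method:

```python
from collections import defaultdict, Counter

class NextWordSuggester:
    """Counts which word tends to follow which in past messages to each
    contact, then suggests the most frequent followers of the latest word."""

    def __init__(self):
        # counts[recipient][previous_word] -> Counter of following words
        self.counts = defaultdict(lambda: defaultdict(Counter))

    def learn(self, recipient: str, message: str) -> None:
        words = message.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[recipient][prev][nxt] += 1

    def suggest(self, recipient: str, text_so_far: str, k: int = 3) -> list[str]:
        words = text_so_far.lower().split()
        if not words:
            return []
        following = self.counts[recipient][words[-1]]
        return [word for word, _ in following.most_common(k)]

suggester = NextWordSuggester()
suggester.learn("spouse", "running late be home soon")
suggester.learn("spouse", "running late again sorry")
suggester.learn("boss", "running late for the meeting")
print(suggester.suggest("spouse", "running late"))  # ['be', 'again']
print(suggester.suggest("boss", "running late"))    # ['for']
```

Even a toy like this captures the trade Apple is touting: the more the software learns from what you have already written, and to whom, the more of your next sentence it can write for you.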

Now, this may all turn out to be a clumsy parlor trick. If the system isn’t adept at mimicking a user’s writing style and matching it to the intended recipient — if it doesn’t nail both text and context — the predictive-text feature will rarely be used, except for purposes of making “stupid robot” jokes. But if the feature actually turns out to be “good enough” — or if our conversational expectations devolve to a point where the automated messages feel acceptable — then it will mark a breakthrough in the automation of communication and even thought. We’ll begin allowing our computers to speak for us.

Is that a development to be welcomed? It seems more than a little weird that Apple’s developers would get excited about an algorithm that will converse with your spouse on your behalf, channeling the “laid back” tone you deploy for conjugal chitchat. The programmers seem to assume that romantic partners are desperate to trade intimacy for efficiency. I suppose the next step is to get Frederick Winslow Taylor to stand beside the marriage bed with a stopwatch and a clipboard. “Three caresses would have been sufficient, ma’am.”

In The Glass Cage, I argue that we’ve embraced a wrong-headed and ultimately destructive approach to automating human activities, and in Apple’s let-the-software-do-the-talking feature we see a particularly disquieting manifestation of the reigning design ethic. Technical qualities are given precedence over human qualities, and human qualities come to be seen as dispensable.

When we allow ourselves to be guided by predictive algorithms, in acting, speaking, or thinking, we inevitably become more predictable ourselves, as Rochester Institute of Technology philosopher Evan Selinger pointed out in discussing the Apple system:

Predicting you is predicting a predictable you. Which is itself subtracting from your autonomy. And it’s encouraging you to be predictable, to be a facsimile of yourself. So it’s a prediction and a nudge at the same moment.

It’s a slippery slope, and it becomes more slippery with each nudge. Predicted responses begin to replace responses, simply because it’s a little more efficient to simulate a response — a thought, a sentence, a gesture — than to undertake the small amount of work necessary to have a response. And then that small amount of work begins to seem like a lot of work — like correcting your own typos rather than allowing the spellchecker to do it. And then, as original responses become rarer, the predictions become predictions based on earlier predictions. Where does the algorithm end and the self begin?

And if we assume that the people we’re exchanging messages with are also using the predictive-text program to formulate their responses . . . well, then things get really strange. Everything becomes a parlor trick.

Image: Thomas Edison’s talking doll.

Apple’s small big thing

Over at the Time site, I have a short commentary on the Apple Watch. It begins:

Many of us already feel as if we’re handcuffed to our computers. With its new smart watch, unveiled today in California, Apple is hoping to turn that figure of speech into a literal truth.

Apple has a lot riding on the diminutive gadget. It’s the first major piece of hardware the company has rolled out since the iPad made its debut four years ago. It’s the first new product to be designed under the purview of fledgling CEO Tim Cook. And, when it goes on sale early next year, it will be Apple’s first entry in a much-hyped product category — wearable computers — that has so far fallen short of expectations. Jocks and geeks seem eager to strap computers onto their bodies. The rest of us have yet to be convinced. …

Read on.

(Apple’s live stream of its event today was, by the way, a true comedy of errors. It seemed like the company was methodically going down a checklist of all the possible ways you can screw up a stream, from running audio feeds in different languages simultaneously to bouncing around in time in a way that would have made Billy Pilgrim dizzy.)

Image: Darren Birgenheier.

There will always be spare change

“There will always be change,” wrote Thomas Friedman in his 2012 column “Average Is Over.” “But the one thing we know for sure is that with each advance in globalization and the I.T. revolution, the best jobs will require workers to have more and better education to make themselves above average.”

Economics professor and blogger Tyler Cowen borrowed Friedman’s title for his most recent book, Average Is Over: Powering America Beyond the Age of the Great Stagnation, but his emphasis, in surveying the opportunities opening up in today’s labor scene, is not exactly on more and better education. “I see marketing as the seminal sector for our future economy,” Cowen writes:

We can expect a lot of job growth in personal services, even if those jobs do not rely very directly on computer power. The more that the high earners pull in, the more people will compete to serve them, sometimes for high wages and sometimes for low wages. This will mean maids, chauffeurs, and gardeners for the high earners, but a lot of the service jobs won’t fall under the service category as traditionally construed. They can be thought of as “creating the customer experience.” Have you ever walked into a restaurant and been greeted by a friendly hostess, and noticed she was very attractive? Have you ever had an assistant bring you coffee before a meeting, touching you on the shoulder before leaving the cup? Have you gone to negotiate a major business deal and been greeted by a mass of smiles and offers of future friendship and collaboration? All of those people are working to make you feel better. They are working at marketing.

I would just like to interject here that I am feeling better.

It sounds a little silly, but making high earners feel better in just about every part of their lives will be a major source of job growth in the future. At some point it is hard to sell more physical stuff to high earners, yet there is usually just a bit more room to make them feel better. Better about the world. Better about themselves. Better about what they have achieved.

Welcome to the mendicancy economy.

Cowen uses a happy metaphor to sketch out the contours of interpersonal competition in this new world:

The more that earnings rise at the upper end of the distribution, the more competition there will be for the attention of the high earners and thus the greater the importance of marketing. If you imagine two wealthy billionaire peers sitting down for lunch, their demands for the attention of the other tend to be roughly equal. After all, each always has a billion dollars (or more) to spend and they don’t need to court each other for favors so much. There is a (rough) parity of attention offered and received. Of course, some billionaires are more important than others, or one billionaire may court another for the purpose of becoming a mega-billionaire, but let’s set that aside.

Compare it to one of those same billionaires riding in a limousine, with open windows, through the streets of Calcutta. A lot of beggars will be competing for the attention of that billionaire, and yet probably the billionaire won’t much need the attention of the beggars. The billionaire may feel overwhelmed by all of these demands, and yet each of these beggars will be trying to find some way to break through and capture but a moment of the billionaire’s attention. This in short is what the contemporary world is like, except the billionaire is the broader class of high earners and the beggars are wealthier than in India.

That’s an awesome analogy, really felicitous, but it has one big flaw. What billionaire is going to drive through Calcutta in a limo with the windows open? I’m sorry, but that’s just nuts.

UPDATE (9/6): Cowen offers an even sunnier speculation today: “It is an interesting question how much that will prove to be the equilibrium more generally, namely the genetic superiority of slaves because they can reap more external investment. After all, capital is more productive today than in times past, so evolution might now produce more slaves.”

Remember back when we were beggars? Those were good times.

Image: “undesirables” by shannon.

Big Internet

We talk about Big Oil and Big Pharma and Big Ag. Maybe it’s time we started talking about Big Internet.

That thought crossed my mind after reading a couple of recent posts. One was Scott Rosenberg’s piece about a renaissance in the ancient art of blogging. I hadn’t even realized that blogs were a thing again, but Rosenberg delivers the evidence. Jason Kottke, too, says that blogging is once again the geist in our zeit. Welcome back, world.

The other piece was Alan Jacobs’s goodbye to Twitter. Jacobs writes of a growing sense of disillusionment and disappointment with the ubiquitous microblogging platform:

As long as I’ve been on Twitter (I started in March 2007) people have been complaining about Twitter. But recently things have changed. The complaints have increased in frequency and intensity, and now are coming more often from especially thoughtful and constructive users of the platform. There is an air of defeat about these complaints now, an almost palpable giving-up. For many of the really smart people on Twitter, it’s over. Not in the sense that they’ll quit using it altogether; but some of what was best about Twitter — primarily the experience of discovery — is now pretty clearly a thing of the past.

“Big Twitter was great — for a while,” says Jacobs. “But now it’s over, and it’s time to move on.”

These trends, if they are actually trends, seem related. I sense that they both stem from a sense of exhaustion with what I’m calling Big Internet. By Big Internet, I mean the platform- and plantation-based internet, the one centered around giants like Google and Facebook and Twitter and Amazon and Apple. Maybe these companies were insurgents at one point, but now they’re fat and bland and obsessed with expanding or defending their empires. They’ve become the Henry VIIIs of the web. And it’s starting to feel a little gross to be in their presence.

So, yeah, I’m down with this retro movement. Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.

But, please, don’t bring back the term “blogosphere.”

Image: still from Lost.

The Glass Cage: early reviews

I’ve been encouraged by the comments on The Glass Cage that have been coming in from early readers and reviewers. Here’s a roundup:

“Nicholas Carr is among the most lucid, thoughtful, and necessary thinkers alive. He’s also terrific company. The Glass Cage should be required reading for everyone with a phone.” —Jonathan Safran Foer, author of Everything Is Illuminated and Extremely Loud and Incredibly Close

“Written with restrained objectivity, The Glass Cage is nevertheless as scary as any sci-fi thriller could be. It forces readers to reflect on what they already suspect, but don’t want to admit, about how technology is shaping our lives. Like it or not, we are now responsible for the future of this negligible planet circling Sol; books like this one are needed until we develop an appropriate operating manual.” —Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience; professor of psychology and management, Claremont Graduate University

“Nick Carr is our most informed, intelligent critic of technology. Since we are going to automate everything, Carr persuades us that we should do it wisely — with mindful automation. Carr’s human-centric technological future is one you might actually want to live in.” —Kevin Kelly, author of What Technology Wants

“Carr brilliantly and scrupulously explores all the psychological and economic angles of our increasingly problematic reliance on machinery and microchips to manage almost every aspect of our lives. A must-read for software engineers and technology experts in all corners of industry as well as everyone who finds himself or herself increasingly dependent on and addicted to gadgets.” —Booklist (starred review)

“Artificial intelligence has that name for a reason — it isn’t natural, it isn’t human. As Nicholas Carr argues so gracefully and convincingly in this important, insightful book, it is time for people to regain the art of thinking. It is time to invent a world where machines are subservient to the needs and wishes of humanity.” —Donald Norman, author of Things that Make Us Smart and Design of Everyday Things; director of the University of California San Diego Design Lab

“Most of us, myself included, are too busy tweeting to notice our march into technological de-humanization. Nicholas Carr applies the brakes for us (and our self-driving cars). Smart and concise, this book will change the way you think about the growing automation of our lives.” —Gary Shteyngart, author of Super Sad True Love Story and Little Failure

“Nick Carr is the rare thinker who understands that technological progress is both essential and worrying. The Glass Cage is a call for technology that complements our human capabilities, rather than replacing them.” —Clay Shirky, author of Here Comes Everybody and Cognitive Surplus

“I read it without putting it down. I think it is a very necessary book, that we ignore at our peril.” —Iain McGilchrist, author of The Master and His Emissary

“This sweeping analysis from journalist Carr outlines the various implications of automation in our everyday lives. He asks whether automating technology is always beneficial, or if we are unwittingly rendering ourselves superfluous and ineffectual, and cites examples where both might be the case, such as fatal plane crashes attributed to an overreliance on autopilot; the deskilling of architects and doctors caused by occupational software; and the adverse mental effects of GPS. … The book manages to be engaging, informative, and elicits much needed reflection on the philosophical and ethical implications of over-reliance on automation. Carr deftly incorporates hard research and historical developments with philosophy and prose to depict how technology is changing the way we live our lives and the world we find ourselves in.” —Publishers Weekly

“Important.” —Kirkus

The U.S. edition of The Glass Cage will be published on September 29; other editions will be published simultaneously or in the coming months. I’ll be out talking about the book throughout October and will be posting a schedule of events soon.
