Atrapados


The Spanish edition of The Glass Cage, titled Atrapados: Cómo las Máquinas se Apoderan de Nuestras Vidas, is being published by Taurus on Wednesday. Today’s issue of El País includes a special feature on the book, with a review by Mercedes Cebrián, a profile, an excerpt, and a rejoinder by business professor Enrique Dans. You can also read the opening pages of the Spanish translation here, courtesy of the publisher. The translation is by Pedro Cifuentes.

I’m keeping a list of all forthcoming editions of the book here.


What algorithms want


Here’s another brief excerpt from my new essay, “The Manipulators: Facebook’s Social Engineering Project,” in the Los Angeles Review of Books:

We have had a hard time thinking clearly about companies like Google and Facebook because we have never before had to deal with companies like Google and Facebook. They are something new in the world, and they don’t fit neatly into our existing legal and cultural templates. Because they operate at such unimaginable magnitude, carrying out millions of informational transactions every second, we’ve tended to think of them as vast, faceless, dispassionate computers — as information-processing machines that exist outside the realm of human intention and control. That’s a misperception, and a dangerous one.

Modern computers and computer networks enable human judgment to be automated, to be exercised on a vast scale and at a breathtaking pace. But it’s still human judgment. Algorithms are constructed by people, and they reflect the interests, biases, and flaws of their makers. As Google’s founders themselves pointed out many years ago, an information aggregator operated for commercial gain will inevitably be compromised and should always be treated with suspicion. That is certainly true of a search engine that mediates our intellectual explorations; it is even more true of a social network that mediates our personal associations and conversations.

Because algorithms impose on us the interests and biases of others, we have not only a right but an obligation to carefully examine and, when appropriate, judiciously regulate those algorithms. We have a right and an obligation to understand how we, and our information, are being manipulated. To ignore that responsibility, or to shirk it because it raises hard problems, is to grant a small group of people — the kind of people who carried out the Facebook and OKCupid experiments — the power to play with us at their whim.

What algorithms want is what the people who write algorithms want. Appreciating that, and grappling with the implications, strikes me as one of the great challenges now lying before us.
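To make the point concrete, here’s a toy sketch (entirely hypothetical, not drawn from any real company’s code) of what a bit of feed-ranking logic might look like. Every constant in it is a human decision about what people should see:

```python
from dataclasses import dataclass

# A hypothetical feed item with a few precomputed signals.
@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's guess at how likely the viewer is to click
    predicted_comments: float  # model's guess at how likely the viewer is to comment
    hours_old: float
    is_sponsored: bool

def rank_score(post: Post) -> float:
    """Score a post for the feed. Every weight here is somebody's choice."""
    score = 3.0 * post.predicted_clicks           # someone decided clicks matter most
    score += 1.5 * post.predicted_comments        # someone decided arguing counts as "engagement"
    score += max(0.0, 1.0 - post.hours_old / 24)  # someone decided how quickly news should fade
    if post.is_sponsored:
        score *= 1.2                              # someone decided paid content deserves a lift
    return score

def build_feed(posts: list[Post]) -> list[Post]:
    # Sorting by this score is human judgment, automated and applied at scale.
    return sorted(posts, key=rank_score, reverse=True)
```

Change one of those numbers and you change what millions of people read, without any of them noticing that a choice was made.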

Image: “abacus” by Jenny Downing.


The Uncaged Tour


I like it when bands name their tours, like Dylan’s Why Do You Look At Me So Strangely Tour in 1992, or They Might Be Giants’ Don’t Tread on the Cut-up Snake World Tour, also in 1992, or Guided by Voices’ Insects of Rock Tour in 1994.* So I’ve decided to give a name to my upcoming book tour. It’s going to be called The Uncaged Tour. (Actually, the full, official title is The Uncaged Tour of the Americas 2014.)

Here are the dates so far, with links to more information:

Sept. 30: New York: The Glass Cage: Nicholas Carr in Conversation with Tim Wu (92nd St Y event)

Oct. 1: Washington, DC: Politics and Prose

Oct. 2: Cambridge, MA: Harvard Book Store

Oct. 6: Seattle: Town Hall Seattle

Oct. 8: Mountain View, CA: Authors at Google

Oct. 8: San Francisco: Commonwealth Club (with Andrew Leonard)

Oct. 14: Boulder, CO: Boulder Book Store

Oct. 16: Calgary: Wordfest

Oct. 17: Salt Lake City: Utah Book Festival

Oct. 23: Denver: Tattered Cover Book Store

Oct. 25: Boston: Boston Book Festival

Nov. 5: Boulder, CO: Chautauqua

I hope to see you at one of the events.

Now I’m off to design the official tour t-shirt.

_____

*The early nineties appear to have been the golden age for tour names.

Image by Sebastien Camelot.


The unbearable unlightness of AI


There is a continuing assumption — a faith, really — that at some future moment, perhaps only a decade or two away, perhaps even nearer than that, artificial intelligence will, by means yet unknown, achieve consciousness. A window will open on the computer’s black box, and light will stream in. The universe will take a new turn, as the inanimate becomes, for a second time, animate.

George Lakoff, the linguist who cowrote Metaphors We Live By, says it ain’t going to happen. In a fascinating article by Michael Chorost, Lakoff argues not only that language, being essentially metaphorical, is inextricably bound up in our bodily existence, but that cognition and consciousness, too, flow from our experience as creatures on the earth. Recent neuroscience experiments seem to back Lakoff up. They suggest that even our most abstract thoughts involve the mental simulation of physical experiences.

Writes Chorost:

In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

The evidence points to a new theory about the source of consciousness:

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there.

That, as Chorost notes, “raises problems for artificial intelligence”:

Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Then again, even the algorithmic thinking of computers has a physical substrate. There is no software without hardware. The problem is that computers, unlike animals, have no sensory experience of their own existence. They are, or at least appear to be, radically dualist in their operation, their software oblivious to their hardware. If a computer could think metaphorically, what kind of metaphors would it come up with? It’s hard to imagine they’d be anything recognizable to humans.

Image: “Camera Obscura Test 2” by Jon Lewis.


The manipulators


In “The Manipulators,” a new essay in the Los Angeles Review of Books, I explore two much-discussed documents published earlier this year: “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks” by Adam Kramer et al. and “Judgment in Case C-131/12: Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González” by the Court of Justice of the European Union. The latter, I argue, helps us make sense of the former. Both challenge us to think afresh about the past and the future of the net.

Here’s how the piece begins:

Since the launch of Netscape and Yahoo twenty years ago, the development of the internet has been a story of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram; in 2011, Snapchat; in 2012, Coursera; in 2013, Google Glass. It has been a carnival ride, and we, the public, have been the giddy passengers.

This year something changed. The big news about the net came not in the form of buzzy startups or cool gadgets but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

Read on.

Image: “Marionettes” by Mario De Carli.


Students and their devices


“The practical effects of my decision to allow technology use in class grew worse over time,” writes Clay Shirky in explaining why he’s decided to ban laptops, smartphones, and tablets from the classes he teaches at NYU. “The level of distraction in my classes seemed to grow, even though it was the same professor and largely the same set of topics, taught to a group of students selected using roughly the same criteria every year. The change seemed to correlate more with the rising ubiquity and utility of the devices themselves, rather than any change in me, the students, or the rest of the classroom encounter.”

When students put away their devices, Shirky continues, “it’s as if someone has let fresh air into the room. The conversation brightens, [and] there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.”

It’s been more than ten years now since Cornell’s Helene Hembrooke and Geri Gay published their famous “The Laptop and the Lecture” study, which documented how laptop use reduces students’ retention of material presented in class.* Since then, the evidence of the cognitive toll that distractions, interruptions, and multitasking inflict on memory and learning has only grown. I surveyed a lot of the evidence in my 2010 book The Shallows, and Shirky details several of the more recent studies. The evidence fits with what educational psychologists have long known: when a person’s cognitive load — the amount of information streaming into working memory — rises beyond a certain, quite low threshold, learning suffers. There’s nothing counterintuitive about this. We’ve all experienced cognitive overload and its debilitating effects.

Earlier this year, Dan Rockmore, a computer scientist at Dartmouth, wrote of his decision to ban laptops and other personal computing devices from his classes:

I banned laptops in the classroom after it became common practice to carry them to school. When I created my “electronic etiquette policy” (as I call it in my syllabus), I was acting on a gut feeling based on personal experience. I’d always figured that, for the kinds of computer-science and math classes that I generally teach, which can have a significant theoretical component, any advantage that might be gained by having a machine at the ready, or available for the primary goal of taking notes, was negligible at best. We still haven’t made it easy to type notation-laden sentences, so the potential benefits were low. Meanwhile, the temptation for distraction was high. I know that I have a hard time staying on task when the option to check out at any momentary lull is available; I assumed that this must be true for my students, as well.

As Rockmore followed the research on classroom technology use, he found that the empirical evidence backed up his instincts.

No one would call Shirky or Rockmore a Luddite or a nostalgist or a technophobe. They are thoughtful, analytical scholars and teachers who have great enthusiasm and respect for computers and the internet. So their critiques of classroom computer use are especially important. Shirky, in particular, has always had a strong inclination to leave decisions about computer and phone use up to his students. He wouldn’t have changed his mind without good reason.

Still, even as the evidence grows, there are many teachers who, for a variety of reasons, continue to oppose any restrictions on classroom computer use — and who sometimes criticize colleagues who do ban gadgets as blinkered or backward-looking. At this point, some of the pro-gadget arguments are starting to sound strained. Alexander Reid, an English professor at the University at Buffalo, draws a fairly silly parallel between computers and books:

Can we imagine a liberal arts degree where one of the goals is to graduate students who can work collaboratively with information/media technologies and networks? Of course we can. It’s called English. It’s just that the information/media technologies and networks take the form of books and other print media. Is a book a distraction? Of course. Ever try to talk to someone who is reading a book? What would you think of a student sitting in a classroom reading a magazine, doodling in a notebook or doing a crossword puzzle? However, we insist that students bring their books to class and strongly encourage them to write.

Others worry that putting limits on gadget use, even if justified pedagogically, should be rejected as paternalistic. Rebecca Schuman, who teaches at Pierre Laclede Honors College, makes this case:

My colleagues and I joke sometimes that we teach “13th-graders,” but really, if I confiscate laptops at the door, am I not creating a 13th-grade classroom? Despite their bottle-rocket butt pranks and their 10-foot beer bongs, college students are old enough to vote and go to war. They should be old enough to decide for themselves whether they want to pay attention in class — and to face the consequences if they do not.

A related point, also made by Schuman, is that teachers, not computers, are ultimately to blame if students get distracted in class:

You want students to close their machines and pay attention? Put them in a smaller seminar where their presence actually registers and matters, and be engaging enough — or, in my case, ask enough questions cold — that students aren’t tempted to stick their faces in their machines in the first place.

The problem with blaming the teacher, or the student, or the class format — the problem with treating the technology as a neutral object — is that it ignores the way software and social media are painstakingly designed to exploit the mind’s natural inclination toward distractedness. Shirky makes this point well, and I’ll quote him here at some length:

Laptops, tablets and phones — the devices on which the struggle between focus and distraction is played out daily — are making the problem progressively worse. Any designer of software as a service has an incentive to be as ingratiating as they can be, in order to compete with other such services. “Look what a good job I’m doing! Look how much value I’m delivering!”

This problem is especially acute with social media, because . . . social information is immediately and emotionally engaging. Both the form and the content of a Facebook update are almost irresistibly distracting, especially compared with the hard slog of coursework. (“Your former lover tagged a photo you are in” vs. “The Crimean War was the first conflict significantly affected by use of the telegraph.” Spot the difference?)

Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)

The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while rapping their knuckles with a ruler at unpredictable intervals.

A teacher has an obligation not only to teach but to create, or at least try to create, a classroom atmosphere that is conducive to the work of learning. Ignoring technology’s influence on that atmosphere doesn’t do students any favors. Here’s some of what Anne Curzan, a University of Michigan English professor, tells her students when she explains why she doesn’t want them to use computers in class:

Now I know that one could argue that it is your choice about whether you want to use this hour and 20 minutes to engage actively with the material at hand, or whether you would like to multitask. You’re not bothering anyone (one could argue) as you quietly do your email or check Facebook. Here’s the problem with that theory: From what we can tell, you are actually damaging the learning environment for others, even if you’re being quiet about it. A study published in 2013 found that not only did the multitasking student in a classroom do worse on a postclass test on the material, so did the peers who could see the computer. In other words, the off-task laptop use distracted not just the laptop user but also the group of students behind the laptop user. (And I get it, believe me. I was once in a lecture where the woman in front of me was shoe shopping, and I found myself thinking at one point, “No, not the pink ones!” I don’t remember all that much else about the lecture.)

Our attention is governed not just by our will but by our environment. That’s how we’re built.

I suspect that the debate over classroom computer use has become a perennial one, and that it will blossom anew every September. That’s good; it’s an issue that deserves continued attention. But there is a point on which perhaps everyone can agree, and from that point of agreement might emerge constructive action. It’s a point about design, and Shirky gets at it in his article:

The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose.

Computers and software can be designed in many different ways, and the design decisions will always reflect the interests of the designers (or their employers). Beyond the laptops-or-no-laptops debate lies a broader and more important discussion about how computer technology has come to be designed — and why.

*This post, like the other posts cited within it, concerns the use of personal computing devices in classes in which those devices have not been formally incorporated as teaching aids. There are, of course, plenty of classes in which computers are built into the teaching plan. It’s worth noting, though, that in the “Laptop and the Lecture” study, students who used their laptops to look at sites relevant to the class actually did even worse on tests of retention than students who used their computers to look at irrelevant sites.

Image: “Viewmaster” by Geof Wilson.


Speak, algorithm


Lost in yesterday’s coverage of the Apple Watch was a small software feature that, when demonstrated on the stage of the Flint Center, earned brief but vigorous applause from the audience. It was the watch’s ability to scan incoming messages and suggest possible responses. The Verge’s live-blogging crew were wowed.


The example Apple presented was pretty rudimentary. The incoming message included the question “Are you going with Love Shack or Wild Thing?” To which the watch suggested three possible answers: Love Shack, Wild Thing, Not Sure. Big whoop. In terms of natural language processing, that’s like Watson with a lobotomy.
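Just how rudimentary? A few lines of string handling would cover the demo. Here’s a hypothetical sketch of the logic (my guess, not Apple’s code): pull the two options out of an either/or question and tack on a hedge.

```python
def suggest_replies(message: str) -> list[str]:
    """Hypothetical reconstruction of the watch demo: if the message poses an
    either/or question, offer each option back, plus a noncommittal third."""
    question = message.rstrip("?").strip()
    if " or " in question:
        left, right = question.split(" or ", 1)
        # Crude heuristic: treat the last two words before "or" as the first option.
        first = " ".join(left.split()[-2:])
        return [first, right.strip(), "Not sure"]
    return ["Yes", "No", "Not sure"]  # generic fallback

print(suggest_replies("Are you going with Love Shack or Wild Thing?"))
# ['Love Shack', 'Wild Thing', 'Not sure']
```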

But it was just a taste of a much more sophisticated “predictive text” capability, called QuickType, that Apple has built into the latest version of its smartphone operating system. “iOS 8 predicts what you’ll say next,” explains the company. “No matter whom you’re saying it to.”

Now you can write entire sentences with a few taps. Because as you type, you’ll see choices of words or phrases you’d probably type next, based on your past conversations and writing style. iOS 8 takes into account the casual style you might use in messages and the more formal language you probably use in Mail. It also adjusts based on the person you’re communicating with, because your choice of words is likely more laid back with your spouse than with your boss.
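Stripped to its essentials, that description implies a per-recipient language model: keep track of which words you tend to type after which, separately for each person you write to, and surface the likeliest continuations. Here is a deliberately bare-bones sketch of the idea (my own illustration; Apple’s QuickType is certainly far more elaborate):

```python
from collections import defaultdict, Counter

class ToyPredictor:
    """Recipient-aware next-word suggestion from bigram counts. Purely
    illustrative: the suggestions are a statistical echo of what you
    have already said to that particular person."""

    def __init__(self):
        # contact -> previous word -> counts of the words that followed it
        self.bigrams = defaultdict(lambda: defaultdict(Counter))

    def learn(self, contact: str, message: str) -> None:
        words = message.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[contact][prev][nxt] += 1

    def suggest(self, contact: str, last_word: str, k: int = 3) -> list[str]:
        counts = self.bigrams[contact][last_word.lower()]
        return [word for word, _ in counts.most_common(k)]

p = ToyPredictor()
p.learn("spouse", "running late be home soon love you")
p.learn("spouse", "be home soon promise")
p.learn("boss", "I will be available at noon")

print(p.suggest("spouse", "be"))  # ['home']
print(p.suggest("boss", "be"))    # ['available']
```

The conditioning is the whole trick: the same prompt yields different words for different recipients, because the model is simply playing your own history back to you.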

Now, this may all turn out to be a clumsy parlor trick. If the system isn’t adept at mimicking a user’s writing style and matching it to the intended recipient — if it doesn’t nail both text and context — the predictive-text feature will rarely be used, except for purposes of making “stupid robot” jokes. But if the feature actually turns out to be “good enough” — or if our conversational expectations devolve to a point where the automated messages feel acceptable — then it will mark a breakthrough in the automation of communication and even thought. We’ll begin allowing our computers to speak for us.

Is that a development to be welcomed? It seems more than a little weird that Apple’s developers would get excited about an algorithm that will converse with your spouse on your behalf, channeling the “laid back” tone you deploy for conjugal chitchat. The programmers seem to assume that romantic partners are desperate to trade intimacy for efficiency. I suppose the next step is to get Frederick Winslow Taylor to stand beside the marriage bed with a stopwatch and a clipboard. “Three caresses would have been sufficient, ma’am.”

In The Glass Cage, I argue that we’ve embraced a wrong-headed and ultimately destructive approach to automating human activities, and in Apple’s let-the-software-do-the-talking feature we see a particularly disquieting manifestation of the reigning design ethic. Technical qualities are given precedence over human qualities, and human qualities come to be seen as dispensable.

When we allow ourselves to be guided by predictive algorithms, in acting, speaking, or thinking, we inevitably become more predictable ourselves, as Rochester Institute of Technology philosopher Evan Selinger pointed out in discussing the Apple system:

Predicting you is predicting a predictable you. Which is itself subtracting from your autonomy. And it’s encouraging you to be predictable, to be a facsimile of yourself. So it’s a prediction and a nudge at the same moment.

It’s a slippery slope, and it becomes more slippery with each nudge. Predicted responses begin to replace responses, simply because it’s a little more efficient to simulate a response — a thought, a sentence, a gesture — than to undertake the small amount of work necessary to have a response. And then that small amount of work begins to seem like a lot of work — like correcting your own typos rather than allowing the spellchecker to do it. And then, as original responses become rarer, the predictions become predictions based on earlier predictions. Where does the algorithm end and the self begin?
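You can watch that loop close in miniature with a toy model (again my own illustration, not a claim about any real system): train a crude next-word predictor on a person’s messages, have the person accept the top suggestion every time, and feed each accepted reply back in as new training data. The suggestions quickly collapse onto a single sentence, and every acceptance makes that sentence more entrenched:

```python
from collections import defaultdict, Counter

bigrams = defaultdict(Counter)  # previous word -> counts of following words

def learn(message: str) -> None:
    words = message.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def most_likely_reply(start: str, length: int = 5) -> str:
    word, out = start, [start]
    for _ in range(length):
        if not bigrams[word]:
            break
        word = bigrams[word].most_common(1)[0][0]  # always accept the top suggestion
        out.append(word)
    return " ".join(out)

# A person with some genuine variety in how they write.
for m in ["i am running late tonight",
          "i am stuck at work tonight",
          "i am heading home early today"]:
    learn(m)

# Now every reply is the machine's suggestion, and every accepted
# suggestion flows back in as more training data.
for step in range(5):
    reply = most_likely_reply("i")
    learn(reply)
    share = bigrams["am"]["running"] / sum(bigrams["am"].values())
    print(f"{step}: {reply!r}  (top suggestion now {share:.0%} of the model)")
```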

And if we assume that the people we’re exchanging messages with are also using the predictive-text program to formulate their responses . . . well, then things get really strange. Everything becomes a parlor trick.

Image: Thomas Edison’s talking doll.
