
Uncaged, in Seattle and San Francisco


Fulfilling its Manifest Destiny, the Uncaged Tour has arrived at the western edge of the continent. I will be speaking about The Glass Cage at Town Hall Seattle tonight at 7:30 (details). And then, on Wednesday at 6:30 pm, I’ll be at the Commonwealth Club in San Francisco for a conversation with Salon’s Andrew Leonard (details). If you’re around, please swing by.

And here are a few choice quotes from recent Glass Cage reviews:

Hiawatha Bray, Boston Globe:

[Carr] suggests that automated systems should require humans to participate in vital activities. An aircraft autopilot might require the pilot to manually change the plane’s course, altitude, and speed; a medical diagnostic program might run regular quizzes to teach radiologists to spot unusual cancers. And once self-driving vehicles arrive, we might require their human owners to take the wheel every now and then.

Of course, this kind of automation with a human face would be more costly and time-consuming, making it less likely that businesses will race to embrace it. More likely, we’ll have to tolerate a world of ever smarter machines, operated by ever less capable humans. Not a cheerful prospect, but we can’t say we weren’t warned.

Michelle Scheraga, Associated Press:

Without resorting to scare tactics or sermonizing on the dangers of overautomation, [Carr's] book details in careful, measured ways both the promise of mechanization and its drawbacks since the earliest days of the Industrial Revolution, drawing connections between the blue-collar worker operating factory equipment and the white-collar worker inputting data in a computer, both using machines meant to shoulder most of the heavy physical or mental labor.

His historical, inclusive approach makes an issue most of those already deeply steeped in technology won’t find at all surprising — that what we’re losing might outweigh what we gain by relying on computers — a stimulating, absorbing read.

Elisabeth Donnelly, Flavorwire:

In his new book, The Glass Cage: Automation and Us, Carr provides an elegantly written history of what role robotics have played in our past, and the possible role that they may play in our future. In a world where there’s a lot of technology cheerleaders, Carr is one of our most valuable skeptics. [...]

Carr shows how maps, and our concept of them, have changed with the GPS. Where once we had to read an area, to see where we were in relation to the world, to figure it out with our heads, GPS satellite technology has made the world shrink to our perceptions of it. These technologically adept maps start with where we are and tell us, simply, how to get to the next place. It reduces our cognitive abilities with its ease. “The more you think about it, the more you realize that to never confront the possibility of getting lost is to live in a state of perpetual dislocation,” he writes. Carr pulls off this incredible synthesis, over and over, starting with something like maps and what technology’s done with them, bringing history, literature, culture, economics, and science, all together to reveal a window into who we are and what we’re becoming.

James Janega, Chicago Tribune:

The Glass Cage is a worthy antidote to the relentlessly hopeful futurism of Google, TED Talks and Walt Disney, and just as statistically probable as a world in which devoted digital assistants will book our anniversary dinners, route us around traffic jams, and send the perfect Mother’s Day floral arrangement on our behalf.

Jacob Axelrad, Christian Science Monitor:

Will smart phones, tablets, and applications imprison us in a “frictionless world”? Do devices and programs dull our senses? Are we – as tech critics sometimes suggest – outsourcing our brains?

These questions are posed by Nicholas Carr in The Glass Cage: Automation and Us, a thoughtful extension of some of the questions raised in his 2011 Pulitzer Prize finalist, The Shallows: What the Internet Is Doing to Our Brains. The Glass Cage is smart, insightful, and at times funny, as it takes readers through a series of anecdotes, academic research, and current and historical events to paint a portrait of a world readily handing itself over to intelligent devices.

Mark Bauerlein, The Weekly Standard:

There is a long tradition of automation zeal, and Carr provides revealing examples, including Oscar Wilde’s prediction that “while Humanity will be amusing itself, or enjoying cultivated leisure . . . or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work.”

Nicholas Carr’s warnings run against that pleasing vision, which puts him in a minority of culture-watchers. [...] The future he paints is a dicey one: We may soon reach a point at which automation—in hazardous settings from cockpits to battle zones—allows mistakes to happen less frequently but more catastrophically, because humans are unprepared to resume control. The technophile’s solution is to augment the automation, thereby decreasing the very toil that keeps humans sharp. Better to think more about the human subject, Carr advises.

And, finally, here’s a report on the hair-raising joyride I took through the streets of D.C. with NPR’s Robert Siegel during last week’s East Coast segment of the Uncaged Tour.



Siri, how should I live?


Longreads is today featuring an excerpt from The Glass Cage. It’s a piece taken from the second to last chapter, “Your Inner Drone,” which examines the ethical and political implications of the spread of automation from factory production to everyday life.

It begins:

Back in the 1990s, just as the dot-com bubble was beginning to inflate, there was much excited talk about “ubiquitous computing.” Soon, pundits assured us, microchips would be everywhere — embedded in factory machinery and warehouse shelving, affixed to the walls of offices and homes, installed in consumer goods and stitched into clothing, even swimming around in our bodies. Equipped with sensors and transceivers, the tiny computers would measure every variable imaginable, from metal fatigue to soil temperature to blood sugar, and they’d send their readings, via the internet, to data-processing centers, where bigger computers would crunch the numbers and output instructions for keeping everything in spec and in sync. Computing would be pervasive. Our lives would be automated.

One of the main sources of the hype was Xerox PARC, the fabled Silicon Valley research lab where Steve Jobs found the inspiration for the Macintosh. PARC’s engineers and information scientists published a series of papers portraying a future in which computers would be so deeply woven into “the fabric of everyday life” that they’d be “indistinguishable from it.” We would no longer even notice all the computations going on around us. We’d be so saturated with data, so catered to by software, that, instead of experiencing the anxiety of information overload, we’d feel “encalmed.” It sounded idyllic.

The excitement about ubiquitous computing proved premature. The technology of the 1990s was not up to making the world machine-readable, and after the dot-com crash, investors were in no mood to bankroll the installation of expensive microchips and sensors everywhere. But much has changed in the succeeding fifteen years. …

Read on.

Image: detail from John William Waterhouse’s “Consulting the Oracle.”



Tainted love: humans and machines


The Atlantic is featuring an excerpt from The Glass Cage. Taken from the second chapter, “The Robot at the Gate,” the piece looks at how our fraught relationship with labor-saving technology — a mix of utopian hope and existential fear — dates back to the beginning of the industrial revolution.

Here’s how the excerpt begins:

“We are brothers and sisters of our machines,” the technology historian George Dyson once remarked. Sibling relations are notoriously fraught, and so it is with our technological kin. We love our machines—not just because they’re useful to us, but because we find them companionable and even beautiful. In a well-built machine, we see some of our deepest aspirations take form: the desire to understand the world and its workings, the desire to turn nature’s power to our own purposes, the desire to add something new and of our own fashioning to the cosmos, the desire to be awed and amazed. An ingenious machine is a source of wonder and of pride.

But machines are ugly too, and we sense in them a threat to things we hold dear. Machines may be a conduit of human power, but that power has usually been wielded by the industrialists and financiers who own the contraptions, not the people paid to operate them. Machines are cold and mindless, and in their obedience to scripted routines we see an image of society’s darker possibilities. If machines bring something human to the alien cosmos, they also bring something alien to the human world. The mathematician and philosopher Bertrand Russell put it succinctly in a 1924 essay: “Machines are worshipped because they are beautiful and valued because they confer power; they are hated because they are hideous and loathed because they impose slavery.”

The tension reflected in Russell’s description of automated machines—they’d either destroy us or redeem us, liberate us or enslave us—has a long history. …

Read on.



The message a nudge sends


In “It’s All for Your Own Good,” an article in the new issue of the New York Review of Books, law professor Jeremy Waldron offers a particularly thoughtful examination of nudge-ism, via a review of two recent books by chief nudgenik Cass Sunstein. Here’s a brief bit from a section in which Waldron explores the tension between nudging and dignity:

Nudging doesn’t teach me not to use inappropriate heuristics or to abandon irrational intuitions or outdated rules of thumb. It does not try to educate my choosing, for maybe I am unteachable. Instead it builds on my foibles. It manipulates my sense of the situation so that some heuristic—for example, a lazy feeling that I don’t need to think about saving for retirement—which is in principle inappropriate for the choice that I face, will still, thanks to a nudge, yield the answer that rational reflection would yield. Instead of teaching me to think actively about retirement, it takes advantage of my inertia. Instead of teaching me not to automatically choose the first item on the menu, it moves the objectively desirable items up to first place.

I still use the same defective strategies but now things have been arranged to make that work out better. Nudging takes advantage of my deficiencies in the way one indulges a child. The people doing this (up in Government House) are not exactly using me as a mere means in violation of some Kantian imperative. They are supposed to be doing it for my own good. Still, my choosing is being made a mere means to my ends by somebody else—and I think this is what the concern about dignity is all about.

Image: Philip Bump.





Jamie Davies, “A Closed Loop”:

The concept of ‘the gene for feature x’ is giving way to a much more complicated story. Think something like: ‘the gene for protein a, that interacts with proteins b, c and d to allow a cell to undertake process p, that allows that cell to co-ordinate with other cells to make body feature x’. The very length of the above phrase, and the weakness of the blueprint metaphor, emphasises a conceptual distance that is opening up between the molecular-scale, mechanical function of genes and the interesting large-scale features of bodies. The genes matter – of course they do, because something has to build all these proteins. But the helix seems less and less appropriate as an icon for the all-important control systems that run life, especially at larger scales (cells, tissues, organisms, populations, ecosystems and so on).

There is, however, an alternative. It can be represented by an even simpler icon than the double helix. It really does seem to pervade life at all scales. This alternative is a concept, rather than a physical thing. And it can be glimpsed most clearly if we ask how things structure themselves when they must adapt to an environment that cannot be known in advance.

John R. Searle, “What Your Computer Can’t Know”:

Suppose we took seriously the project of creating an artificial brain that does what real human brains do. … How should we go about it? The absolutely first step is to get clear about the distinction between a simulation or model on the one hand, and a duplication of the causal mechanisms on the other. Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both the real and artificial hearts are physical pumps, unlike the computer model or simulation.

Now exactly the same distinctions apply to the brain. An artificial brain has to literally create consciousness, unlike the computer model of the brain, which only creates a simulation. So an actual artificial brain, like the artificial heart, would have to duplicate and not just simulate the real causal powers of the original. In the case of the heart, we found that you do not need muscle tissue to duplicate the causal powers. We do not know enough about the operation of the brain to know how much of the specific biochemistry is essential for duplicating the causal powers of the original. Perhaps we can make artificial brains using completely different physical substances as we did with the heart. The point, however, is that whatever the substance is, it has to duplicate and not just simulate, emulate, or model the real causal powers of the original organ.

Michael Sacasas, “Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation”:

Technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in into cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

Jeffrey Toobin, “The Solace of Oblivion”:

Mayer-Schönberger said that Google, whose market share for Internet searches in Europe is around ninety per cent, does not make sinister use of the information at its disposal. But in “Delete” he describes how, in the nineteen-thirties, the Dutch government maintained a comprehensive population registry, which included the name, address, and religion of every citizen. At the time, he writes, “the registry was hailed as facilitating government administration and improving welfare planning.” But when the Nazis invaded Holland they used the registry to track down Jews and Gypsies. “We may feel safe living in democratic republics, but so did the Dutch,” he said. “We do not know what the future holds in store for us, and whether future governments will honor the trust we put in them to protect information privacy rights.”

Ian Bogost, “Future Ennui”:

Unlike its competitor Google, with its eyeglass wearables and delivery drones and autonomous cars, Apple’s products are reasonable and expected — prosaic even, despite their refined design. Google’s future is truly science fictional, whereas Apple’s is mostly foreseeable. You can imagine wearing Apple Watch, in no small part because you remember thinking that you could imagine carrying Apple’s iPhone — and then you did, and now you always do.

Technology moves fast, but its speed now slows us down. A torpor has descended, the weariness of having lived this change before — or one similar enough, anyway — and all too recently. The future isn’t even here yet, and it’s already exhausted us in advance. It’s a far cry from “future shock,” Alvin Toffler’s 1970 term for the post-industrial sensation that too much change happens in too short a time. Where once the loss of familiar institutions and practices produced a shock, now it produces something more tepid and routine. The planned obsolescence that coaxes us to replace our iPhone 5 with an iPhone 6 is no longer disquieting, but just expected.

Image: still from Fritz Lang’s “Metropolis.”





The Spanish edition of The Glass Cage, titled Atrapados: Cómo las Máquinas se Apoderan de Nuestras Vidas, is being published by Taurus on Wednesday. Today’s issue of El País includes a special feature on the book, with a review by Mercedes Cebrián, a profile, an excerpt, and a rejoinder by business professor Enrique Dans. You can also read the opening pages of the Spanish translation here, courtesy of the publisher. The translation is by Pedro Cifuentes.

I’m keeping a list of all forthcoming editions of the book here.



The unbearable unlightness of AI


There is a continuing assumption — a faith, really — that at some future moment, perhaps only a decade or two away, perhaps even nearer than that, artificial intelligence will, by means yet unknown, achieve consciousness. A window will open on the computer’s black box, and light will stream in. The universe will take a new turn, as the inanimate becomes, for a second time, animate.

George Lakoff, the linguist who cowrote Metaphors We Live By, says it ain’t going to happen. In a fascinating article by Michael Chorost, Lakoff argues not only that language, being essentially metaphorical, is inextricably bound up in our bodily existence, but that cognition and consciousness, too, flow from our experience as creatures on the earth. Recent neuroscience experiments seem to back Lakoff up. They suggest that even our most abstract thoughts involve the mental simulation of physical experiences.

Writes Chorost:

In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

The evidence points to a new theory about the source of consciousness:

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there.

That, as Chorost notes, “raises problems for artificial intelligence”:

Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Then again, even the algorithmic thinking of computers has a physical substrate. There is no software without hardware. The problem is that computers, unlike animals, have no sensory experience of their own existence. They are, or at least appear to be, radically dualist in their operation, their software oblivious to their hardware. If a computer could think metaphorically, what kind of metaphors would it come up with? It’s hard to imagine they’d be anything recognizable to humans.

Image: “Camera Obscura Test 2” by Jon Lewis.

