
Siri, how should I live?


Longreads is today featuring an excerpt from The Glass Cage. It’s a piece taken from the second-to-last chapter, “Your Inner Drone,” which examines the ethical and political implications of the spread of automation from factory production to everyday life.

It begins:

Back in the 1990s, just as the dot-com bubble was beginning to inflate, there was much excited talk about “ubiquitous computing.” Soon, pundits assured us, microchips would be everywhere — embedded in factory machinery and warehouse shelving, affixed to the walls of offices and homes, installed in consumer goods and stitched into clothing, even swimming around in our bodies. Equipped with sensors and transceivers, the tiny computers would measure every variable imaginable, from metal fatigue to soil temperature to blood sugar, and they’d send their readings, via the internet, to data-processing centers, where bigger computers would crunch the numbers and output instructions for keeping everything in spec and in sync. Computing would be pervasive. Our lives would be automated.

One of the main sources of the hype was Xerox PARC, the fabled Silicon Valley research lab where Steve Jobs found the inspiration for the Macintosh. PARC’s engineers and information scientists published a series of papers portraying a future in which computers would be so deeply woven into “the fabric of everyday life” that they’d be “indistinguishable from it.” We would no longer even notice all the computations going on around us. We’d be so saturated with data, so catered to by software, that, instead of experiencing the anxiety of information overload, we’d feel “encalmed.” It sounded idyllic.

The excitement about ubiquitous computing proved premature. The technology of the 1990s was not up to making the world machine-readable, and after the dot-com crash, investors were in no mood to bankroll the installation of expensive microchips and sensors everywhere. But much has changed in the succeeding fifteen years. …

Read on.

Image: detail from John William Waterhouse’s “Consulting the Oracle.”


Filed under Uncategorized

Tainted love: humans and machines


The Atlantic is featuring an excerpt from The Glass Cage. Taken from the second chapter, “The Robot at the Gate,” the piece looks at how our fraught relationship with labor-saving technology — a mix of utopian hope and existential fear — dates back to the beginning of the industrial revolution.

Here’s how the excerpt begins:

“We are brothers and sisters of our machines,” the technology his­torian George Dyson once remarked. Sibling relations are notori­ously fraught, and so it is with our technological kin. We love our machines—not just because they’re useful to us, but because we find them companionable and even beautiful. In a well-built machine, we see some of our deepest aspirations take form: the desire to under­stand the world and its workings, the desire to turn nature’s power to our own purposes, the desire to add something new and of our own fashioning to the cosmos, the desire to be awed and amazed. An ingenious machine is a source of wonder and of pride.

But machines are ugly too, and we sense in them a threat to things we hold dear. Machines may be a conduit of human power, but that power has usually been wielded by the industrialists and financiers who own the contraptions, not the people paid to operate them. Machines are cold and mindless, and in their obedience to scripted routines we see an image of society’s darker possibilities. If machines bring something human to the alien cosmos, they also bring some­thing alien to the human world. The mathematician and philoso­pher Bertrand Russell put it succinctly in a 1924 essay: “Machines are worshipped because they are beautiful and valued because they confer power; they are hated because they are hideous and loathed because they impose slavery.”

The tension reflected in Russell’s description of automated machines—they’d either destroy us or redeem us, liber­ate us or enslave us—has a long history. …

Read on.



The message a nudge sends


In “It’s All for Your Own Good,” an article in the new issue of the New York Review of Books, law professor Jeremy Waldron offers a particularly thoughtful examination of nudge-ism, via a review of two recent books by chief nudgenik Cass Sunstein. Here’s a brief bit from a section in which Waldron explores the tension between nudging and dignity:

Nudging doesn’t teach me not to use inappropriate heuristics or to abandon irrational intuitions or outdated rules of thumb. It does not try to educate my choosing, for maybe I am unteachable. Instead it builds on my foibles. It manipulates my sense of the situation so that some heuristic—for example, a lazy feeling that I don’t need to think about saving for retirement—which is in principle inappropriate for the choice that I face, will still, thanks to a nudge, yield the answer that rational reflection would yield. Instead of teaching me to think actively about retirement, it takes advantage of my inertia. Instead of teaching me not to automatically choose the first item on the menu, it moves the objectively desirable items up to first place.

I still use the same defective strategies but now things have been arranged to make that work out better. Nudging takes advantage of my deficiencies in the way one indulges a child. The people doing this (up in Government House) are not exactly using me as a mere means in violation of some Kantian imperative. They are supposed to be doing it for my own good. Still, my choosing is being made a mere means to my ends by somebody else—and I think this is what the concern about dignity is all about.

Image: Philip Bump.





Jamie Davies, “A Closed Loop”:

The concept of ‘the gene for feature x’ is giving way to a much more complicated story. Think something like: ‘the gene for protein a, that interacts with proteins b, c and d to allow a cell to undertake process p, that allows that cell to co‑ordinate with other cells to make body feature x’. The very length of the above phrase, and the weakness of the blueprint metaphor, emphasises a conceptual distance that is opening up between the molecular-scale, mechanical function of genes and the interesting large-scale features of bodies. The genes matter – of course they do, because something has to build all these proteins. But the helix seems less and less appropriate as an icon for the all-important control systems that run life, especially at larger scales (cells, tissues, organisms, populations, ecosystems and so on).

There is, however, an alternative. It can be represented by an even simpler icon than the double helix. It really does seem to pervade life at all scales. This alternative is a concept, rather than a physical thing. And it can be glimpsed most clearly if we ask how things structure themselves when they must adapt to an environment that cannot be known in advance.

John R. Searle, “What Your Computer Can’t Know”:

Suppose we took seriously the project of creating an artificial brain that does what real human brains do. … How should we go about it? The absolutely first step is to get clear about the distinction between a simulation or model on the one hand, and a duplication of the causal mechanisms on the other. Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both the real and artificial hearts are physical pumps, unlike the computer model or simulation.

Now exactly the same distinctions apply to the brain. An artificial brain has to literally create consciousness, unlike the computer model of the brain, which only creates a simulation. So an actual artificial brain, like the artificial heart, would have to duplicate and not just simulate the real causal powers of the original. In the case of the heart, we found that you do not need muscle tissue to duplicate the causal powers. We do not know enough about the operation of the brain to know how much of the specific biochemistry is essential for duplicating the causal powers of the original. Perhaps we can make artificial brains using completely different physical substances as we did with the heart. The point, however, is that whatever the substance is, it has to duplicate and not just simulate, emulate, or model the real causal powers of the original organ.

Michael Sacasas, “Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation”:

Technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in into cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

Jeffrey Toobin, “The Solace of Oblivion”:

Mayer-Schönberger said that Google, whose market share for Internet searches in Europe is around ninety per cent, does not make sinister use of the information at its disposal. But in “Delete” he describes how, in the nineteen-thirties, the Dutch government maintained a comprehensive population registry, which included the name, address, and religion of every citizen. At the time, he writes, “the registry was hailed as facilitating government administration and improving welfare planning.” But when the Nazis invaded Holland they used the registry to track down Jews and Gypsies. “We may feel safe living in democratic republics, but so did the Dutch,” he said. “We do not know what the future holds in store for us, and whether future governments will honor the trust we put in them to protect information privacy rights.”

Ian Bogost, “Future Ennui”:

Unlike its competitor Google, with its eyeglass wearables and delivery drones and autonomous cars, Apple’s products are reasonable and expected — prosaic even, despite their refined design. Google’s future is truly science fictional, whereas Apple’s is mostly foreseeable. You can imagine wearing Apple Watch, in no small part because you remember thinking that you could imagine carrying Apple’s iPhone — and then you did, and now you always do.

Technology moves fast, but its speed now slows us down. A torpor has descended, the weariness of having lived this change before — or one similar enough, anyway — and all too recently. The future isn’t even here yet, and it’s already exhausted us in advance. It’s a far cry from “future shock,” Alvin Toffler’s 1970 term for the post-industrial sensation that too much change happens in too short a time. Where once the loss of familiar institutions and practices produced a shock, now it produces something more tepid and routine. The planned obsolescence that coaxes us to replace our iPhone 5 with an iPhone 6 is no longer disquieting, but just expected.

Image: still from Fritz Lang’s “Metropolis.”





The Spanish edition of The Glass Cage, titled Atrapados: Cómo las Máquinas se Apoderan de Nuestras Vidas, is being published by Taurus on Wednesday. Today’s issue of El País includes a special feature on the book, with a review by Mercedes Cebrián, a profile, an excerpt, and a rejoinder by business professor Enrique Dans. You can also read the opening pages of the Spanish translation here, courtesy of the publisher. The translation is by Pedro Cifuentes.

I’m keeping a list of all forthcoming editions of the book here.



The unbearable unlightness of AI


There is a continuing assumption — a faith, really — that at some future moment, perhaps only a decade or two away, perhaps even nearer than that, artificial intelligence will, by means yet unknown, achieve consciousness. A window will open on the computer’s black box, and light will stream in. The universe will take a new turn, as the inanimate becomes, for a second time, animate.

George Lakoff, the linguist who cowrote Metaphors We Live By, says it ain’t going to happen. In a fascinating article by Michael Chorost, Lakoff argues not only that language, being essentially metaphorical, is inextricably bound up in our bodily existence, but that cognition and consciousness, too, flow from our experience as creatures on the earth. Recent neuroscience experiments seem to back Lakoff up. They suggest that even our most abstract thoughts involve the mental simulation of physical experiences.

Writes Chorost:

In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. “The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins,” the researchers concluded.

Textural metaphors, too, appear to be simulated. That is, the brain processes “She’s had a rough time” by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, “For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found.”

The evidence points to a new theory about the source of consciousness:

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there.

That, as Chorost notes, “raises problems for artificial intelligence”:

Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Then again, even the algorithmic thinking of computers has a physical substrate. There is no software without hardware. The problem is that computers, unlike animals, have no sensory experience of their own existence. They are, or at least appear to be, radically dualist in their operation, their software oblivious to their hardware. If a computer could think metaphorically, what kind of metaphors would it come up with? It’s hard to imagine they’d be anything recognizable to humans.

Image: “Camera Obscura Test 2” by Jon Lewis.



The manipulators


In “The Manipulators,” a new essay in the Los Angeles Review of Books, I explore two much-discussed documents published earlier this year: “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks” by Adam Kramer et al. and “Judgment in Case C-131/12: Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González” by the Court of Justice of the European Union. The latter, I argue, helps us make sense of the former. Both challenge us to think afresh about the past and the future of the net.

Here’s how the piece begins:

Since the launch of Netscape and Yahoo twenty years ago, the development of the internet has been a story of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram; in 2011, Snapchat; in 2012, Coursera; in 2013, Google Glass. It has been a carnival ride, and we, the public, have been the giddy passengers.

This year something changed. The big news about the net came not in the form of buzzy startups or cool gadgets but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

Read on.

Image: “Marionettes” by Mario De Carli.

