Siri, how should I live?

Longreads is today featuring an excerpt from The Glass Cage. It’s a piece taken from the second-to-last chapter, “Your Inner Drone,” which examines the ethical and political implications of the spread of automation from factory production to everyday life.

It begins:

Back in the 1990s, just as the dot-com bubble was beginning to inflate, there was much excited talk about “ubiquitous computing.” Soon, pundits assured us, microchips would be everywhere — embedded in factory machinery and warehouse shelving, affixed to the walls of offices and homes, installed in consumer goods and stitched into clothing, even swimming around in our bodies. Equipped with sensors and transceivers, the tiny computers would measure every variable imaginable, from metal fatigue to soil temperature to blood sugar, and they’d send their readings, via the internet, to data-processing centers, where bigger computers would crunch the numbers and output instructions for keeping everything in spec and in sync. Computing would be pervasive. Our lives would be automated.

One of the main sources of the hype was Xerox PARC, the fabled Silicon Valley research lab where Steve Jobs found the inspiration for the Macintosh. PARC’s engineers and information scientists published a series of papers portraying a future in which computers would be so deeply woven into “the fabric of everyday life” that they’d be “indistinguishable from it.” We would no longer even notice all the computations going on around us. We’d be so saturated with data, so catered to by software, that, instead of experiencing the anxiety of information overload, we’d feel “encalmed.” It sounded idyllic.

The excitement about ubiquitous computing proved premature. The technology of the 1990s was not up to making the world machine-readable, and after the dot-com crash, investors were in no mood to bankroll the installation of expensive microchips and sensors everywhere. But much has changed in the succeeding fifteen years. …

Read on.

Image: detail from John William Waterhouse’s “Consulting the Oracle” (1884).

Tainted love: humans and machines

Image: Charlie Chaplin.

The Atlantic is featuring an excerpt from The Glass Cage. Taken from the second chapter, “The Robot at the Gate,” the piece looks at how our fraught relationship with labor-saving technology — a mix of utopian hope and existential fear — dates back to the beginning of the industrial revolution.

Here’s how the excerpt begins:

“We are brothers and sisters of our machines,” the technology historian George Dyson once remarked. Sibling relations are notoriously fraught, and so it is with our technological kin. We love our machines—not just because they’re useful to us, but because we find them companionable and even beautiful. In a well-built machine, we see some of our deepest aspirations take form: the desire to understand the world and its workings, the desire to turn nature’s power to our own purposes, the desire to add something new and of our own fashioning to the cosmos, the desire to be awed and amazed. An ingenious machine is a source of wonder and of pride.

But machines are ugly too, and we sense in them a threat to things we hold dear. Machines may be a conduit of human power, but that power has usually been wielded by the industrialists and financiers who own the contraptions, not the people paid to operate them. Machines are cold and mindless, and in their obedience to scripted routines we see an image of society’s darker possibilities. If machines bring something human to the alien cosmos, they also bring something alien to the human world. The mathematician and philosopher Bertrand Russell put it succinctly in a 1924 essay: “Machines are worshipped because they are beautiful and valued because they confer power; they are hated because they are hideous and loathed because they impose slavery.”

The tension reflected in Russell’s description of automated machines—they’d either destroy us or redeem us, liberate us or enslave us—has a long history. …

Read on.

The message a nudge sends

In “It’s All for Your Own Good,” an article in the new issue of the New York Review of Books, law professor Jeremy Waldron offers a particularly thoughtful examination of nudge-ism, via a review of two recent books by chief nudgenik Cass Sunstein. Here’s a brief bit from a section in which Waldron explores the tension between nudging and dignity:

Nudging doesn’t teach me not to use inappropriate heuristics or to abandon irrational intuitions or outdated rules of thumb. It does not try to educate my choosing, for maybe I am unteachable. Instead it builds on my foibles. It manipulates my sense of the situation so that some heuristic—for example, a lazy feeling that I don’t need to think about saving for retirement—which is in principle inappropriate for the choice that I face, will still, thanks to a nudge, yield the answer that rational reflection would yield. Instead of teaching me to think actively about retirement, it takes advantage of my inertia. Instead of teaching me not to automatically choose the first item on the menu, it moves the objectively desirable items up to first place.

I still use the same defective strategies but now things have been arranged to make that work out better. Nudging takes advantage of my deficiencies in the way one indulges a child. The people doing this (up in Government House) are not exactly using me as a mere means in violation of some Kantian imperative. They are supposed to be doing it for my own good. Still, my choosing is being made a mere means to my ends by somebody else—and I think this is what the concern about dignity is all about.

Image: Philip Bump.

Alightings

Jamie Davies, “A Closed Loop”:

The concept of ‘the gene for feature x’ is giving way to a much more complicated story. Think something like: ‘the gene for protein a, that interacts with proteins b, c and d to allow a cell to undertake process p, that allows that cell to co‑ordinate with other cells to make body feature x’. The very length of the above phrase, and the weakness of the blueprint metaphor, emphasises a conceptual distance that is opening up between the molecular-scale, mechanical function of genes and the interesting large-scale features of bodies. The genes matter – of course they do, because something has to build all these proteins. But the helix seems less and less appropriate as an icon for the all-important control systems that run life, especially at larger scales (cells, tissues, organisms, populations, ecosystems and so on).

There is, however, an alternative. It can be represented by an even simpler icon than the double helix. It really does seem to pervade life at all scales. This alternative is a concept, rather than a physical thing. And it can be glimpsed most clearly if we ask how things structure themselves when they must adapt to an environment that cannot be known in advance.

John R. Searle, “What Your Computer Can’t Know”:

Suppose we took seriously the project of creating an artificial brain that does what real human brains do. … How should we go about it? The absolutely first step is to get clear about the distinction between a simulation or model on the one hand, and a duplication of the causal mechanisms on the other. Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both the real and artificial hearts are physical pumps, unlike the computer model or simulation.

Now exactly the same distinctions apply to the brain. An artificial brain has to literally create consciousness, unlike the computer model of the brain, which only creates a simulation. So an actual artificial brain, like the artificial heart, would have to duplicate and not just simulate the real causal powers of the original. In the case of the heart, we found that you do not need muscle tissue to duplicate the causal powers. We do not know enough about the operation of the brain to know how much of the specific biochemistry is essential for duplicating the causal powers of the original. Perhaps we can make artificial brains using completely different physical substances as we did with the heart. The point, however, is that whatever the substance is, it has to duplicate and not just simulate, emulate, or model the real causal powers of the original organ.

Michael Sacasas, “Cathedrals, Pyramids, or iPhones: Toward a Very Tentative Theory of Technological Innovation”:

Technological innovation on a grand scale is an act of sublimation, and we are too self-knowing to sublimate. Let me lead into this discussion by acknowledging that this point may be too subtle to be true, so I offer it circumspectly. According to certain schools of psychology, sublimation describes the process by which we channel or redirect certain desires, often destructive or transgressive desires, into productive action. On this view, the great works of civilization are powered by sublimation. But, to borrow a line cited by the late Philip Rieff, “if you tell people how they can sublimate, they can’t sublimate.” In other words, sublimation is a tacit process. It is the by-product of a strong buy-in into cultural norms and ideals by which individual desire is subsumed into some larger purpose. It is the sort of dynamic, in other words, that conscious awareness hampers and that ironic detachment, our default posture toward reality, destroys. Make of that theory what you will.

Jeffrey Toobin, “The Solace of Oblivion”:

Mayer-Schönberger said that Google, whose market share for Internet searches in Europe is around ninety per cent, does not make sinister use of the information at its disposal. But in “Delete” he describes how, in the nineteen-thirties, the Dutch government maintained a comprehensive population registry, which included the name, address, and religion of every citizen. At the time, he writes, “the registry was hailed as facilitating government administration and improving welfare planning.” But when the Nazis invaded Holland they used the registry to track down Jews and Gypsies. “We may feel safe living in democratic republics, but so did the Dutch,” he said. “We do not know what the future holds in store for us, and whether future governments will honor the trust we put in them to protect information privacy rights.”

Ian Bogost, “Future Ennui”:

Unlike its competitor Google, with its eyeglass wearables and delivery drones and autonomous cars, Apple’s products are reasonable and expected — prosaic even, despite their refined design. Google’s future is truly science fictional, whereas Apple’s is mostly foreseeable. You can imagine wearing Apple Watch, in no small part because you remember thinking that you could imagine carrying Apple’s iPhone — and then you did, and now you always do.

Technology moves fast, but its speed now slows us down. A torpor has descended, the weariness of having lived this change before — or one similar enough, anyway — and all too recently. The future isn’t even here yet, and it’s already exhausted us in advance. It’s a far cry from “future shock,” Alvin Toffler’s 1970 term for the post-industrial sensation that too much change happens in too short a time. Where once the loss of familiar institutions and practices produced a shock, now it produces something more tepid and routine. The planned obsolescence that coaxes us to replace our iPhone 5 with an iPhone 6 is no longer disquieting, but just expected.

Image: still from Fritz Lang’s “Metropolis.”
