Category Archives: The Glass Cage

Let them eat images of cake

David Graeber observes:

It used to be that Americans mostly subscribed to a rough-and-ready version of the labor theory of value. Everything we see around us that we consider beautiful, useful, or important was made that way by people who sank their physical and mental efforts into creating and maintaining it. Work is valuable insofar as it creates these things that people like and need. Since the beginning of the 20th century, there has been an enormous effort on the part of the people running this country to turn that around: to convince everyone that value really comes from the minds and visions of entrepreneurs, and that ordinary working people are just mindless robots who bring those visions to reality.

Not only does it make perfect sense, therefore, to replace all those working stiffs, all those glorified ditch-diggers who traffic in the stuff of the world, with actual mindless robots, but in doing so you’re doing the workers a great, if as yet unappreciated, favor. You’re liberating them to become . . . visionaries! “Unemployment” is just a coarse term we use to describe the pre-visionary state. And so Andreessen: “All human time, labor, energy, ambition, and goals reorient to the intangibles: the big questions, the deep needs.” Intangibility is the last refuge of the materialist.

Image of starchild from 2001.

Marx Andreessen

In a series of rhapsodic tweets, venture capitalist Marc Andreessen imagines a world in which robots take over all productive labor:

All human time, labor, energy, ambition, and goals reorient to the intangibles: the big questions, the deep needs. Human nature expresses itself fully, for the first time in history. Without physical need constraints, we will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure. Rather than nothing to do, we would have everything to do: curiosity, artistic and scientific creativity, new forms of status seeking. Imagine six, or 10, billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be.

What a world, indeed. It would, in fact, be precisely the world that Karl Marx dreamed about, where “nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes.” Marx, too, believed that modern production technology would be instrumental in liberating people from the narrowness of traditional jobs, freeing human nature to express itself fully for the first time in history.

We know the process by which Marx expected his utopia of self-actualization to come into being. I wonder how Andreessen would go about making his utopia operational. Would he begin by distributing his own wealth to the masses?

Programming the moral robot

The U.S. Navy’s Office of Naval Research is funding an effort, by scientists at Tufts, Brown, and RPI, to develop military robots capable of moral reasoning:

The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:

  • Defining “human moral competence”
  • Boiling that competence down to a set of isolated “essential elements”
  • Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
  • Developing mathematical frameworks for explaining moral reasoning
  • Translating those frameworks into formal models of moral reasoning
  • “Verifying” the outputs of those models as truthful
  • Embedding moral reasoning into computer algorithms
  • Using those algorithms to control a robot operating autonomously in the world

Barring the negotiation of a worldwide ban, which seems unlikely for all sorts of reasons, military robots that make life-or-death decisions about human beings are coming (if they’re not already here). So efforts to program morality into robots are themselves now morally necessary. It’s highly unlikely, though, that the efforts will be successful — unless, that is, we choose to cheat on the definition of success.
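To make the worry about cheating concrete, here is a deliberately crude sketch, in Python, of how “moral competence” tends to get operationalized in software: a hand-coded table of forbidden actions and a harm threshold standing in for moral reasoning. It is a hypothetical illustration, not a description of the ONR-funded project’s methods; the action names and the harm estimate are invented for the example.

```python
# Hypothetical sketch: a hard-coded "moral gate" for a robot's planner.
# Everything morally difficult (choosing the rules, estimating harm)
# happens off-stage, before this code ever runs.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_civilian_harm: int  # estimate supplied upstream by a planner

# The designers' judgments, frozen into constants.
FORBIDDEN = {"target_noncombatant"}
HARM_THRESHOLD = 0  # any predicted civilian harm blocks the action

def is_permissible(action: Action) -> bool:
    """Return True if the action passes the fixed rule set."""
    if action.name in FORBIDDEN:
        return False
    return action.expected_civilian_harm <= HARM_THRESHOLD

if __name__ == "__main__":
    print(is_permissible(Action("deliver_supplies", 0)))     # True
    print(is_permissible(Action("strike_vehicle", 2)))       # False
    print(is_permissible(Action("target_noncombatant", 0)))  # False
```

A system like this will reliably “verify” its outputs against its own rules, which is exactly the kind of success one can declare by redefining the problem.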

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe.

We’re trying to reverse-engineer something that wasn’t engineered in the first place.

The poetics of progress

“I meet an American sailor,” writes Alexis de Tocqueville in his 1840 masterwork Democracy in America, “and I ask him why the vessels of his country are constituted so as not to last for long, and he answers me without hesitation that the art of navigation makes such rapid progress each day, that the most beautiful ship would soon become nearly useless if it lasted beyond a few years. In these chance words said by a coarse man and in regard to a particular fact, I see the general and systematic idea by which a great people conducts all things.”

Far more than a marketing ploy, planned obsolescence is an expression of a deep, romantic faith in technology. It’s a faith that Tocqueville saw as central to the American soul, argues Benjamin Storey in an illuminating essay in The New Atlantis:

For Tocqueville, technology is not a set of morally neutral means employed by human beings to control our natural environment. Technology is an existential disposition intrinsically connected to the social conditions of modern democratic peoples in general and Americans in particular. On this view, to be an American democrat is to be a technological romantic. Nothing is so radical or difficult to moderate as a romantic passion, and the Americans Tocqueville observed accepted only frail and minimal restraints on their technophilia. We have long since broken many of those restraints in our quest to live up to our poetic self-image. …

Democratic peoples, Tocqueville [writes], “imagine an extreme point where liberty and equality meet and merge,” and, in our less sober moments, we believe that technology can help us get there by so thoroughly vanquishing natural scarcity and the limits of human nature that we can eliminate unfreedom and inequality as such. We might be able to improve the human condition so far that what seemed in the past to be permanent facts of human life — ruling and being ruled, wealth and poverty, virtue and vice — can be left behind as we achieve the full realization of our democratic ideal of liberty and equality.

The glory of this view manifests itself in admirable technical skill and an outpouring of ingenious, if disposable, goods. But when embraced as a philosophy, a way of seeing the world, it turns destructive.

Not content with the obvious truth that our technical know-how has made us, on average, healthier and more prosperous than peoples of the past, we insist that it has also made us happier and better — indeed, that human happiness and virtue are technical problems, problems our rightly-celebrated practical know-how can settle, once and for all. Tocqueville saw how the terminology of commerce in the 1830s was coming to penetrate all aspects of American language, “the first instrument of thought.” As our technological utopian project advances, as our science enters further into the domain of the human heart and mind, we come to see our lives less in terms of joys, virtues, sins, and miseries and more in terms of chemical imbalances, hormones, good moods, and depressions — material problems susceptible to technological solutions, not moral challenges or existential conditions with which we must learn to live.

We are flawed not because we are flawed but because we were born into an insufficiently technologized world.

Image of Oculus Rift: Wikipedia.

Transparency through opacity

One of the topics of my forthcoming book, The Glass Cage, is the rise of “technology-centered automation” as the dominant design philosophy of computer engineers and programmers. The philosophy gives precedence to the capabilities of technology over the interests of people. One of its governing characteristics is opacity, the hiding of the workings of an application or system behind a “user-friendly” interface. In an interview with VVVNT, the New Zealand artist and engineer Julian Oliver, coauthor of the Critical Engineering Manifesto, discusses the importance of questioning opaque, or “black box,” design strategies:

We must thoroughly extend our knowledge of automated systems and communication infrastructure and peer inside the black box. Otherwise, we are at a technopolitical disadvantage, and that ignorance can be leveraged to great political effect.

If you were to tell people in the local post office that the postal service had a special room where the mail [that] people have been sending is opened up, [each] letter taken out and carefully copied, the sender and recipient of that letter written down and put into a cabinet, and then the letter put back into its envelope and sent on its way, you’d have a lot of old people burning cars in the street. But the same thing is happening with data retention. In fact, the term data retention itself is so internally opaque that most people can’t even begin working with it critically.

If I were to ask those same people in the post office how the postcard they just received arrived in [their] mailbox, they would be able to give me a relatively coherent description of that whole process. But as to how an email found its way to their inbox? They would be at a complete loss.
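Oliver’s contrast can be made concrete. The sketch below (a minimal example of mine, not something from the interview) reads the Received: headers of a saved raw email, assumed here to be a file called message.eml, and lists the servers that relayed the message: the hop-by-hop path that most inbox interfaces never surface.

```python
# Peering inside one small black box: reconstruct the relay path of an email
# from its Received: headers. Each mail server that handles a message prepends
# one of these headers, so reading them bottom-up gives the route it traveled.

from email import policy
from email.parser import BytesParser

def mail_path(raw_message_path: str) -> list[str]:
    """Return the relay hops recorded in a raw email, oldest first."""
    with open(raw_message_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    hops = msg.get_all("Received") or []
    # Headers are prepended by each relay, so reverse for chronological order.
    return [" ".join(h.split()) for h in reversed(hops)]

if __name__ == "__main__":
    for i, hop in enumerate(mail_path("message.eml"), 1):  # hypothetical file
        print(f"{i}. {hop}")
```

Nothing in those headers is hidden; they travel with every message. The opacity lies in interfaces that never invite anyone to look at them.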

Silicon Valley, broadly defined, has become a font of Orwellian doublespeak, though it remains naively unconscious of the fact. It promotes itself as a purveyor of transparency and openness, even as it seeks to wrap the world in opacity. (Its view of transparency is that of the x-ray technician.) And it uses a humanistic, if not utopian, rhetoric, while pursuing a design ethic that is fundamentally misanthropic. Oliver’s idea of “the critical engineer” seems like a good place to start in challenging the status quo.

h/t: Alexis Madrigal.

Image: detail of cover of Velvet Underground album White Light/White Heat.