Category Archives: The Glass Cage

Promoting human error


From a report on a prototype of a self-driving tractor-trailer developed by Daimler as part of its Mercedes-Benz Future Truck 2025 project:

For Daimler, the truck driver of the future looks something like this: He is seated in the cab of a semi, eyes on a tablet and hands resting in his lap …

The Daimler truck retains a steering wheel as a safety measure. This allows a driver to intervene for critical maneuvers …

The experience of guiding a self-driving truck is far less stressful than the vigilance required from a human to respond to traffic conditions. This means that drivers could have enough free time to speak with their families or employers, take care of paperwork or make travel plans …

“It’s strange at first,” said Hans Luft, who sat in the truck’s cab during the demonstration on Thursday. He waved his hands to show observers that he did not need them on the wheel, tapping at his tablet while taking advantage of the 45-degree swivel of his driver’s seat. “But you quickly learn to trust it and then it’s great.”

So you create an automated system that actively undermines the vigilance and situational awareness of the operator while at the same time relying on the operator to take control of the system for “critical maneuvers” in emergencies. This is a textbook case of automation design that borders on the criminally insane. And when an accident occurs — as it will — the crash will be blamed not on “stupid design” but on “human error.”

Image: Randy von Liski.


Filed under The Glass Cage

An android dreams of automation


Google’s Android guru, Sundar Pichai, provides a peek into the company’s conception of our automated future:

“Today, computing mainly automates things for you, but when we connect all these things, you can truly start assisting people in a more meaningful way,” Mr. Pichai said. He suggested a way for Android on people’s smartphones to interact with Android in their cars. “If I go and pick up my kids, it would be good for my car to be aware that my kids have entered the car and change the music to something that’s appropriate for them,” Mr. Pichai said.

What’s illuminating is not the triviality of Pichai’s scenario — that billions of dollars might be invested in developing a system that senses when your kids get in your car and then seamlessly cues up “Baby Beluga” — but what the urge to automate small, human interactions reveals about Pichai and his colleagues. With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be “freed up” to do something “more valuable.” Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.

I guess it’s no surprise that what Pichai expresses is a robot’s view of technology in general and automation in particular — mindless, witless, joyless; obsessed with productivity, oblivious to life’s everyday textures and pleasures. But it is telling. The question that matters is not what can be automated but what should be automated.

Image: “Communicating with the Beluga” by Bob.


Filed under The Glass Cage

From endless ladder to downward ramp


A couple of months ago, in the post “The Myth of the Endless Ladder,” I critiqued the widespread assumption that progress in production technology, such as advances in robotics and analytical software, inevitably “frees humans up to work on higher-value tasks,” in the words of economics reporter Annie Lowrey. While such a dynamic has often been true in the past, particularly in the middle years of the last century, there’s no guarantee that it will be true in the future. Evidence is growing, in fact, that a very different dynamic is now playing out, as computers take on more analytical and judgment-making tasks. In place of the endless ladder, we may now have what MIT economics professor and labor-market expert David Autor calls a “downward ramp.” The latest wave of automation technology appears to be “freeing us up” for less-interesting and less-challenging work.

In a New York Times column, Thomas Edsall points to new research, by economists Paul Beaudry, David Green, and Ben Sand, that suggests a widespread erosion in the skill levels of jobs since the year 2000. If in the 20 years leading up to the turn of the millennium we saw a “hollowing” of mid-skill jobs, with employment polarizing between low-skill and high-skill tasks, we now seem to be seeing a rapid loss of high-skill jobs as well. From top to bottom, the researchers report, workers are being pushed down the skill ramp:

After two decades of growth in the demand for occupations high in cognitive tasks, the US economy reversed and experienced a decline in the demand for such skills. The demand for cognitive tasks was to a large extent the motor of the US labor market prior to 2000. Once this motor reversed, the employment rate in the US economy started to contract. As we have emphasized, while this demand for cognitive tasks directly effects mainly high skilled workers, we have provided evidence that it has indirectly affected lower skill workers by pushing them out of jobs that have been taken up by higher skilled worker displaced from cognitive occupations. This has resulted in high growth in employment in low skilled manual jobs with declining wages in those occupations, and has pushed many low skill individuals out of the labor market.

Beaudry, Green, and Sand encapsulate the new deskilling trend in this remarkable chart, which documents the intellectual demands of the jobs taken by college graduates*:

[Chart: average cognitive task intensity of jobs held by college graduates, 1980–2010, rising until about 2000 and declining thereafter]

Edsall reports that two other recent studies, one by Andrew Sum et al. and one by Lawrence Mishel et al., also find evidence of the deskilling trend among even the well-educated.

Comments MIT’s Andrew McAfee, co-author of The Second Machine Age:

This is bad news for several reasons. One of the most important is that the downward ramp appears to be leading to a “skills cascade” in which highly skilled / educated workers take jobs lower down the skill / wage ladder (since there’s not much demand at high levels), which in turn pushes less skilled workers even lower down the ladder, and so on. [Harvard economist] Larry Katz has found that “lots of new college graduates are moving into the service sector, that is, into traditionally non-college jobs, displacing young non-college workers.” Where this all ends is anyone’s guess.

At least one thing seems clear: The time has come to challenge not only the assumption that technological advances necessarily push people to higher-skilled work but also the self-serving Silicon Valley ideology that has wrapped itself around that assumption.

*Authors’ explanation of chart: “We plot the average cognitive task intensity of college graduates over the 1980-2010 period. We measure cognitive intensity by assigning to each 4 digit occupation an average of their scores for cognitive tasks from the Dictionary of Occupation Titles (DOT). We define cognitive tasks as the non-routine analytic and interactive tasks used in Autor, Levy, and Murnane (2003) in their examination of the skill content of jobs. Movements in this cognitive task intensity index reflect movements in college educated workers across occupations. The figure indicates that average cognitive task intensity for college graduates increased from the early 1980s until about the year 2000 and then declined throughout the rest of the series.”

Image: “Guys and Bikes” by Astrid Westvang.


Filed under The Glass Cage

Let them eat images of cake


David Graeber observes:

It used to be that Americans mostly subscribed to a rough-and-ready version of the labor theory of value. Everything we see around us that we consider beautiful, useful, or important was made that way by people who sank their physical and mental efforts into creating and maintaining it. Work is valuable insofar as it creates these things that people like and need. Since the beginning of the 20th century, there has been an enormous effort on the part of the people running this country to turn that around: to convince everyone that value really comes from the minds and visions of entrepreneurs, and that ordinary working people are just mindless robots who bring those visions to reality.

Not only does it make perfect sense, therefore, to replace all those working stiffs, all those glorified ditch-diggers who traffic in the stuff of the world, with actual mindless robots, but in doing so you’re doing the workers a great, if as yet unappreciated, favor. You’re liberating them to become . . . visionaries! “Unemployment” is just a coarse term we use to describe the pre-visionary state. And so Andreessen: “All human time, labor, energy, ambition, and goals reorient to the intangibles: the big questions, the deep needs.” Intangibility is the last refuge of the materialist.

Image of starchild from 2001.


Filed under The Glass Cage

Marx Andreessen

In a series of rhapsodic tweets, venture capitalist Marc Andreessen imagines a world in which robots take over all productive labor:

All human time, labor, energy, ambition, and goals reorient to the intangibles: the big questions, the deep needs. Human nature expresses itself fully, for the first time in history. Without physical need constraints, we will be whoever we want to be. The main fields of human endeavor will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure. Rather than nothing to do, we would have everything to do: curiosity, artistic and scientific creativity, new forms of status seeking. Imagine six, or 10, billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be.

What a world, indeed. It would, in fact, be precisely the world that Karl Marx dreamed about, where “nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes.” Marx, too, believed that modern production technology would be instrumental in liberating people from the narrowness of traditional jobs, freeing human nature to express itself fully for the first time in history.

We know the process by which Marx saw his utopia of self-actualization come into being. I wonder how Andreessen would go about making his utopia operational. Would he begin by distributing his own wealth to the masses?


Filed under The Glass Cage

The cover of the cage

Here’s what The Glass Cage will look like when it drops on September 29.


Wear gloves.

Cover design by Pete Garceau.


Filed under The Glass Cage

Programming the moral robot


The U.S. Navy’s Office of Naval Research is funding an effort, by scientists at Tufts, Brown, and RPI, to develop military robots capable of moral reasoning:

The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:

  • Defining “human moral competence”
  • Boiling that competence down to a set of isolated “essential elements”
  • Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
  • Developing mathematical frameworks for explaining moral reasoning
  • Translating those frameworks into formal models of moral reasoning
  • “Verifying” the outputs of those models as truthful
  • Embedding moral reasoning into computer algorithms
  • Using those algorithms to control a robot operating autonomously in the world

Barring the negotiation of a worldwide ban, which seems unlikely for all sorts of reasons, military robots that make life-or-death decisions about human beings are coming (if they’re not already here). So efforts to program morality into robots are themselves now morally necessary. It’s highly unlikely, though, that the efforts will be successful — unless, that is, we choose to cheat on the definition of success.

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe.

We’re trying to reverse-engineer something that wasn’t engineered in the first place.


Filed under The Glass Cage