
Tesla and the glass cockpit problem


When news spread last week about the fatal crash of a computer-driven Tesla, I thought of a conversation I had a couple of years ago with a top computer scientist at Google. We were talking about some recent airliner crashes caused by “automation complacency” — the tendency for even very skilled pilots to tune out from their work after turning on autopilot systems — and the Google scientist noted that the problem of automation complacency is even more acute for drivers than for pilots. If you’re flying a plane and something unexpected happens, you usually have several seconds or even minutes to respond before the situation becomes dire. If you’re driving a car, you may have only a second or a fraction of a second to take action before you collide with another car, or a bridge abutment, or a tree. There are far more obstacles on the ground than in the sky.

With the Tesla accident, the evidence suggests that the crash happened before the driver even realized that he was about to hit a truck. He seemed to be suffering from automation complacency up to the very moment of impact. He trusted the machine, and the machine failed him. Such complacency is a well-documented problem in human-factors research, and it’s what led Google to change the course of its self-driving car program a couple of years ago, shifting to a perhaps quixotic goal of total automation without any human involvement. In rushing to give drivers the ability to switch on an “Autopilot” mode, Tesla ignored or dismissed the research, with a predictable result. As computer and car companies push the envelope of automotive automation, driver complacency and skill loss promise to become ever greater challenges — ones that (as Google appears to have concluded) may not be solvable given the fallibility of software, the psychology of human beings, and the realities of driving.*

Following is a brief excerpt from The Glass Cage, my book about the human consequences of automation, describing how, as aviation became more automated over the years, pilots flying in so-called glass cockpits grew more susceptible to automation complacency and “skill fade” — to the point that the FAA is now urging pilots to practice manual flying more often.

Premature death was a routine occupational hazard for even the most expert pilots during aviation’s early years. Lawrence Sperry died in 1923 when his plane crashed into the English Channel. Wiley Post died in 1935 when his plane went down in Alaska. Antoine de Saint-Exupéry died in 1944 when his plane disappeared over the Mediterranean. Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades.

But this sunny story carries a dark footnote. The overall decline in the number of plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into a dangerous situation, they too often make mistakes. The consequences, as the 2009 Continental Connection and Air France disasters show, can be catastrophic. Over the last thirty years, dozens of psychologists, engineers, and human-factors researchers have studied what’s gained and lost when pilots share the work of flying with software. They’ve learned that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human-factors expert at Britain’s University of Bristol, calls “a deskilling of the crew.”

Concerns about the unintended side effects of flight automation aren’t new. They date back at least to the early days of glass cockpits and fly-by-wire controls. A 1989 report from NASA’s Ames Research Center noted that as computers had begun to multiply on airplanes during the preceding decade, industry and governmental researchers “developed a growing discomfort that the cockpit may be becoming too automated, and that the steady replacement of human functioning by devices could be a mixed blessing.” Despite a general enthusiasm for computerized flight, many in the airline industry worried that “pilots were becoming over-dependent on automation, that manual flying skills may be deteriorating, and that situational awareness might be suffering.”

Studies conducted since then have linked many accidents and near misses to breakdowns of automated systems or to automation complacency or other “automation-induced errors” on the part of flight crews. In 2010, the FAA released preliminary results of a major study of airline flights over the preceding ten years which showed that pilot errors had been involved in nearly two-thirds of all crashes. The research further indicated, according to FAA scientist Kathy Abbott, that automation has made such errors more likely. Pilots can be distracted by their interactions with onboard computers, Abbott said, and they can “abdicate too much responsibility to the automated systems.” An extensive 2013 government report on cockpit automation, compiled by an expert panel and drawing on the same FAA data, implicated automation-related problems, such as a complacency-induced loss of situational awareness and weakened hand-flying skills, in more than half of recent accidents.

The anecdotal evidence collected through accident reports and surveys gained empirical backing from a rigorous study conducted by Matthew Ebbatson, a young human-factors researcher at Cranfield University, a top U.K. engineering school. Frustrated by the lack of hard, objective data on what he termed “the loss of manual flying skills in pilots of highly automated airliners,” Ebbatson set out to fill the gap. He recruited sixty-six veteran pilots from a British airline and had each of them get into a flight simulator and perform a challenging maneuver—bringing a Boeing 737 with a blown engine in for a landing during bad weather. The simulator disabled the plane’s automated systems, forcing the pilot to fly by hand. Some of the pilots did exceptionally well in the test, Ebbatson reported, but many performed poorly, barely exceeding “the limits of acceptability.”

Ebbatson then compared detailed measures of each pilot’s performance in the simulator—the pressure exerted on the yoke, the stability of airspeed, the degree of variation in course—with the pilot’s historical flight record. He found a direct correlation between a pilot’s aptitude at the controls and the amount of time that pilot had spent flying without the aid of automation. The correlation was particularly strong with the amount of manual flying done during the preceding two months. The analysis indicated that “manual flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice.” Particularly “vulnerable to decay,” Ebbatson noted, was a pilot’s ability to maintain “airspeed control”—a skill crucial to recognizing, avoiding, and recovering from stalls and other dangerous situations.

It’s no mystery why automation degrades pilot performance. Like many challenging jobs, flying a plane involves a combination of psychomotor skills and cognitive skills—thoughtful action and active thinking. A pilot needs to manipulate tools and instruments with precision while swiftly and accurately making calculations, forecasts, and assessments in his head. And while he goes through these intricate mental and physical maneuvers, he needs to remain vigilant, alert to what’s going on around him and able to distinguish important signals from unimportant ones. He can’t allow himself either to lose focus or to fall victim to tunnel vision. Mastery of such a multifaceted set of skills comes only with rigorous practice. A beginning pilot tends to be clumsy at the controls, pushing and pulling the yoke with more force than necessary. He often has to pause to remember what he should do next, to walk himself methodically through the steps of a process. He has trouble shifting seamlessly between manual and cognitive tasks. When a stressful situation arises, he can easily become overwhelmed or distracted and end up overlooking a critical change in circumstances.

In time, after much rehearsal, the novice gains confidence. He becomes less halting in his work and more precise in his actions. There’s little wasted effort. As his experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his surroundings. The models enable him to interpret and react to stimuli intuitively, without getting bogged down in conscious analysis. Eventually, thought and action become seamless. Flying becomes second nature. Years before researchers began to plumb the workings of pilots’ brains, Wiley Post described the experience of expert flight in plain, precise terms. He flew, he said in 1935, “without mental effort, letting my actions be wholly controlled by my subconscious mind.” He wasn’t born with that ability. He developed it through hard work.

When computers enter the picture, the nature and the rigor of the work change, as does the learning the work engenders. As software assumes moment-by-moment control of the craft, the pilot is relieved of much manual labor. This reallocation of responsibility can provide an important benefit. It can reduce the pilot’s workload and allow him to concentrate on the cognitive aspects of flight. But there’s a cost. Psychomotor skills get rusty, which can hamper the pilot on those rare but critical occasions when he’s required to take back the controls. There’s growing evidence that recent expansions in the scope of automation also put cognitive skills at risk. When more advanced computers begin to take over planning and analysis functions, such as setting and adjusting a flight plan, the pilot becomes less engaged not only physically but also mentally. Because the precision and speed of pattern recognition appear to depend on regular practice, the pilot’s mind may become less agile in interpreting and reacting to fast-changing situations. He may suffer what Ebbatson calls “skill fade” in his mental as well as his motor abilities.

Pilots are not blind to automation’s toll. They’ve always been wary about ceding responsibility to machinery. Airmen in World War I, justifiably proud of their skill in maneuvering their planes during dogfights, wanted nothing to do with the newfangled Sperry autopilots. In 1959, the original Mercury astronauts rebelled against NASA’s plan to remove manual flight controls from spacecraft. But aviators’ concerns are more acute now. Even as they praise the enormous gains in flight technology, and acknowledge the safety and efficiency benefits, they worry about the erosion of their talents. As part of his research, Ebbatson surveyed commercial pilots, asking them whether “they felt their manual flying ability had been influenced by the experience of operating a highly automated aircraft.” More than three-fourths reported that “their skills had deteriorated”; just a few felt their skills had improved. A 2012 pilot survey conducted by the European Aviation Safety Agency found similarly widespread concerns, with 95 percent of pilots saying that automation tended to erode “basic manual and cognitive flying skills.”

Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview with the Associated Press, he put the problem in stark terms: “We’re forgetting how to fly.”

What the aviation industry has discovered is that there’s a tradeoff between computer automation and human skill and attentiveness. Getting the balance right is exceedingly tricky. Just because some degree of automation is good, that doesn’t mean that more automation is necessarily better. We seem fated to learn this hard lesson once again with the even trickier process of automotive automation.

*UPDATE (7/7): The Times reports: “Experiments conducted last year by Virginia Tech researchers and supported by the national safety administration found that it took drivers of [self-driving] cars an average of 17 seconds to respond to takeover requests. In that period, a vehicle going 65 m.p.h. would have traveled 1,621 feet — more than five football fields.”
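A quick check of that arithmetic bears the figure out: at 65 m.p.h. a car covers 65 × 5,280 ÷ 3,600 ≈ 95.3 feet per second, so a 17-second response lag corresponds to roughly 95.3 × 17 ≈ 1,621 feet of travel.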

After math

Will Davies cuts through the prevailing emotionalism in dissecting the Brexit vote:

The Remain campaign continued to rely on forecasts, warnings and predictions, in the hope that eventually people would be dissuaded from ‘risking it’. But to those that have given up on the future already, this is all just more political rhetoric. In any case, the entire practice of modelling the future in terms of ‘risk’ has lost credibility, as evidenced by the now terminal decline of opinion polling as a tool for political control. …

In place of facts, we now live in a world of data. Instead of trusted measures and methodologies being used to produce numbers, a dizzying array of numbers is produced by default, to be mined, visualised, analysed and interpreted however we wish. If risk modelling (using notions of statistical normality) was the defining research technique of the 19th and 20th centuries, sentiment analysis is the defining one of the emerging digital era. We no longer have stable, ‘factual’ representations of the world, but unprecedented new capacities to sense and monitor what is bubbling up where, who’s feeling what, what’s the general vibe. …

As the 23rd June turned into 24th June, it became manifestly clear that prediction markets are little more than an aggregative representation of the same feelings and moods that one might otherwise detect via twitter. They’re not in the business of truth-telling, but of mood-tracking.

The global village of violence


We assume that communication and harmony go hand in hand, like a pair of flower children on a garden path. If only we all could share our thoughts and feelings with everyone else all the time, we’d overcome our distrust and fear and live together peaceably. We’d see that we are all one. Facebook and other social media disabuse us of this notion. To be “all one” is to be dissolved — and for many people that is a threat that requires a reaction.

Eamonn Fitzgerald points to a recently uploaded video of a Canadian TV interview with Marshall McLuhan that aired in 1977. By the mid-seventies, a decade after his allotted minutes of fame, McLuhan had come to be dismissed as a mumbo-jumbo-spewing charlatan by the intelligentsia. What the intelligentsia found particularly irritating was that the mumbo jumbo McLuhan spewed fit no piety and often hit uncomfortably close to the mark.

Early on in the clip, the interviewer notes that McLuhan had long ago predicted that electronic communication systems would turn the world into a global village. Most of McLuhan’s early readers had taken this as a utopian prophecy. “But it seems,” the interviewer says, with surprise, “that this tribal world is not very friendly.” McLuhan responds:

The closer you get together, the more you like each other? There is no evidence of that in any situation that we have ever heard of. When people get close together, they get more and more savage and impatient with each other. [Man’s] tolerance is tested in those narrow circumstances very much. Village people are not that much in love with each other. The global village is a place of very arduous interfaces and very abrasive situations.

Instantaneous, universal communication is at least as likely to breed nationalism, xenophobia, and cultism as it is to breed harmony and fellow-feeling, McLuhan argues. As media dissolve individual identity, people rush to join “little groups” as a way to reestablish a sense of themselves, and they’ll go to extremes to defend their group identity, sometimes twisting the medium to their ends:

Ordinary people find the need for violence as they lose their identities. It is only the threat to people’s identity that makes them violent. Terrorists, hijackers — these are people minus identity. They are determined to make it somehow, to get coverage, to get noticed.

That’s simplistic — to a man with a media theory, everything looks like a media effect — but it’s not wrong.

People in all times have been this way. In our time, when things happen very quickly, there’s very little time to adjust to new situations at the speed of light. There is little time to get accustomed to anything.

With perfect communication comes perfect surveillance, McLuhan goes on to say, and that, too, tends to dissolve private identity:

We now have the means to keep everybody under surveillance. No matter what part of the world they are in, we can put them under surveillance. This has become one of the main occupations of mankind, just watching other people and keeping a record of their goings on. … Everybody has become porous. The light and the message go right through us.

At this moment, we are on the air. We do not have any physical body. When you’re on the telephone or on radio or on T.V., you don’t have a physical body — you’re just an image on the air. When you don’t have a physical body, you’re a discarnate being. You have a very different relation to the world around you. I think this has been one of the big effects of the electric age. It has deprived people really of their identity.

Anticipating Simon Reynolds’s Retromania, McLuhan also ties the dissolution of personal identity to culture’s turn toward nostalgia:

By the way, one of the big parts of the loss of identity is nostalgia. So there are revivals in every phase of life today. Revivals of clothing, of dances, of music, of shows, of everything. We live by the revival. It tells us who we are or were.

Everyone needs to be someone, for better or worse.

The explainable


Signature has an interview with Denis Boyles about his new book on the eleventh edition of the Encyclopedia Britannica, Everything Explained That Is Explainable. The title of Boyles’s book was one of the marketing slogans used to sell the encyclopedia when it went on sale in 1910 and 1911. In the interview, Boyles talks about how the monumental reference work was very much a reflection of its time:

Signature: It was a period of great change. The 11th was essentially published in the heart of the Progressive Era. How did that impact its success?

Boyles: That really is the subject of the 11th. When considered as a book, it’s something like forty million words, but the topic is a singular one: progress. It tells you all the different ways progress can be seen. And it was secular, overwhelmingly so. The 11th was all about what could be measured, what could be known. And that made progress essentially the turf of technicians and scientists and technical and scientific advances became confused with progress. The latest idea was the best idea. Now, we’re far enough into this to realize that the “latest idea” is just another idea. It may be good, it may not. We’re more weary [sic] as a society, but largely still as secular.

I sense that that last bit, about how we’ve moved beyond the assumption that the latest idea is the best idea, may be wishful thinking on Boyles’s part, or at least a reflection of the fact that he lives in Europe. Here in the U.S. we seem more than ever convinced that progress is “essentially the turf of technicians and scientists.” We see the newness of an idea as the idea’s validation, novelty being contemporary American culture’s central criterion of value.

Boyles points out that the Britannica’s eleventh edition underpins Wikipedia, and in Wikipedia we see, more clearly than ever, the elevation of and emphasis on measurement as the standard of knowledge and knowability. Wikipedia is pretty good, and ambitiously thorough, on technical and scientific topics, but it’s scattershot, and often just flat-out bad, in its coverage of topics in the humanities. Wikipedia’s editors, as Edward Mendelson has recently suggested, are comfortable in documenting consensus but completely uncomfortable in exercising taste. The kind of informed subjective judgment that is essential to any perceptive discussion of art, literature, or even history is explicitly outlawed at Wikipedia. And Wikipedia, like the eleventh edition of the Britannica, is a reflection of its time. The boundary we draw around “the explainable” is tighter than ever.

“Technical and scientific advances became confused with progress,” says Boyles, and so it is today, a century later.

Technological unemployment, then and now


In “Promise and Peril of Automation,” an article in the New York Times, David Morse writes:

The key area of social change stimulated by automation is employment. Everywhere one finds two things: a positive emphasis on opportunity and a keen sensitivity to change, often translated more concretely into fear.

The emphasis on opportunity is welcome and indicative of the climate in which automation will come to maturity. The sensitivity to change is equally significant. If fears about the future, especially job worries, are dismissed as “unreal” or “unimportant,” human resistance to change will be a major impediment to deriving full social benefit from automation.

What is the basis for these fears? Partly, a natural human uneasiness in the face of the unknown. Partly, the fact that few things are more serious to a worker than unemployment. Partly, too, the fear that automation undercuts the whole employment structure on which society as we know it is based. If, for example, automation cuts direct labor, often by 50 percent or more, and if this goes on from one industry to another, what happens? Even with shorter hours and new opportunities, will not a saturation point be reached, with old jobs disappearing faster than new ones are created, and unemployment on a wide scale raising its ugly head and creeping from one undertaking and industry to another?

Morse’s article was published on June 9, 1957.

Today, nearly sixty years later, the Times is running a new article on the specter of technological unemployment, by Eduardo Porter. He writes:

[Lawrence Summers] reminisced about his undergraduate days at M.I.T. in the 1970s, when the debate over the idea of technological unemployment pitted “smart people,” exemplified by the great economist Robert Solow, and “stupid people,” “exemplified by a bunch of sociologists.”

It was stupid to think technological progress would reduce employment. If technology increased productivity — allowing companies and their workers to make more stuff in less time — people would have more money to spend on more things that would have to be made, creating jobs for other people.

But at some point Mr. Summers experienced an epiphany. “It sort of occurred to me,” he said. “Suppose the stupid people were right. What would it look like?” And what it looked like fits pretty well with what the world looks like today.

The fears about automation’s job-killing potential that erupted in the 1950s didn’t pan out. That’s one reason why smart economists — no, it’s not an oxymoron — became so convinced that technological unemployment, as a broad rather than a local phenomenon, was mythical. But yesterday’s automation is not today’s automation. What if a new wave of computer-generated automation, rather than putting more money into the hands of masses of consumers, ended up concentrating that wealth, in the form of greater profits, into the hands of a rather small group of plutocrats who owned and controlled the means of automation? And what if automation’s reach extended so far into the human skill set that the range of jobs immune to automation was no longer sufficient to absorb displaced workers? There may not be a “lump of labor,” but we may discover that there is a “lump of skills.”

Henry Ford increased the hourly wage of workers beyond what was economically necessary because he knew that the workers would use the money to buy Ford cars. He saw that he had an interest in broadening prosperity. It seems telling that it has now become popular among the Silicon Valley elite to argue that the government should step in and start paying people a universal basic income. With a universal basic income, even the unemployed would still be able to afford their smartphone data plans.

Image: detail of “Friend or Foe?” by Leslie Illingworth.

The width of now


“Human character changed on or about December 2010,” writes Edward Mendelson in “In the Depths of the Digital Age,” when “everyone, it seemed, started carrying a smartphone.”

For the first time, practically anyone could be found and intruded upon, not only at some fixed address at home or at work, but everywhere and at all times. Before this, everyone could expect, in the ordinary course of the day, some time at least in which to be left alone, unobserved, unsustained and unburdened by public or familial roles. That era now came to an end.

The self exploded as the social world imploded. The fuse had been burning for a long time, of course.

Mendelson continues:

In Thomas Pynchon’s Gravity’s Rainbow (1973), an engineer named Kurt Mondaugen enunciates a law of human existence: “Personal density … is directly proportional to temporal bandwidth.” The narrator explains: “’Temporal bandwidth’ is the width of your present, your now. … The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.”

The genius of Mondaugen’s Law is its understanding that the unmeasurable moral aspects of life are as subject to necessity as are the measurable physical ones . . . You cannot reduce your engagement with the past and future without diminishing yourself, without becoming “more tenuous.”

The term “personal density” brings me back, yet again, to an observation the playwright Richard Foreman made, just before the arrival of the smartphone: “I see within us all (myself included) the replacement of complex inner density with a new kind of self — evolving under the pressure of information overload and the technology of the ‘instantly available.'”

The intensification of communication, and the attendant flow of information, aids in the development of personal density, of inner density, but only up to a point. Then the effect reverses. One is so overwhelmed by the necessity of communication — a necessity that may well be felt as a form of pleasure — that there is no longer any time for the synthesis or consolidation needed to build density. Little adheres, less coheres. Personal density at this point becomes inversely proportional to informational density. The only way to deal with the expansion of informational bandwidth is to constrict one’s temporal bandwidth — to narrow the “Now.” We are not unbounded; tradeoffs must be made.

Image: Andrew Gustar.

Just Google it

From Michael S. Evans’s review of Pedro Domingos’s The Master Algorithm:

For those in power, machine learning offers all of the benefits of human knowledge without the attendant dangers of organized human resistance. The Master Algorithm describes a world in which individuals can be managed faster than they can respond. What does this future really look like? It looks like the world we already have, but without recourse.