On autopilot: the dangers of overautomation

The grounding of Boeing’s popular new 737 Max 8 planes, after two recent crashes, has placed a new focus on flight automation. Here’s an excerpt from my 2014 book on automation and its human consequences, The Glass Cage, that seems relevant to the discussion.

The lives of aviation’s pioneers were exciting but short. Lawrence Sperry died in 1923 when his plane crashed into the English Channel. Wiley Post died in 1935 when his plane went down in Alaska. Antoine de Saint-Exupéry died in 1944 when his plane disappeared over the Mediterranean. Premature death was a routine occupational hazard for pilots during aviation’s early years; romance and adventure carried a high price. Passengers died with alarming frequency, too. As the airline industry took shape in the 1920s, the publisher of a U.S. aviation journal implored the government to improve flight safety, noting that “a great many fatal accidents are daily occurring to people carried in airplanes by inexperienced pilots.”

Air travel’s lethal days are, mercifully, behind us. Flying is safe now, and pretty much everyone involved in the aviation business believes that advances in automation are one of the reasons why. Together with improvements in aircraft design, airline safety routines, crew training, and air traffic control, the mechanization and computerization of flight have contributed to the sharp and steady decline in accidents and deaths over the decades. In the United States and other Western countries, fatal airliner crashes have become exceedingly rare. Of the more than seven billion people who boarded U.S. flights in the ten years from 2002 through 2011, only 153 ended up dying in a wreck, a rate of about two deaths for every hundred million passengers. In the ten years from 1962 through 1971, by contrast, 1.3 billion people took flights, and 1,696 of them died, for a rate of 133 deaths per hundred million.
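Those rates follow directly from the totals cited above; a quick back-of-the-envelope check:

```python
# Sanity check on the fatality rates, using the excerpt's own passenger totals.
def deaths_per_hundred_million(deaths, passengers):
    """Fatalities per 100 million passengers boarded."""
    return deaths / passengers * 100_000_000

# 2002-2011: 153 deaths among more than 7 billion passengers boarded
modern = deaths_per_hundred_million(153, 7_000_000_000)

# 1962-1971: 1,696 deaths among 1.3 billion passengers boarded
early = deaths_per_hundred_million(1_696, 1_300_000_000)

print(round(modern, 1))  # 2.2 -- about two deaths per hundred million
print(round(early))      # 130 -- on the order of 133 per hundred million
```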

But this sunny story carries a dark footnote. The overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,” says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into what has become a rare role, they too often make mistakes. The consequences, as the Continental Connection and Air France disasters of 2009 show, can be catastrophic. Over the last 30 years, scores of psychologists, engineers, and other ergonomics, or “human factors,” researchers have studied what’s gained and lost when pilots share the work of flying with software. What they’ve learned is that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human factors expert at Britain’s University of Bristol, calls “a deskilling of the crew.”

Concerns about the unintended side effects of flight automation aren’t new. They date back at least to the early days of fly-by-wire controls. A 1989 report from NASA’s Ames Research Center noted that, as computers had begun to multiply on airplanes during the preceding decade, industry and governmental researchers “developed a growing discomfort that the cockpit may be becoming too automated, and that the steady replacement of human functioning by devices could be a mixed blessing.” Despite a general enthusiasm for computerized flight, many in the airline industry worried that “pilots were becoming over-dependent on automation, that manual flying skills may be deteriorating, and that situational awareness might be suffering.”

Many studies since then have linked particular accidents or near misses to breakdowns of automated systems or to “automation-induced errors” on the part of flight crews. In 2010, the Federal Aviation Administration released some preliminary results of a major study of airline flights over the preceding ten years, which showed that pilot errors had been involved in more than 60 percent of crashes. The research further indicated, according to a report from FAA scientist Kathy Abbott, that automation has made such errors more likely. Pilots can be distracted by their interactions with onboard computers, Abbott said, and they can “abdicate too much responsibility to the automated systems.”

In the worst cases, automation can place added and unexpected demands on pilots during moments of crisis—when, for instance, the technology fails. The pilots may have to interpret computerized alarms, input data, and scan information displays even as they’re struggling to take manual control of the plane and orient themselves to their circumstances. The tasks and attendant distractions increase the odds that the aviators will make mistakes. Researchers refer to this as the “automation paradox.” As Mark Scerbo, a psychologist and human-factors expert at Virginia’s Old Dominion University, has explained, “The irony behind automation arises from a growing body of research demonstrating that automated systems often increase workload and create unsafe working conditions.”

The anecdotal and theoretical evidence collected through accident reports, surveys, and studies received empirical backing from a rigorous experiment conducted by Matthew Ebbatson, a young human factors researcher at Cranfield University, a top U.K. engineering school. Frustrated by the lack of hard, objective data on what he termed “the loss of manual flying skills in pilots of highly automated airliners,” Ebbatson set out to fill the gap. He recruited 66 veteran pilots from a British airline and had each of them get into a flight simulator and perform a challenging maneuver—bringing a Boeing 737 with a blown engine in for a landing in bad weather. The simulator disabled the plane’s automated systems, forcing the pilots to fly by hand. Some of the pilots did exceptionally well in the test, Ebbatson reported, but many of them performed poorly, barely exceeding “the limits of acceptability.”

Ebbatson then compared detailed measures of each pilot’s performance in the simulator—the pressure they exerted on the yoke, the stability of their airspeed, the degree of variation in their course—with their historical flight records. He found a direct correlation between a pilot’s aptitude at the controls and the amount of time the pilot had spent flying by hand, without the aid of automation. The correlation was particularly strong with the amount of manual flying done during the preceding two months. The analysis indicated that “manual flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice.” Particularly “vulnerable to decay,” Ebbatson noted, was a pilot’s ability to maintain “airspeed control”—a skill that’s crucial to recognizing, avoiding, and recovering from stalls and other dangerous situations.
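At its core, that kind of analysis is a simple correlation between recent hand-flying time and a handling-quality measure. A minimal sketch with made-up numbers (the data, variable names, and scoring scheme here are all hypothetical illustrations, not Ebbatson’s actual data or method):

```python
import math

# Hypothetical data for eight pilots: hours of manual flying in the
# preceding two months, and a handling-quality score from a simulator run.
manual_hours = [1, 2, 3, 5, 8, 10, 14, 20]
handling_score = [52, 55, 58, 63, 70, 74, 80, 88]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(manual_hours, handling_score)
print(f"r = {r:.2f}")  # close to 1: more recent hand-flying, better handling
```

A strong positive r on data like this is the statistical shape of Ebbatson’s finding: skill at the controls tracks recent manual practice.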

It’s no mystery why automation takes a toll on pilot performance. Like many challenging jobs, flying a plane involves a combination of psychomotor skills and cognitive skills—thoughtful action and active thinking, in simple terms. A pilot needs to manipulate tools and instruments with precision while swiftly and accurately making calculations, forecasts, and assessments in his head. And while he goes through these intricate mental and physical maneuvers, he needs to remain vigilant, alert to what’s going on around him and adept at distinguishing important signals from unimportant ones. He can’t allow himself either to lose focus or to fall victim to tunnel vision. Mastery of such a multifaceted set of skills comes only with rigorous practice. A beginning pilot tends to be clumsy at the controls, pushing and pulling the yoke with more force than is necessary. He often has to pause to remember what he should do next, to walk himself methodically through the steps of a process. He has trouble shifting seamlessly between manual and cognitive tasks. When a stressful situation arises, he can easily become overwhelmed or distracted and end up overlooking a critical change in his circumstances.

In time, after much rehearsal, the novice gains confidence. He becomes less halting in his work and much more precise in his actions. There’s little wasted effort. As his experience continues to deepen, his brain develops so-called mental models—dedicated assemblies of neurons—that allow him to recognize patterns in his surroundings. The models enable him to interpret and react to stimuli as if by instinct, without getting bogged down in conscious analysis. Eventually, thought and action become seamless. Flying becomes second nature. Years before researchers began to plumb the workings of pilots’ brains, Wiley Post described the experience of expert flight in plain, precise terms. He flew, he said in 1935, “without mental effort, letting my actions be wholly controlled by my subconscious mind.” He wasn’t born with that ability. He developed it through lots of hard work.

When computers enter the picture, the nature and the rigor of the work changes, as does the learning the work engenders. As software assumes moment-by-moment control of the craft, the pilot is relieved of much manual labor. This reallocation of responsibility can provide an important benefit. It can reduce the pilot’s workload and allow him to concentrate on the cognitive aspects of flight. But there’s a cost. Exercised much less frequently, the psychomotor skills get rusty, which can hamper the pilot on those rare but critical occasions when he’s required to take back the controls. There’s growing evidence that recent expansions in the scope of automation also put cognitive skills at risk. When more advanced computers begin to take over planning and analysis functions, such as setting and adjusting a flight plan, the pilot becomes less engaged not only physically but mentally. Because the precision and speed of pattern recognition appear to depend on regular practice, the pilot’s mind may become less agile in interpreting and reacting to fast-changing situations. He may suffer what Ebbatson calls “skill fade” in his mental as well as his motor abilities.

Pilots themselves are not blind to automation’s toll. They’ve always been wary about ceding responsibility to machinery. Airmen in World War I, justifiably proud of their skill in maneuvering their planes during dogfights, wanted nothing to do with the fancy Sperry autopilots that had recently been introduced. In 1959, the original Mercury astronauts famously rebelled against NASA’s plan to remove manual flight controls from spacecraft. But aviators’ concerns are more acute now. Even as they praise the enormous gains being made in flight technology, and acknowledge the safety and efficiency benefits, they worry about the erosion of their talents. As part of his research, Ebbatson surveyed commercial pilots, asking them whether “they felt their manual flying ability had been influenced by the experience of operating a highly automated aircraft.” Fully 77 percent reported that “their skills had deteriorated”; just 7 percent felt their skills had improved.

The worries seem particularly pronounced among more experienced pilots, especially those who began their careers before computers became entwined with so many aspects of aviation. Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview, he put the problem in stark terms: “We’re forgetting how to fly.”

One thought on “On autopilot: the dangers of overautomation”

  1. Ash

    The only thing which came to mind after the Boeing news is The Glass Cage.
    I imagine you yelling in your head, “That’s what I’ve been telling you guys all these years!” far too many times.
    Thanks for keeping your cool, Nick. It makes me feel there is a lot more good work coming.
