Industry Turns To Self-Help For Improving Pilot Monitoring

By Sean Broderick
Source: Aviation Week & Space Technology

Then came Asiana Flight 214, which is developing as a rich case study in monitoring deficiencies. Facts released by NTSB Chairman Deborah Hersman just after the July 6 accident revealed that the Boeing 777-200ER's final approach airspeed was too slow, falling below the crew's targeted landing speed at least 30 sec. before impact. Three pilots on the flight deck—a captain flying the approach, a check airman in the right seat serving as the pilot-in-command (PIC), and a first officer in the jumpseat—did not see the deviation until it was too late to recover.

The PIC told investigators he had expected the aircraft's autothrottle to maintain a safe speed. Even if the 777 malfunctioned—and nothing released suggests this is the case—it would not explain why the pilots took so long to detect the problem.

“The crew is required to maintain a safe aircraft,” Hersman said three days after the accident, which killed three passengers. “That means they need to monitor.”

The NTSB's initial Asiana briefings were enough to make the human factors community, and some airline training managers, lean forward. “Was [inadequate monitoring] a factor in Asiana Flight 214? We'll have to wait for the final accident report to know for sure,” Dempsey says. “Nevertheless, initial reports . . . have increased the level of interest and, I think, a sense of urgency for the APM report.”

Completed probes of recent monitoring-related incidents found at least one common factor. While inadequate monitoring alone did not cause any of them, proficient monitoring might have prevented each one. The FAA's CRM guidance updates suggest that monitoring initiatives were headed in the right direction, but the NTSB's breakdowns of the Pueblo, Buffalo and Jackson Hole accidents—and early word from San Francisco—paint a different picture.

“We felt like, OK, good, we're on the right path,” says Sumwalt. “But we are now reminded that this is a problem that never really went away.”

Part of the challenge is that human behavior is not the only obstacle to effective monitoring. Environment is another huge factor.

An oft-referenced example is the 1972 crash of Eastern Airlines Flight 401, caused by an undetected descent that began when the pilot accidentally disengaged the altitude-hold function by bumping his control column. The crew, believing the aircraft was holding altitude, was fixated on a burned-out landing gear warning light. The NTSB determined they were not monitoring their instruments and failed to detect the deviation until just before the aircraft crashed into the Florida Everglades. A system design quirk allowed the Lockheed L-1011's altitude-hold mode to switch off without prompting a “disconnect” warning light, adding to the crew's monitoring challenges.

Changing factors such as cockpit layouts takes years. In its 1996 study, “The Interfaces Between Flightcrews and Modern Flight Deck Systems,” the FAA's Human Factors Team made 51 recommendations, including evaluating flight-deck design and systems from a human performance perspective.
