Monitor Smart, Part 1: Automation Bias

One of my most persistent curiosities is finding new ways to describe things I thought I already knew well enough. In support of this urge, the journals I subscribe to pile up on my desk weekly, waiting to be opened and scoured for new knowledge that I can apply to my own flying and share in forums like this. Most of the time—I freely admit—my search comes up with little or nothing worth notice; but sometimes it does. And when that happens, it’s like finding a 20-dollar bill in a pair of jeans that has been through the laundry—only I’m likely to get less out of the twenty bucks than out of the new nugget of knowledge.

We’ve been putting a good deal of stress on monitoring over the past few months, and I think it’s worthwhile to keep the discussion going, especially as we each assess the impact of recent reports across the industry that have brought that single principle of Automation Airmanship into focus. Some of that emphasis has been on clearly defining what monitoring consists of, what should be monitored, and why the contemporary flight deck environment can make it so challenging. In reading the most recent edition of The International Journal of Aviation Psychology, my interest was piqued by a submission from a group of Dutch researchers. Their report deals with a significant aspect of contemporary monitoring: automation bias. I found it very informative that the authors offered several definitions of automation bias in the article:

“…the phenomenon that users of automation have a tendency to trust and follow the signals of the automated system to the extent that contradictory information available from other sources is ignored or is detected too late.” (Mosier & Skitka, 1996)

“…(a) withdrawal of attention in terms of incomplete cross-checking of information, (b) active discounting of contradictory system information, and (c) inattentive processing of contradictory information [analogous] to a ‘looking-but-not-seeing’ effect” (Manzey, Reichenbach, & Onnasch, 2012)

“…trusting the system instead of vigilantly seeking contradictory information…” (de Boer, Heems, & Hurts, 2014)

That’s a bunch of academic language to describe what can prevent crewmembers from accurately “seeing what’s there” and enhancing Situation and Mode Awareness (SMA). It also shows how the definition of automation bias has evolved along with the technology on the flight deck. The researchers conducted a study to test, basically, whether the commonly accepted “standard for detection of visible alarms” of 23 to 45 seconds is still viable. In other words, how much time does it take a pilot to detect, assess, and act on a cockpit alert that changes their perception of how the aircraft’s automated systems are performing? Their research suggests that it can take much longer than that: the median time in their study was 143 seconds.

From a practical point of view, what this might mean for pilots is that knowledge-based monitoring has a greater role than conventional wisdom suggests, even on the highly automated flight deck. Our next post will take this challenge on and suggest a way for crews to bring their monitoring skills to a new level of vigilance, by phase of flight. In the meantime, getting comfortable with what automation bias is can be a big multiplier toward more effective monitoring.

Think about it.

Until our next report, fly safe, and always, fly first.

Reference: de Boer, Robert J., et al. (2014). The Duration of Automation Bias in a Realistic Setting. The International Journal of Aviation Psychology, 24(4), 287–299. Taylor & Francis Group, Philadelphia, PA.