Throwback Thursday: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’

Today's Throwback article is from Donald Norman. If that name sounds familiar, it is because he is the same Dr. Norman who authored the widely influential "The Design of Everyday Things."

In this 1990 paper published in the Philosophical Transactions of the Royal Society, Dr. Norman argued that much of the criticism of automation at the time (and today) is due not to the automation itself (or even to over-automation) but to its poor design; namely, the lack of adequate feedback to the user.

This is a bit different from the concept of the out-of-the-loop (OOTL) scenario that we've talked about before; there is a subtle difference in emphasis. Yes, lack of feedback contributes to OOTL, but here feedback is discussed more in terms of the opacity of the automation's status and operations, not the fact that it is carrying out a task that you previously performed.

He starts off with a statement that should sound familiar if you've read our past Throwback posts:

The problem, I suggest, is that the automation is at an intermediate level of intelligence, powerful enough to take over control that used to be done by people, but not powerful enough to handle all abnormalities.
— p. 137

The obvious solution, then, is to make the automation even more intelligent (i.e., a higher level of automation):

To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate.
— p. 137

If a higher level of automation is what is meant by "more intelligent," then we now know that this is not a viable solution either (the research showing as much was done after this paper was published). However, this point is merely a setup for the larger idea: problems with automation are caused not by the mere presence of automation, but by its lack of feedback. Intelligence means giving just the right feedback at the right time for the task.

He provides aviation case studies implying that the use of automation led to out-of-the-loop performance issues (see previous post). He then directs us through a pair of thought experiments to drive home his point:

Consider two thought experiments. In the first, imagine a captain of a plane who turns control over to the autopilot, as in the case studies of the loss of engine power and the fuel leak. In the second thought experiment, imagine that the captain turns control over to the first officer, who flies the plane ‘by hand’. In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other.
— p. 141

The implication is that when control is handed over to any entity, whether automation or a co-pilot, feedback is critical. Norman cites the widely influential work of Hutchins, who found that informal chatter, along with plenty of other incidental verbal interaction, is crucial to what is essentially situation awareness in human-human teams (although Norman invokes the concept of mental models). Humans provide this feedback naturally; automation does not. Back then, we did not know how to build it into automation, and we probably still do not. The temptation is to provide as much feedback as possible:

We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems.
— p. 143

This is still the current state of automation feedback. If you have spent any time in a hospital, you know that alerts are omnipresent and overlapping (cf. Seagull & Sanderson, 2001).
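To make the contrast concrete, here is a minimal sketch in Python. The signal name and cutoffs are invented for illustration and do not come from the paper: the first function is the single-threshold buzzer Norman critiques, silent right up until it interrupts; the second offers graded status that is always available and escalates only as the situation degrades.

```python
from typing import Optional

def threshold_alarm(oil_pressure_psi: float) -> Optional[str]:
    """The pattern Norman critiques: total silence until one cutoff, then a buzzer."""
    if oil_pressure_psi < 20.0:  # a single, arbitrary threshold
        return "BUZZER: OIL PRESSURE LOW"
    return None  # above the threshold, the operator hears nothing at all

def graded_feedback(oil_pressure_psi: float) -> str:
    """Closer to Norman's ideal: continual status that escalates smoothly."""
    if oil_pressure_psi >= 40.0:
        return "status: oil pressure nominal"         # quiet, ambient feedback
    if oil_pressure_psi >= 20.0:
        return "advisory: oil pressure trending low"  # early, non-intrusive nudge
    return "warning: oil pressure critically low"     # intrusive only when warranted

for psi in (55.0, 30.0, 12.0):
    print(f"{psi:>5} psi | {threshold_alarm(psi)!s:<30} | {graded_feedback(psi)}")
```

The point of the second function is not the specific cutoffs but the shape of the feedback: it is continual rather than binary. Norman ends with some advice about the design of future automation: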

What is needed is continual feedback about the state of the system...This means designing systems that are informative, yet non-intrusive, so the interactions are done normally and continually, where the amount and form of feedback adapts to the interactive style of the participants and the nature of the problem.
— p. 143

Information visualization and presentation research (e.g., sonification, or the work of Edward Tufte) tackles part of the problem; these techniques attempt to provide constant, non-intrusive information. Adaptive automation, or automation that scales its level based on physiological indicators, is another attempt to get closer to Norman's vision, but, in my opinion, it may be insufficient and even more disruptive, because it does not address feedback.
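For context, here is a minimal sketch of the adaptive-automation idea as I described it, with an invented workload index and cutoffs (none of this comes from Norman's paper). Notice that nothing in it improves what the automation tells the operator; it only reallocates the task, which is exactly the gap I am pointing to.

```python
# A hypothetical, normalized workload index (0 = idle, 1 = saturated) drives the
# level of automation. The cutoffs are invented for illustration; real systems
# would derive the index from physiological measures such as heart-rate
# variability or eye tracking.

def adapt_automation_level(workload_index: float) -> str:
    if workload_index < 0.3:
        return "manual"     # low workload: keep the operator hands-on, in the loop
    if workload_index < 0.7:
        return "assisted"   # moderate workload: automation helps, operator supervises
    return "automated"      # high workload: offload the task entirely

for w in (0.1, 0.5, 0.9):
    print(f"workload {w:.1f} -> {adapt_automation_level(w)}")
```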

To conclude, Norman's human action cycle is completely consistent with, and probably heavily informs, his thoughts on automation.

Reference: Norman, D. A. (1990). The 'problem' with automation: Inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 327(1241), 585-593.