Throwback Thursday: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’
Today's Throwback article is from Donald Norman. If that name sounds familiar, it is the same Dr. Norman who authored the widely influential "The Design of Everyday Things."
In this 1990 paper published in the Philosophical Transactions of the Royal Society, Dr. Norman argued that much of the criticism of automation at the time (and today) is due not to the automation itself (or even to over-automation) but to its poor design; namely, inadequate feedback to the user.
This is a bit different from the out-of-the-loop (OOTL) scenario that we've talked about before; there is a subtle difference in emphasis. Yes, lack of feedback contributes to OOTL, but here feedback is discussed more in terms of the opacity of the automation's status and operations, not the fact that it is carrying out a task you previously performed.
He starts off with a statement that should sound familiar if you've read our past Throwback posts:
The obvious solution, then, is to make the automation even more intelligent (i.e., a higher level of automation):
If a higher level of automation is what is meant by "more intelligent," then we already know that this is not a viable solution either (the research showing this was done after this paper was published). However, this point is merely a setup for the idea that problems with automation are caused not by the mere presence of automation but by its lack of feedback. Intelligence means giving just the right feedback at the right time for the task.
He provides aviation case studies suggesting that the use of automation led to out-of-the-loop performance issues (see previous post). He then walks us through a thought experiment to drive home his point:
The implication is that when control is handed over to any entity (automation or a co-pilot), feedback is critical. Norman cites the widely influential work of Hutchins, who found that informal chatter, along with plenty of other incidental verbal interaction, is crucial to what is essentially situation awareness in human-human teams (although Norman invokes the concept of mental models). Humans do this; automation does not. Back then, we did not know how to build this into automation, and we probably still do not. The temptation is to provide as much feedback as possible:
This is the current state of automation feedback. If you have spent any time in a hospital, you know that alerts are omnipresent and overlapping (cf. Seagull & Sanderson, 2001). Norman ends with some advice about the design of future automation:
Information visualization and presentation research (e.g., sonification, or the work of Edward Tufte) tackles part of the problem. These techniques attempt to provide constant, non-intrusive information. Adaptive automation, or automation that scales its level based on physiological indicators, is another attempt to get closer to Norman's vision but, in my opinion, may be insufficient and even more disruptive, as it does not address feedback.
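To make the adaptive-automation idea concrete, here is a minimal sketch of the control logic: the system shifts its level of automation up or down based on an operator-workload signal (assumed here to come from physiological indicators such as heart-rate variability). All names, levels, and thresholds below are illustrative assumptions, not from Norman's paper or any specific system.

```python
# Hypothetical sketch of adaptive automation: shift the level of
# automation based on a normalized operator-workload signal.
from dataclasses import dataclass

# Illustrative taxonomy of automation levels (an assumption, not a standard).
LEVELS = ["manual", "advisory", "shared-control", "fully-automatic"]


@dataclass
class AdaptiveAutomation:
    level_index: int = 1  # start in "advisory" mode

    def update(self, workload: float) -> str:
        """Adjust the automation level from a workload estimate.

        workload: normalized 0.0 (idle) .. 1.0 (overloaded), assumed to be
        derived from physiological sensors.
        """
        if workload > 0.8 and self.level_index < len(LEVELS) - 1:
            self.level_index += 1  # operator overloaded: automate more
        elif workload < 0.2 and self.level_index > 0:
            self.level_index -= 1  # operator idle: hand control back
        return LEVELS[self.level_index]


auto = AdaptiveAutomation()
print(auto.update(0.9))  # high workload -> "shared-control"
print(auto.update(0.1))  # low workload  -> back to "advisory"
```

Note what this sketch makes obvious: the system silently changes modes without telling the operator why, which is exactly Norman's complaint — scaling the level of automation does nothing by itself to provide feedback.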
To conclude, Norman's human action cycle is completely consistent with, and probably heavily informs, his thoughts on automation.
Why nobody likes a smart machine [NYTIMES.com]