Throwback Thursday: Use, Misuse, Disuse, and Abuse of Automation
In this throwback post, I will introduce some important but similar-sounding terms from the automation literature and their causes: use, misuse, disuse, and abuse. I will highlight the key takeaways from the article, followed by some light commentary.
I'm not very good with directions and find myself relying a lot on Google Maps when I am in an unfamiliar city. I use it because I know that I can rely on it most of the time. In other words, my positive attitude toward and use of Google Maps (i.e., the automated navigation aid) is influenced by its high reliability as well as by my higher confidence in Google Maps compared to my own navigational skills (there have been numerous occasions where Google Maps has come to my rescue!).
Similarly, operators in complex environments will tend to defer to automation when they think it is highly reliable and when their confidence in the automation exceeds their confidence in their own abilities to perform the task.
While it is true that reliable automation may be better than humans at some tasks, the costs associated with failure are high for these highly reliable automated systems. As Rich described in his last throwback post, a potential consequence of highly reliable automation is the out-of-the-loop performance syndrome: the inability of operators to take over manual control in the event of an automation failure, both because of their overreliance on the automation and because of the degradation of their manual skills.
High trust in automation can also make human operators less attentive to other sources of contradictory information; operators become so fixated on the notion that the automation is right that they fail to examine other information in the environment that seems to suggest otherwise. In research, we call this automation bias (more on this in a later post).
Misuse can be minimized by designing automation that is transparent about its state and its actions and that provides salient feedback to human operators. Next week's throwback post will elaborate on this point.
This means that automation that has a high propensity for false alarms is less likely to be trusted. For example, if the fire alarm in my building goes off all the time, I am less likely to respond to it (the cry-wolf effect). It's not as simple as saying, "just make it less sensitive!" Designing an automated system with a low false alarm rate is a bit of a conundrum, because with a lower false alarm rate also comes a higher miss rate; that is, the fire alarm may not sound when there is a real fire.
While the cost of distrusting and disusing automation is high, the cost of missing an event can also be high in safety-critical domains. Designers should therefore consider the comparative costs of high false alarm rates and high miss rates when designing automation. This trade-off would obviously depend on the context in which the automation is used.
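The false-alarm/miss trade-off above can be sketched with a toy signal-detection simulation. Everything here is invented for illustration (the distributions, means, and thresholds are assumptions, not anything from the article): an alarm fires whenever a sensor reading exceeds a threshold, and raising that threshold trades false alarms for misses.

```python
import random

random.seed(42)

# Hypothetical sensor readings: "noise" comes from ordinary days,
# "signal" from real fires. Values and distributions are made up
# purely to illustrate the trade-off.
noise = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # no fire
signal = [random.gauss(2.0, 1.0) for _ in range(10_000)]  # real fire

def rates(threshold):
    """Return (false-alarm rate, miss rate) for a given alarm threshold."""
    false_alarms = sum(x > threshold for x in noise) / len(noise)
    misses = sum(x <= threshold for x in signal) / len(signal)
    return false_alarms, misses

for t in (0.5, 1.0, 1.5, 2.0):
    fa, miss = rates(t)
    print(f"threshold={t:.1f}  false-alarm rate={fa:.2%}  miss rate={miss:.2%}")
```

Running this shows the conundrum directly: each higher threshold cuts the false-alarm rate (fewer cry-wolf events) but inflates the miss rate, so "just make it less sensitive" always has a cost.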
My earlier throwback post discusses the importance of considering the human performance consequences associated with automation use. Completely removing the human operator from the equation, on the assumption that this will eliminate human error entirely, is not a wise choice. It can leave operators with a higher workload and in a position to perform tasks for which they are not suited. This irony was discussed by Bainbridge in 1983. In short, operators' responsibilities should be based on their capabilities.
Citation: Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230-253.
Downloadable link here.