Several car manufacturers and technology companies currently have autonomous cars on their road maps. However, highly reliable automated systems introduce a conundrum (Endsley, 2016): they make it difficult for human operators to monitor critical pieces of information and to take over manual control when needed.
For example, drivers with high automation trust are less likely to monitor the automated driving system (e.g., Hergeth, Lorenz, Vilimek, & Krems, 2016a) and more likely to engage in nondriving tasks (e.g., Carsten, Lai, Barnard, Jamson, & Merat, 2012).
Whether we like it or not, automation is here to stay and to grow. Given that, how can we make human–automation interaction safer (Endsley, 2016; Hergeth, Lorenz, Vilimek, & Krems, 2016b)?
- Allow automation to degrade gracefully.
- Create transparent automation user interfaces that enable human operators to understand what is going on and to make predictions.
- As the automation learns new behaviors, convey the changes to operators to enable them to maintain an accurate mental model of the automation.
- Design automated systems that cooperate, coordinate, and collaborate with human operators.
- Enable shared situation awareness between the human and the automation, which in turn promotes goal alignment, lets each team member (i.e., the human and the machine) know what the other is doing, and permits reallocation of responsibilities and communication of strategies and actions.
- Allow operators to experience automation failure situations during practice trials, which is more effective than merely informing them about automation failures, to prevent the effects of the “first automation failure.” For example, in a simulated driving task, driver performance in the first manual takeover situation was best with prior familiarization with takeover requests and worst without it.
- Acknowledge that there are individual differences in responding to consecutive automation failures and provide training to improve working memory and sustained attentional skills to enable faster response times (Jipp, 2016).
Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task substitution in semiautomated driving: Does it matter what aspects are automated? Human Factors, 54, 747–761. http://journals.sagepub.com/doi/10.1177/0018720812460246
Endsley, M. R. (2016). From here to autonomy: Lessons learned from human–automation research. Human Factors, 20, 1–23. http://journals.sagepub.com/doi/10.1177/0018720816681350
Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016a). Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving. Human Factors, 58, 509–519. http://journals.sagepub.com/doi/10.1177/0018720815625744
Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016b). Prior familiarization with takeover requests affects drivers’ takeover performance and automation trust. Human Factors, published online December 20, 2016. http://journals.sagepub.com/doi/10.1177/0018720816678714
Jipp, M. (2016). Reaction times to consecutive automation failures: A function of working memory and sustained attention. Human Factors, 58, 1248–1261. http://journals.sagepub.com/doi/10.1177/0018720816662374