Throwback Thursday: The Out-of-the-Loop Performance Problem and Level of Control in Automation
By definition, when users interact with automated systems, they carry out some of the task while the automation does the rest. When everything runs smoothly, this split in task allocation between user and machine causes no outward problems; the user is, by definition, “out of the loop” for part of the task.
Problems only start to appear when the automation fails or is unavailable and the user is suddenly left to do the activities previously carried out by automation. The user is now forced back “into the loop.”
I sort of experienced this when I got my new 2017 car. Naturally, as an automation researcher I asked for “all the things!” when it came to driver assistance features (yeah, for “research purposes”): active lane assist, adaptive cruise control with stop and go, autonomous emergency braking, auto-high-beams, auto-wipers, and more.
The combined driver assistance features probably equated to somewhere around Level 2 autonomy (see figure). I had fun testing the capabilities and limitations of each feature (safely!). They worked, for the most part, and I turned them on and forgot about them for the past 9 months.
However, while my car was being serviced recently, I drove a loaner car that had none of these features and was quite shocked at how sloppy my driving had become. Driving on the interstate seemed much more effortful than before. In addition, I had become accustomed to playing with my phone, messing with my music player, or doing other things on long stretches of straight highway. I could no longer do this safely without driver assistance features.
This was when I realized how much of the lower-level task of driving (staying in the lane, keeping a constant distance to the car ahead, turning on and off my high beams) was not done by me. As an automation researcher, I was quite surprised.
This anecdote illustrates two phenomena in automation: complacency and the resulting skill degradation. Complacency showed in how easily and willingly I gave up a good chunk of the driving task to automation (in my personal car). I admit to this complacency and high trust, but they came only after several weeks of testing the limits of the automation to understand the conditions where it worked best (e.g., lane keeping did not work well with high-contrast road shadows). It is doubtful that regular drivers do this.
Because of my complacency (and high trust), I experienced mild skill degradation: specifically, in my ability to smoothly and effortlessly maintain my lane position and following distance.
You may have experienced a similar disorientation when using your phone for GPS-guided directions and it gets taken away (e.g., you enter a low-signal area, or the phone crashes). Suddenly, you have no idea where you are or what to do next.
So what, exactly, causes this performance degradation when automation is taken away?
This "out of the loop" (OOTL) performance problem (decreased performance from being out of the loop and then suddenly being brought back in) is the topic of this week's paper by Endsley and Kiris (1995). In the paper, Endsley and Kiris explored the specific causes of the OOTL performance problem and possible solutions.
Endsley and Kiris claim that most of the problems associated with OOTL are due to a loss of situation awareness (SA). SA is a concept that Endsley refined in an earlier paper; it is, roughly, your awareness of the current situation and your ability to use that information to predict what will happen in the near future. Endsley defines situation awareness as:

> the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.
Within this definition of SA, there are three levels:
- Level 1: perception of the elements (cues) in the environment: can you see the warning light on the dashboard?
- Level 2: comprehension of the meaning of the perceived elements: do you understand what the engine symbol means on the dashboard?
- Level 3: projection of the future: given what you know, can you predict whether braking is needed to prevent a collision?
In this paper, she argued that the presence of automation essentially interfered with all levels of situation awareness.
Much of the existing research at the time had examined only physical/manual tasks, not cognitive/decision-making tasks. The purpose of Endsley and Kiris' paper was to examine the OOTL performance problem for these higher-level cognitive tasks.
Their key hypothesis was that as the level of automation increased (see Arathi's earlier Throwback post on levels of automation), situation awareness would decrease. This would show up as longer times for users to make a decision and reduced confidence in those decisions, since users would feel less skilled or qualified (because they were so OOTL).
The bottom line: yes, the hypotheses were confirmed. Increasingly higher levels of automation do seem to negatively impact situation awareness, and this shows up as longer decision times (because of reduced SA) and slightly reduced confidence in those decisions. In terms of the examples above, the driver assistance features (a lower-level form of automation) did not really cause me to lose situation awareness, but they did cause skill degradation. In the GPS example, however, a much higher level of automation, it DOES lead to loss of situation awareness.
So what do we do with this information? First, it clearly shows that high or full autonomy for higher-level cognitive tasks is not a very desirable goal. It might be desirable only if the automation were 100% reliable, which can never be assured. Instead, there will be situations where the autonomy fails and the user must assume manual control (think of a self-driving car that fails). In those circumstances, the driver will have dramatically reduced SA and thus poor performance.
The surprising design recommendation? An intermediate level of automation: keeping the user actively involved in part of the task preserves situation awareness far better than automating it fully.
Let's hope designers of future autonomy are listening!