Throwback Thursday: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’

Today's Throwback article is from Donald Norman. If that name sounds familiar, it is the same Dr. Norman who authored the widely influential The Design of Everyday Things.

In this 1990 paper published in the Philosophical Transactions of the Royal Society, Dr. Norman argued that much of the criticism of automation at the time (and today) is due not to the automation itself (or even over-automation) but to its poor design; namely, inadequate feedback to the user.

This is a bit different from the out-of-the-loop (OOTL) scenario that we've talked about before; there is a subtle difference in emphasis. Yes, lack of feedback contributes to OOTL, but here feedback is discussed more in terms of the opacity of the automation's status and operations, not the fact that it is carrying out a task that you previously performed.

Norman starts off with a statement that should sound familiar if you've read our past Throwback posts:

The problem, I suggest, is that the automation is at an intermediate level of intelligence, powerful enough to take over control that used to be done by people, but not powerful enough to handle all abnormalities.
— pp. 137

The obvious solution, then, is to make the automation even more intelligent (i.e., a higher level of automation):

To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate.
— pp. 137

If a higher level of automation is what is meant by "more intelligent," then we already know that this is not a viable solution either (the research showing that came after the publication of this paper). However, this point is merely a setup for the idea that problems with automation are caused not by the mere presence of automation but by its lack of feedback. Intelligence means giving just the right feedback at the right time for the task.

He provides aviation case studies suggesting that the use of automation led to out-of-the-loop performance issues (see previous post). He then walks us through a pair of thought experiments to drive home his point:

Consider two thought experiments. In the first, imagine a captain of a plane who turns control over to the autopilot, as in the case studies of the loss of engine power and the fuel leak. In the second thought experiment, imagine that the captain turns control over to the first officer, who flies the plane ‘by hand’. In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other.
— pp. 141

The implication is that when control is handed over to any entity (automation or a co-pilot), feedback is critical. Norman cites the widely influential work of Hutchins, who found that informal chatter, along with lots of other incidental verbal interaction, is crucial to what is essentially situation awareness in human-human teams (although Norman invokes the concept of mental models). Humans do this; automation does not. Back then, we did not know how to make automation do it, and we probably still do not. The temptation is to provide as much feedback as possible:

We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems.
— pp. 143
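
To make the "single threshold condition" pattern concrete, here is a deliberately naive sketch (in Python, with invented instrument names and limits) of that style of alarm logic: every instrument fires on its own fixed limit, with no context and no prioritization, which is exactly how alarm floods arise.

```python
# A deliberately naive alarm scheme: one fixed threshold per instrument,
# no context, no prioritization. Instrument names and limits are invented.
THRESHOLDS = {
    "oil_pressure_psi": ("below", 20),
    "engine_temp_c": ("above", 110),
    "fuel_flow_lph": ("below", 5),
}

def check_alarms(readings):
    """Return a message for every instrument whose reading crosses its single threshold."""
    alarms = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = readings[name]
        if (direction == "above" and value > limit) or (direction == "below" and value < limit):
            alarms.append(f"{name} ALARM (value={value}, limit={limit})")
    return alarms

# A single degraded engine can trip several unrelated-looking alarms at once.
print(check_alarms({"oil_pressure_psi": 12, "engine_temp_c": 118, "fuel_flow_lph": 3}))
```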

This is the current state of automation feedback. If you have spent any time in a hospital, you know that alerts are omnipresent and overlapping (cf. Seagull & Sanderson, 2001). Norman ends with some advice about the design of future automation:

What is needed is continual feedback about the state of the system...This means designing systems that are informative, yet non-intrusive, so the interactions are done normally and continually, where the amount and form of feedback adapts to the interactive style of the participants and the nature of the problem.
— pp. 143

Information visualization and presentation research (e.g., sonification, or the work of Edward Tufte) tackles part of the problem; these techniques attempt to provide constant, non-intrusive information. Adaptive automation, or automation that scales its level based on physiological indicators, is another attempt to get closer to Norman's vision but, in my opinion, may be insufficient and even disruptive because it does not address feedback.
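
As a rough illustration of the adaptive-automation idea, here is a sketch (not any particular system) in which a hypothetical workload index, perhaps derived from physiological measures, raises or lowers the level of automation. Note that, consistent with the argument above, this adjusts task allocation but says nothing about the feedback the operator receives.

```python
# Toy adaptive-automation loop. The workload index, thresholds, and level
# names are all hypothetical; real systems estimate workload from
# physiological or performance measures.
LEVELS = ["manual", "assisted", "semi-automatic", "fully automatic"]

def adapt_level(current, workload):
    """Step one level up when workload is high, one level down when it is low."""
    if workload > 0.8 and current < len(LEVELS) - 1:
        return current + 1   # offload work to the automation
    if workload < 0.3 and current > 0:
        return current - 1   # hand work back to keep the operator engaged
    return current

level = 1  # start in "assisted"
for workload in [0.4, 0.85, 0.9, 0.2, 0.1]:
    level = adapt_level(level, workload)
    print(f"workload={workload:.2f} -> {LEVELS[level]}")
```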

To conclude, Norman's human action cycle is completely consistent with, and probably heavily informs, his thoughts on automation.

Reference: Norman, D. A. (1990). The 'problem' with automation: inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 327(1241), 585-593.

Throwback Thursday: Use, Misuse, Disuse, and Abuse of Automation

In this throwback post, I will introduce some important but similar-sounding terms from the automation literature, along with their causes: use, misuse, disuse, and abuse. I will highlight the key takeaways from the article and follow each with some light commentary.

Until recently, the primary criteria for applying automation were technological feasibility and cost. To the extent that automation could perform a function more efficiently, reliably, or accurately than the human operator, or merely replace the operator at a lower cost, automation has been applied at the highest level possible.
— Parasuraman & Riley, pp. 232

The authors made the above statement 20 years ago, and the irony is that applying the highest level of automation whenever technologically feasible, without much regard for the consequences for human performance, still seems to be the dominant design and engineering philosophy.

Automation use: Automation usage and attitudes towards automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation.
— Parasuraman & Riley, pp. 234

I'm not very good with directions and find myself relying a lot on Google Maps when I am in an unfamiliar city. I use it because I know that I can rely on it most of the time. In other words, my positive attitude toward Google Maps (i.e., an automated navigation aid) and my use of it are influenced by its high reliability, as well as by my greater confidence in it than in my own navigational skills (it has come to my rescue on numerous occasions!).

Similarly, operators in complex environments will tend to defer to automation when they think it is highly reliable and when their confidence in the automation exceeds their confidence in their own abilities to perform the task.

Misuse: Excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or fail to monitor the automation’s behavior. Inadequate monitoring of automated systems has been implicated in several aviation accidents.
— Parasuraman & Riley, pp. 238-239

While it is true that reliable automation may be better than humans at some tasks, the costs associated with failure are high for these highly reliable automated systems. As Rich described in his last throwback post, a potential consequence of highly reliable automated systems is the out-of-the-loop performance problem: the inability of operators to take over manual control in the event of an automation failure, due to their overreliance on the automation as well as the degradation of their manual skills.

High trust in automation can also make human operators less attentive to contradictory sources of information; operators become so fixated on the notion that the automation is right that they fail to examine other information in the environment that seems to suggest otherwise. In research, we call this automation bias (more on this in a later post).

Misuse can be minimized by designing automation that is transparent about its state and its actions and that provides salient feedback to human operators.  Next week's throwback post will elaborate on this point.

Disuse: If a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators.
— Parasuraman & Riley, pp. 244

This means that automation with a high propensity for false alarms is less likely to be trusted. For example, if the fire alarm in my building goes off all the time, I am less likely to respond to it (the cry-wolf effect). It's not as simple as saying, "just make it less sensitive!" Designing an automated system with a low false alarm rate is a bit of a conundrum, because a lower false alarm rate generally comes with a higher miss rate; that is, the fire alarm may not sound when there is a real fire.

While the cost of distrusting and disusing automation is high, the cost of missing an event can also be high in safety-critical domains. Designers should therefore weigh the comparative costs of high false alarm rates and high miss rates when designing automation, and the right balance obviously depends on the context in which the automation is used.
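
The trade-off can be made concrete with a toy signal-detection sketch (every number here is invented): on a noisy sensor, raising the alarm threshold lowers the false alarm rate but raises the miss rate, and lowering it does the opposite.

```python
# Toy signal-detection demo of the false-alarm / miss trade-off.
# The sensor readings are simulated; every number here is invented.
import random

random.seed(0)
no_fire = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # readings when there is no fire
fire = [random.gauss(2.0, 1.0) for _ in range(10_000)]     # readings during a real fire

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_alarm_rate = sum(r > threshold for r in no_fire) / len(no_fire)
    miss_rate = sum(r <= threshold for r in fire) / len(fire)
    print(f"threshold={threshold:.1f}  false alarms={false_alarm_rate:.1%}  misses={miss_rate:.1%}")
```

Where to put that threshold is precisely the context-dependent cost judgment described above.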

Abuse: Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator’s authority over the system.
— Parasuraman & Riley, pp. 246

My earlier throwback post discusses the importance of considering the human performance consequences associated with automation use. Completely eliminating the human operator from the equation, on the assumption that this will eliminate human error entirely, is not a wise choice. It can leave operators with a higher workload and in a position of performing tasks for which they are not suited. This irony was discussed by Bainbridge in 1983. In short, operators' responsibilities should be based on their capabilities.

Citation:  Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230-253.

Throwback Thursday: The Out-of-the-Loop Performance Problem and Level of Control in Automation

Dust off your Hypercolor t-shirts and fanny packs filled with pogs, because we are going back to the 1990s for this week's Throwback post.

When users interact with automated systems, they carry out some of the task while the automation does the rest. When everything is running smoothly, this split in task allocation between user and machine causes no outward problems; the user is, by definition, "out of the loop" for part of the task.

Problems only start to appear when the automation fails or is unavailable and the user is suddenly left to do the activities previously carried out by automation.  The user is now forced back “into the loop.”

I sort of experienced this when I got my new 2017 car.  Naturally, as an automation researcher I asked for “all the things!” when it came to driver assistance features (yeah, for “research purposes”):  active lane assist, adaptive cruise control with stop and go, autonomous emergency braking, auto-high-beams, auto-wipers, and more.

The combined driver assistance features probably equated to somewhere around Level 2 autonomy (see figure). I had fun testing the capabilities and limitations of each feature (safely!). They worked, for the most part, and I turned them on and forgot about them for the past 9 months.

However, while my car was serviced recently, I drove a loaner car that had none of the features and was quite shocked to see how sloppy my driving had become.  Driving on the interstate seemed much more effortful than before.  In addition, I had become accustomed to playing with my phone, messing with my music player or doing other things while on long stretches of straight highway.  I could no longer do this safely with no driver assistance features.  

This was when I realized how much of the lower-level task of driving (staying in the lane, keeping a constant distance from the car ahead, turning my high beams on and off) was no longer being done by me. As an automation researcher, I was quite surprised.

This anecdote illustrates two phenomena in automation: complacency and the resultant skill degradation. The complacency showed in how easily and willingly I gave up a good chunk of the driving task to the automation in my personal car. I admit to this complacency and high trust, but it came only after several weeks of testing the limits of the automation to understand the conditions where it worked best (e.g., lane keeping did not work well with high-contrast road shadows). It is doubtful that regular drivers do this.

Because of my complacency (and high trust), I experienced mild skill degradation: a loss of the ability to smoothly and effortlessly maintain my lane position and following distance.

You may have experienced a similar disorientation when your phone's GPS-guided directions get taken away (e.g., you move to a low-signal area, or the phone crashes). Suddenly, you have no idea where you are or what to do next.

So what, exactly, causes this performance degradation when automation is taken away?

This "out of the loop" (OOTL) performance problem (decreased performance due to being out of the loop and suddenly being brought back into it) is the topic of this week's paper by Endsley and Kiris (1995). In the paper, Endsley and Kiris explored the specific causes of, and possible solutions to, the out-of-the-loop performance problem.

It is the central thesis of this paper that a loss of situation awareness (SA) underlies a great deal of the out-of-the-loop performance problem (Endsley, 1987).
— pp. 382

Endsley and Kiris make the claim that most of the problems associated with OOTL are due to a loss of situation awareness (SA). SA is a concept that Endsley refined in an earlier paper.  It basically means your awareness of the current situation and your ability to use this information to predict what will happen in the near future.  Endsley defines situation awareness as:

the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future
— pp. 392

Within this definition of SA, there are three levels:

  1. Level 1: perception of the elements (cues) in the environment:  can you see the warning light on the dashboard?
  2. Level 2: comprehension of the meaning of the perceived elements:  do you understand what the engine symbol means on the dashboard?
  3. Level 3: projection of the future: given what you know, can you predict whether braking is needed to prevent a collision?

In this paper, the authors argued that the presence of automation essentially interferes with all levels of situation awareness:

In many cases, certain critical cues may be eliminated with automation and replaced by other cues that do not result in the same level of performance.
— pp. 384

Much of the existing research at the time had only examined physical/manual tasks, not cognitive/decision-making tasks.  The purpose of Endsley and Kiris' paper was:

In order to investigate the hypothesis that the level of automation can have a significant impact on the out-of-the-loop performance problem and to experimentally verify the role of situation awareness in this process, a study was conducted in which a cognitive task was automated via a simulated expert system.
— pp. 385

Their key hypothesis was that as the level of automation increased (see Arathi's earlier Throwback post on levels of automation), situation awareness would decrease, and this would be evidenced specifically by increased time for users to make a decision and by reduced confidence in those decisions, since users would feel less skilled or qualified (because they were so far out of the loop).

The bottom line was that, yes, the hypothesis was confirmed: increasingly higher levels of automation do seem to negatively impact situation awareness, and this shows up as longer decision times (because you have less SA) and slightly reduced confidence in your decisions. Using the examples above, the driver assistance features (a lower-level form of automation) did not really lead to a loss of situation awareness for me, but they did lead to skill degradation. The GPS example, by contrast, involves a much higher level of automation, and it DOES lead to a loss of situation awareness.

So what do we do with this information? First, it should clearly show that high or full autonomy for higher-level, cognitive-type tasks is not a very desirable goal. It might be, but only if the automation is proven to be 100% reliable, which is never assured. Instead, there will be situations where the autonomy fails and the user has to assume manual control (think of a self-driving car that fails). In these circumstances, the driver will have dramatically reduced SA and thus poor performance.

The surprising design recommendation?

this study did find that full automation produced more problems than did partial automation
— pp. 392

Let's hope designers of future autonomy are listening!

Reference: Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381–394. http://doi.org/10.1518/001872095779064555

Throwback Thursday: A model for types and levels of automation

This is our second post in our "throwback" series. In this post, I will take you through an article written by some of the best in the human factors and ergonomics field: the late Raja Parasuraman, Tom Sheridan, and Chris Wickens. Though several authors have introduced the concept of automation being implemented at various levels, for me this article nailed it.

The key excerpts from this article are highlighted below along with my commentary. Companies chasing automation blindly should keep these points in mind when designing their systems.

Automation is not all or none, but can vary across a continuum of levels, from the lowest level of fully manual performance to the highest level of full automation.
— Parasuraman, Sheridan, & Wickens, pp. 287

This means that between the extremes of a machine offering no assistance and a machine doing everything for the human, there are other automation design options. For example, the machine can offer a suggestion, implement a suggestion only if the human approves, do everything autonomously and then inform the human, or do everything autonomously and inform the human only when asked. Let's consider the context of driving. In the examples below, the level of automation increases as we move from 1 to 4 (a rough code sketch after the list shows one way these levels might be expressed).

  1. I drive my car to work
  2. I drive my car; KITT (from Knight Rider) tells me the fastest route to work, but I choose to override its suggestion
  3. I drive my car; KITT tells me the fastest route to work and does not give me the option to override its suggestion
  4. KITT plans the route and drives me to work
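
If it helps to see the same continuum in code, here is a minimal, purely hypothetical sketch (in Python; none of it comes from the paper): the level of automation determines whether a route suggestion is merely offered, imposed without an override, or carried out end to end.

```python
# Hypothetical sketch of an automation continuum for route planning.
# The levels loosely mirror the KITT examples above; nothing here is from the paper.
from enum import IntEnum

class Level(IntEnum):
    MANUAL = 1    # I plan and drive the route myself
    SUGGEST = 2   # automation suggests a route; I may override it
    DECIDE = 3    # automation picks the route; no override is offered
    FULL = 4      # automation plans the route and does the driving

def plan_route(level, human_overrides=False):
    suggestion = "fastest route via the highway"  # made-up suggestion
    if level == Level.MANUAL:
        return "human plans and drives their own route"
    if level == Level.SUGGEST:
        return "human's own route" if human_overrides else suggestion
    if level == Level.DECIDE:
        return f"{suggestion} (no override offered)"
    return f"{suggestion} (automation also drives)"

for level in Level:
    print(f"Level {level.value}: {plan_route(level, human_overrides=True)}")
```

The point of the sketch is simply that "automated" is not a yes/no property; the human's role changes at each level.
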
Automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic.
— Parasuraman, Sheridan, & Wickens, pp. 286

The way humans process information can be divided into four stages:

  1. information acquisition, which involves sensing data
  2. information analysis, which involves making inferences from the data
  3. decision and action selection, which involves choosing a course of action from among various options
  4. action implementation, which involves carrying out the chosen action

Here is an example of automation applied at each stage:

  1. Information acquisition: night vision goggles that enhance external data
  2. Information analysis: the historical fuel-economy (MPG) graph in some cars
  3. Decision and action selection: Google Maps presenting three possible routes to a destination based on different criteria
  4. Action implementation: automatic stapling in a photocopier

The authors' key point is that automation can be applied to each of these stages of human information processing, and the level can be chosen separately for each stage.
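
One way to picture the model, purely as an illustration rather than anything from the paper itself, is as a profile that assigns each of the four stages its own level of automation. A minimal Python sketch, with invented numbers:

```python
# Illustrative only: a "type x level" automation profile that gives each
# information-processing stage its own level (0 = fully manual, 10 = fully automatic).
# The stages come from Parasuraman, Sheridan, and Wickens; the numbers are invented.
STAGES = (
    "information acquisition",
    "information analysis",
    "decision and action selection",
    "action implementation",
)

def describe(profile):
    """Print the automation level assigned to each stage."""
    for stage in STAGES:
        print(f"{stage:30s} level {profile[stage]:2d}/10")

# Example: a driver-assistance package with heavy sensing automation,
# moderate analysis, light decision support, and partial action automation.
describe({
    "information acquisition": 9,
    "information analysis": 6,
    "decision and action selection": 3,
    "action implementation": 5,
})
```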

An important consideration in deciding upon the type and level of automation in any system design is the evaluation of the consequences for human operator performance in the resulting system.
— Parasuraman, Sheridan, & Wickens, pp. 290

Choosing an automation design without any regard for the strengths and limitations of the human operator, or for the characteristics of the environment in which the operator works (e.g., high stress), is not an effective strategy. When choosing the degree of automation, it is important to consider the impacts it may have on the operator:

  • How would it affect the operator's workload?
  • How would it affect the operator's understanding of the environment (in research we call this situation awareness)?
  • How would it affect the combined operator-machine performance?
  • Would operators over-trust the machine and be unable to overcome automation failures?

It is worth noting that NHTSA's current description of vehicle autonomy (figure) is NOT human-centered and is instead focused on the capabilities and tasks of the machine.

From NHTSA.gov (https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety)

Citation:  Parasuraman, R., Sheridan, T. B., & Wickens, C. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, 30, 286-297.

Throwback Thursday: The Ironies of Automation
If I have seen further, it is by standing on the shoulders of giants
— Isaac Newton, 1675

Don't worry, our Throwback Thursday doesn't involve embarrassing pictures of me or Arathi from 5 years ago. Instead, it is more cerebral. The social science literature on automation and autonomy is long and rich, and despite being one of the earliest topics of study in engineering psychology, the subject has even more relevance today.

Instead of re-inventing the wheel, why don't we look at the past literature to see what is still relevant?

In an effort to honor that past but also inform the future, the inaugural "Throwback Thursday" post will highlight scientific literature from the past that is relevant to modern discussion of autonomy.

Both Arathi and I have taught graduate seminars in automation and autonomy, so we have a rich treasure trove of literature from which to draw. Don't worry: while some of the readings can be complex and academic, in deference to our potentially diverse readership we will focus on key points and discuss their relevance today.

The Ironies of Automation

In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation. The words of this paper ring especially true today, when the design strategy of some companies is to treat the human as an error term to be eliminated:

The designer’s view of the human operator may be that the operator is unreliable and inefficient, so should be eliminated from the system.
— Bainbridge, pp. 775

But is this design strategy sustainable?  Bainbridge later wisely points out that:

The second irony is that the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.
— Bainbridge, pp. 775

The paper then discusses how, under such an approach, many unintended problems arise. The ultimate irony is that implementing very high levels of automation (including eliminating the driver in a self-driving car) can ultimately leave the "passenger" with a higher workload burden.

A more serious irony is that the automatic control system has been put in because it can do the job better than the operator, but yet the operator is being asked to monitor that it is working effectively.
— Bainbridge, pp. 775