The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.
About Dr. Mica Endsley
Dr. Mica Endsley is President of SA Technologies, a cognitive engineering firm specializing in the analysis, design, measurement and training of situation awareness in advanced systems, including the next generation of systems for aviation, air traffic control, health care, power grid operations, transportation, military operations, homeland security, and cyber.
From 2013 to 2015, she served as Chief Scientist of the U.S. Air Force, reporting to the Chief of Staff and Secretary of the Air Force, providing guidance and direction on research and development to support Air Force future operations and providing assessments on a wide range of scientific and technical issues affecting the Air Force mission.
She has also held the position of Visiting Associate Professor at MIT in the Department of Aeronautics and Astronautics and Associate Professor of Industrial Engineering at Texas Tech University. Dr. Endsley received a Ph.D. in Industrial and Systems Engineering from the University of Southern California.
Dr. Endsley is a recognized world leader in the design, development and evaluation of systems to support human situation awareness (SA) and decision-making. She is the author of over 200 scientific articles and reports on situation awareness and decision-making, automation, cognitive engineering, and human system integration. She is co-author of Analysis and Measurement of Situation Awareness and Designing for Situation Awareness. Dr. Endsley received the Human Factors and Ergonomics Society Jack Kraft Innovator Award for her work in situation awareness.
She is a fellow in the Human Factors and Ergonomics Society, its Past-President, was co-founder of the Cognitive Engineering and Decision Making Technical Group of HFES, and served on its Executive Council. Dr. Endsley has received numerous awards for teaching and research, is a Certified Professional Ergonomist and a Registered Professional Engineer. She is the founder and former Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making and serves on the editorial board for three major journals.
What were the human-automation challenges you encountered in your role as the Chief Scientist for the Air Force?
Autonomous systems are being developed or are under consideration for a wide range of operational missions. This includes:
- Manned aircraft, as more automation is added to both on-board and supporting functions such as mission planning, information/network management, vehicle health management and failure detection
- Unmanned aircraft, which are currently used for surveillance missions and are being considered for a much wider range of activities where:
  - people would be at high levels of risk (e.g., near hostilities),
  - communications links for direct control are unreliable due to jamming or other interference effects,
  - speed of operations is useful (e.g., re-tasking sensors based on observed target features), or
  - intelligent but expendable systems, or closely coordinated flights of RPAs [remotely piloted aircraft] (e.g., swarms), could enable new forms of warfare
- Space operations can also benefit from autonomous systems that provide a means to build resilient space networks that can reconfigure themselves in the face of attacks, preserving essential functions under duress. Autonomy also provides a mechanism for significantly reducing the extensive manpower requirements for manual control of satellites and for generating space situation awareness through real-time surveillance and analysis of the enormous number of objects in orbit around the Earth.
- Cyber operations can benefit from autonomy due to the rapidity of cyber-attacks, and the sheer volume of attacks that could potentially occur. Autonomous software can react in milliseconds to protect critical systems and mission components. In addition, the ever-increasing volume of novel cyber threats creates a need for autonomous defensive cyber solutions, including cyber vulnerability detection and mitigation; compromise detection and repair (self-healing); real-time response to threats; network and mission mapping; and anomaly resolution.
- ISR [intelligence, surveillance, and reconnaissance] and Command and Control operations will also see increased use of autonomous systems to assist with integrating information across multiple sensors, platforms and sources, and to provide assistance in mission planning, re-planning, monitoring, and coordination activities.
Many common challenges exist for people to work in collaboration with these autonomous systems across all of these future applications. These include:
- Difficulties in creating autonomy software that is robust enough to function without human intervention and oversight are significant. Creating systems that can accurately not only sense but also understand (recognize and categorize) objects detected, and their relationship to each other and broader system goals, has proven to be significantly challenging for automation, especially when unexpected (i.e., not designed for) objects, events, or situations are encountered. This capability is required for intelligent decision-making, particularly in adversarial situations where uncertainty is high, and many novel situations may be encountered.
- A lowering of human situation awareness when using automation often leads to out-of-the-loop performance decrements. People are slow to detect that a problem has occurred with the automation, or with the system being controlled by the automation, and then slow to come up to speed in diagnosing the problem in order to intervene appropriately, leading to accidents. Substantial research on this problem shows that as more automation is added to a system, and the more reliable and robust that automation is, the less likely human operators are to oversee the automation effectively and take over manual control when needed. I have labeled this the Automation Conundrum.
- Increases in cognitive workload are often required in order to interact with the greater complexity associated with automation; understanding and directing the automation imposes demands of its own.
- Increased time to make decisions can be found when decision aids are provided, often without the desired increase in decision accuracy. Evidence shows that people actually take in system assessments and recommendations and combine them with their own knowledge and understanding of the situation. A faulty decision aid can therefore make people more likely to make a mistake, due to decision biasing by the aid. And the time required to make a decision can actually increase, as the aid is an additional source of information to take into account.
- Challenges occur when people working with automation develop a level of trust that is inappropriately calibrated to the reliability and functionality of the system in various circumstances. In order for people to operate effectively with autonomous systems, they will need to be able to determine how much to trust the autonomy to perform its tasks.
This trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. For this, people need to develop informed trust – an accurate assessment of when and how much autonomy should be employed, and when to intervene.
Given that autonomy is unlikely to work perfectly for all functions and operations in the foreseeable future, and that human interaction with autonomy will continue to be needed at some level, these factors create the need for a new approach to the design of autonomous systems — one that allows them to serve as effective teammates for the people who must depend on them to do their jobs.
What does the autonomous future look like for you? Is it good, bad or ugly?
The future with autonomous systems may be good, bad, or very ugly, depending on how successful we are in designing and implementing effective human-autonomy collaboration and coordination.
In the bad scenario, if we continue to develop autonomous systems that are brittle, and that fail to give the people who must work with them the situation awareness they need to be effective in their roles, then the true advantages of both people and autonomy will be compromised.
The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a mistaken belief in their superiority. Nothing in the past 40 years of automation research has justified such an action, and such a move would be truly disastrous in the long run.
In a successful vision of the future, autonomous systems will be designed to serve as part of a collaborative team with people. Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between people and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.
People will be able to make informed choices about where and when to invoke autonomy based on considerations of trust, the ability to verify its operations, the level of risk and risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the human.
In certain limited cases, the system may allow the autonomy to take over automatically from the human — for example, when timelines are very short or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems is a fundamental requirement, in keeping with Department of Defense directives.
The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the human and the autonomy will be critical. Shared situation awareness is needed to ensure that the autonomy and the human operator are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.
Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the human and the autonomy.
This new paradigm is a significant departure from the past in that it will directly support high levels of shared situation awareness between human operators and autonomous systems, creating situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for mission success. By focusing on human-autonomy teaming, we can create successful systems that get the best benefits of autonomous software along with the innovation of empowered operators.