Dr. Mica Endsley: Current Challenges and Future Opportunities in Human-Autonomy Research

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

About Dr. Mica Endsley


Situation Awareness Analysis and Measurement provides a comprehensive overview of different approaches to the measurement of situation awareness in experimental and applied settings. This book directly tackles the problem of ensuring that system designs and training programs are effective at promoting situation awareness.

 

Dr. Mica Endsley is President of SA Technologies, a cognitive engineering firm specializing in the analysis, design, measurement and training of situation awareness in advanced systems, including the next generation of systems for aviation, air traffic control, health care, power grid operations, transportation, military operations, homeland security, and cyber. 

From 2013 to 2015, she served as Chief Scientist of the U.S. Air Force, reporting to the Chief of Staff and Secretary of the Air Force, providing guidance and direction on research and development to support Air Force future operations and providing assessments on a wide range of scientific and technical issues affecting the Air Force mission.

She has also held the position of Visiting Associate Professor at MIT in the Department of Aeronautics and Astronautics and Associate Professor of Industrial Engineering at Texas Tech University. Dr. Endsley received a Ph.D. in Industrial and Systems Engineering from the University of Southern California.

Dr. Endsley is a recognized world leader in the design, development, and evaluation of systems to support human situation awareness (SA) and decision-making. She is the author of over 200 scientific articles and reports on situation awareness and decision-making, automation, cognitive engineering, and human system integration. She is co-author of Situation Awareness Analysis and Measurement and Designing for Situation Awareness. Dr. Endsley received the Human Factors and Ergonomics Society Jack Kraft Innovator Award for her work in situation awareness.

She is a Fellow of the Human Factors and Ergonomics Society and its Past President, was co-founder of the Cognitive Engineering and Decision Making Technical Group of HFES, and served on its Executive Council. Dr. Endsley has received numerous awards for teaching and research, and is a Certified Professional Ergonomist and a Registered Professional Engineer. She is the founder and former Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making and serves on the editorial boards of three major journals.


What were the human-automation challenges you encountered in your role as the Chief Scientist for the Air Force?

An RQ-4 Global Hawk soars through the sky to record intelligence, surveillance, and reconnaissance data. Image Source.

Autonomous systems are being developed or are under consideration for a wide range of operational missions. This includes:

  1. Manned aircraft, as more automation is added to both on-board and supporting functions such as mission planning, information/network management, vehicle health management, and failure detection.
  2. Unmanned aircraft are currently being used for surveillance missions and are being considered for a much wider range of activities where:
    1. people would be at high levels of risk (e.g., near hostilities),
    2. communications links for direct control are unreliable due to jamming or other interference effects,
    3. speed of operations is valuable (e.g., re-tasking sensors based on observed target features), or
    4. new forms of warfare may be enabled by intelligent, but expendable, systems, or by closely coordinated flights of RPAs [remotely piloted aircraft] (e.g., swarms).
  3. Space operations can also benefit from autonomous systems that provide a means to build resilient space networks that can reconfigure themselves in the face of attacks, preserving essential functions under duress. They also provide a mechanism for significantly reducing the extensive manpower requirements for manual control of satellites and for generation of space situation awareness through real-time surveillance and analysis of the enormous number of objects in orbit around the Earth.
  4. Cyber operations can benefit from autonomy due to the rapidity of cyber-attacks, and the sheer volume of attacks that could potentially occur. Autonomous software can react in milliseconds to protect critical systems and mission components. In addition, the ever-increasing volume of novel cyber threats creates a need for autonomous defensive cyber solutions, including cyber vulnerability detection and mitigation; compromise detection and repair (self-healing); real-time response to threats; network and mission mapping; and anomaly resolution.
  5. ISR [intelligence, surveillance, and reconnaissance] and Command and Control operations will also see increased use of autonomous systems to assist with integrating information across multiple sensors, platforms and sources, and to provide assistance in mission planning, re-planning, monitoring, and coordination activities.

Many common challenges exist for people working in collaboration with these autonomous systems across all of these future applications. These include:

the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed...I have labeled this the Automation Conundrum
  1. Difficulties in creating autonomy software that is robust enough to function without human intervention and oversight are significant. Creating systems that can not only accurately sense but also understand (recognize and categorize) detected objects, and their relationships to each other and to broader system goals, has proven highly challenging for automation, especially when unexpected (i.e., not designed for) objects, events, or situations are encountered. This capability is required for intelligent decision-making, particularly in adversarial situations where uncertainty is high and many novel situations may be encountered.
  2. A lowering of human situation awareness when using automation often leads to out-of-the-loop performance decrements. People are slow to detect that a problem has occurred with the automation, or with the system being controlled by the automation, and then slow to come up to speed in diagnosing the problem so they can intervene appropriately, leading to accidents. Substantial research on this problem shows that as more automation is added to a system, and the more reliable and robust that automation is, the less likely it is that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed. I have labeled this the Automation Conundrum.
  3. Increases in cognitive workload are often required in order to interact with the greater complexity associated with automation; understanding and working with the automation can add demands of its own.
  4. Increased time to make decisions can occur when decision aids are provided, often without the desired increase in decision accuracy. Evidence shows that people take in system assessments and recommendations and combine them with their own knowledge and understanding of the situation. A faulty decision aid can therefore make people more likely to make a mistake, due to decision biasing by the aid. And the time required to make a decision can actually increase, because the aid is an additional source of information to take into account.

Challenges occur when people working with automation develop a level of trust that is inappropriately calibrated to the reliability and functionality of the system in various circumstances. In order for people to operate effectively with autonomous systems, they will need to be able to determine how much to trust the autonomy to perform its tasks.

This trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. For this, people need to develop informed trust – an accurate assessment of when and how much autonomy should be employed, and when to intervene.

Given that it is unlikely that autonomy in the foreseeable future will work perfectly for all functions and operations, and that human interaction with autonomy will continue to be needed at some level, these factors create the need for a new approach to the design of autonomous systems, one that allows them to serve as effective teammates for the people who will need to depend on them to do their jobs.


What does the autonomous future look like for you? Is it good, bad or ugly?

The future with autonomous systems may be good, bad, or very ugly, depending on how successful we are in designing and implementing effective human-autonomy collaboration and coordination.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a misplaced belief in their superiority

In the bad scenario, if we continue to develop autonomous systems that are brittle, and that fail to provide the people who must work with automation with the situation awareness they need to be effective in their roles, then the true advantages of both people and autonomy will be compromised.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a misplaced belief in their superiority. Nothing in the past 40 years of automation research has justified such an action, and such a move would be truly disastrous in the long run.

In a successful vision of the future, autonomous systems will be designed to serve as part of a collaborative team with people. Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between people and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.

People will be able to make informed choices about where and when to invoke autonomy based on considerations of trust, the ability to verify its operations, the level of risk and risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the human.

In certain limited cases, the system may allow the autonomy to take over automatically from the human, when timelines are very short, for example, or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems is a fundamental requirement, in keeping with Department of Defense directives.

The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the human and the autonomy will be critical. Shared situation awareness is needed to ensure that the autonomy and the human operator are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.

Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the human and the autonomy.

This new paradigm is a significant departure from the past in that it will directly support high levels of shared situation awareness between human operators and autonomous systems, creating situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for mission success. By focusing on human-autonomy teaming, we can create successful systems that get the best benefits of autonomous software along with the innovation of empowered operators.


Dr. Julie Carpenter: Human-Robot/AI Relationships

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. As the first post in a new series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter.


About Dr. Julie Carpenter

Julie Carpenter has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

Dr. Carpenter earned her PhD and an MS from the University of Washington, an MS from Rensselaer Polytechnic Institute, and a BA from the University of Wisconsin-Madison. She is also currently a Research Fellow in the Ethics + Emerging Sciences group at California Polytechnic State University. 

Dr. Carpenter’s first book, Culture and human-robot interaction in militarized spaces: A war story (Routledge), expands on her research with U.S. military Explosive Ordnance Disposal personnel and their everyday interactions with field robots. The findings from this research have applicability across a range of human-robot and human-AI cooperative scenarios, products, and situations. She regularly updates her website with information about her current work at jgcarpenter.com.


You have done a lot of work on the emotional attachment that humans have towards robots. Can you tell us more about your work?

At its heart, my work is human-centered and culture-centered. I tend to approach things in a very interdisciplinary way, and my body of published work reflects my long-term interest in how people use technology to communicate, from film to AI.

...there were relatively few people looking at AI as the vector for human emotion when I began in this vein

The media and technologies I focus on change and evolve. I began in film theory, then a lot of my work was about Web-based human interactions, and more recently it has been about how people interact with robots and other forms of non-Web AI, like autonomous cars, textbots, or IoT agents such as Alexa.

But my lens for looking at things has always been rooted in a sort of anthropological interest in people and technology. Specifically, human emotional attachment to and through the technological medium interests me because there are so many nuanced possible pitfalls for the human, psychologically, ethically, emotionally, even physically.

Yet when it comes to scholarly study of topics like affection, friendship, and love, and their influence on and connectedness with other complicated topics like trust, cooperative teamwork, and decision-making, there were relatively few people looking at AI as the vector for human emotion when I began in this vein. David Levy pioneered this discussion, of course, as did Clifford Nass and Byron Reeves.

As a film theory undergraduate student, I was drawn to how people use stories to explore technology, as we do in science fiction. Looking back, I can see that even then I was influenced not only by science fiction and science fiction films in general, but particularly by the ones of my own era, which served as cultural touchstones and became the basis for a great deal of my early scholarly work.

So, movies like Blade Runner were something I wrote whole papers about years before there was even a hint that robots would become a reality during the very specific and rapid period of development in the 2000s. But back then I was looking at things as ideas connected specifically to that movie director’s body of work, or the audience for the movie, or the culture of that time.

Blade Runner (1982). Image source

Now I look at a movie like that as an exploration of human-robot possibilities, a reflection and influencer of popular cultural ideas, and also an inspiration to people like me, makers and researchers who have a say in developing real-world AI. I find those sorts of storytelling influences fascinating because they often set up people’s real-world expectations of their interactions with technology, and even help form the communication model.

Storytelling’s influence on culture offers a very rich set of artifacts for exploration, and I reference that idea a great deal in the way I situate research within the larger culture it is part of, however that culture may be defined for the scope of a given piece of work.


What do you think about media portrayals of human-robot/autonomy relationships in movies (e.g., the new Blade Runner; the movie Her)?

Cultures around the world use science fiction to explore what it means to be human...

I love science fiction stories and, as I mentioned, I frequently use science fiction as a framework for discussing our expectations of interactions with AI and robots, because research shows it definitely can influence people’s expectations about how to interact with AI, at least initially.

Personally, Blade Runner definitely inspired me in many ways, going back to when I was studying film theory as an undergrad and never predicted I’d be working in a field called human-robot interaction someday. I know a lot of roboticists who cite other scifi as personal inspiration, too, such as Astro Boy. Storytelling captures our imagination and prompts questions, and it is a wonderful creative springboard for discussion, as well as entertainment.

A pitfall I am less a fan of is using Isaac Asimov’s Three Laws to discuss ethics and AI. Asimov wrote the Laws purposefully allowing for ethical pitfalls so he could keep writing stories; the Laws create plot points through their fallibility. If you want to use the Three Laws (or four, if you count the Zeroth Law) to frame a discussion of ethical AI, then you have to acknowledge they are fictional, fallible, and very purposefully incomplete in conception. They are not a real-world solution for development or policy-making, except perhaps as an example of what loopholes might exist in a framework like the Three Laws if it were used in the real world.

Science fiction can be a cultural touchstone and a thought exercise for framing complicated human-AI interactions, but sometimes it is used as shorthand to communicate complicated issues in a way that disregards too much of the nuance of the issues being discussed. I’m an Asimov fan, but I think the Laws are sometimes relied upon too heavily in scientific discussions or popular news framings of ethical problems for AI.

Having said that, I personally enjoy a wide range of AI representations in fiction, from the dystopic to the sympathetic predictions. The ethical dilemmas of the Terminator or Her are both entertaining for me to contemplate in the safety of my everyday life. Considering the more far-reaching implications of the ideas they are conveying is a more serious endeavor for me, of course. How we tell stories reflects our beliefs, and also pushes those beliefs and ideas further, questioning our suppositions, and in that way also has the potential to influence new ideas about how we interact with AI.

Her (2013). Image source

There is a rich history of stories we tell about AI that pre-dates the genre we call science fiction. Scifi is a relatively new genre label in itself, but the idea of humans interacting with artificial life has been around forever, in various forms. All sorts of tales about humans interfering with the natural order of things to create humanlike life outside the body, sometimes via magic spells or religious intervention, exist around the world. These AI characters take the form of golems, zombies, statues, puppets, dolls, and so on. Historically, this is a set of ideas that has universal fascination.

Cultures around the world use science fiction to explore what it means to be human, and what it means for our creation of and interactions with entities that are similar to us in some ways, often as if AI were a sociological Other.


I recently read the news of a man in China marrying the robot he created. SciFi movies are certainly becoming a reality. What are the ethical implications of human-automation romantic relationships?

We are currently in an era where we are really just beginning discussions of emerging ethics in this domain earnestly because of the enormous progress of AI and robotics over the last decade in particular.

Right now, a romantic feeling for AI is considered aberrant behavior, so it carries a very different significance than it will when AI and robots are accepted as objects that can carry a great deal of meaning for people in different situations, whether it’s as caregiver or mentor or helper or companion or romantic interest.

In other words, I don’t think we can very successfully make shorthand generalizations about a “type” of person who marries a robot or other AI as a static model, because the way we regard human-robot relationships will change as robots become part of our everyday realities and we learn to live with them and negotiate what different robots might mean to us in different ways.

I think that to an extent, eventually we will see society normalize human-robot romantic relationships as a culturally accepted option for some people. We are still going through a process of discovery about our interactions with robots now, but we do see patterns of human-robot interaction strikingly different from our interactions with other objects, and one emerging pattern is that in some conditions we treat AI and robots in socially meaningful ways that sometimes includes emotional attachment and/or affection from the person to the AI or robot.

The ethical pitfalls of a human-robot romantic relationship can come from the development end, the user end, and society’s perceptions of that relationship. From the development end, some ethical concerns involve how the AI is developed, and the human biases and influences we are teaching AI that learns from us, whether through direct programming or neural networks. Robot hacking and privacy concerns are thorny nests of ethical issues, too.

Say someone has a romantic or other affection for AI used in their home, and interacts with it accordingly. In that case, who has access to what the robot or AI hears you say, what it watches you do, and the information it gathers about your everyday life and your preferences for everything from dish detergent to sexual activities? What if that data were hacked, and someone tried to use the gathered information to manipulate you? These are major technical and ethical issues.

From the user end, one ethical concern is whether people who become emotionally attached to AI have a real self-awareness of the lack of truly humanlike reciprocity in a human-AI relationship with current technology, or whether they lack a root understanding that the AI is nowhere near humanlike intelligence, although sometimes those are the very traits of AI that can attract someone to it romantically.

Furthermore, society does not treat AI or robots like people when it comes to things like legal status, so similar ethical concerns arise in how the people around a user who reports being romantically interested in AI react: to declare oneself in a committed, persistent, affectionate relationship with an AI is also to acknowledge involvement in an imbalanced power dynamic.

Another ethical question rising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer their behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) that our social behaviors with robots will be something we apply as a model to human-human interactions.

Blade Runner 2049 (2017). Image source

We are currently in an era where we are really just beginning discussions of emerging ethics in this domain earnestly because of the enormous progress of AI and robotics over the last decade in particular. This is only the beginning of a time when we will formalize some of our decisions about these ethical concerns as laws and policies, and establish less formal ways of negotiating our interactions with AI via societal norms.

I’m looking forward to watching how we integrate AI technologies like robots and autonomous cars into our everyday lives, because I think a lot of potential good will come from using them. Our path to integrating AI into our lives is already fascinating.

Weekend Reading: Fear of AI and Autonomy

In our inaugural post, I alluded to the current discussion surrounding AI/Autonomy as being dominated by philosophers, politicians, and engineers.  They are, of course, working at the forefront of this technology and raise important points.

But focusing on their big-picture concerns may prevent a fuller view of the day-to-day role of this technology, and the fact that humans are expected to interact, collaborate, and in some cases submit to these systems (social science issues; why this blog exists).

That said, one of the philosophers examining the future role of and risks associated with AI is Nick Bostrom, director of the Future of Humanity Institute. This profile in The New Yorker from a few years ago (2015) is a great way to get up to speed on the basis for much of the fear of AI.

Bostrom’s sole responsibility at Oxford is to direct an organization called the Future of Humanity Institute, which he founded ten years ago, with financial support from James Martin, a futurist and tech millionaire. Bostrom runs the institute as a kind of philosophical radar station: a bunker sending out navigational pulses into the haze of possible futures. Not long ago, an F.H.I. fellow studied the possibility of a “dark fire scenario,” a cosmic event that, he hypothesized, could occur under certain high-energy conditions: everyday matter mutating into dark matter, in a runaway process that could erase most of the known universe. (He concluded that it was highly unlikely.) Discussions at F.H.I. range from conventional philosophic topics, like the nature of compromise, to the optimal structure of space empires—whether a single intergalactic machine intelligence, supported by a vast array of probes, presents a more ethical future than a cosmic imperium housing millions of digital minds.

Warning: Settle in, because this is a typical New Yorker article (i.e., it is very, satisfyingly long).

The similar-sounding Future of Life Institute has similar goals but focuses on explaining the risks of AI while also dispelling myths.