Human-Autonomy Sciences

We are psychological scientists and practitioners who are excited about the future of autonomy. This blog covers recent developments in the human-autonomy sciences, with a focus on the social science angle.

Siri and Alexa Say #MeToo to Sexual Harassment

The number of prominent celebrities and politicians being taken down for sexual harassment really seems to represent a major change in how society views sexual harassment.  No longer whispered about or swept under the rug, harassment is being called out and harassers are being held accountable for their words and actions.

So, if AIs will soon be collaborators, partners, and teammates, shouldn't they also be given the same treatment?  This story in VentureBeat talks about a campaign by Randy Painter to reconsider how voice assistants should behave when harassed:

We have a unique opportunity to develop AI in a way that creates a kinder world. If we as a society want to move past a place where sexual harassment is permitted, it’s time for Apple and Amazon to reprogram their bots to push back against sexual harassment

I've never harassed Siri, so I wasn't aware of the responses she gives when one attempts to harass her:

Siri responds to her harassers with coy remarks that sometimes even express gratitude. When they called Siri a “slut,” she responded with a simple “Now, now.” And when the same person told Siri, “You’re hot,” Siri responded with “I’m just well put together. Um… thanks. Is there something I can help you with?”

In our interview last week with Dr. Julie Carpenter, she addressed this somewhat:

Another ethical question rising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer their behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) that our social behaviors with robots will be something we apply as a model to human-human interactions.

This is fascinating because there is existing and ongoing research examining how humans respond to and behave with AI/autonomy that exhibits different levels of politeness.  For example, autonomy that was rude, impatient, and intrusive was considered less trustworthy by human operators. If humans expect autonomy to have a certain etiquette, isn't it fair to expect at least basic decency from humans toward autonomy?

Citation: Parasuraman, R., & Miller, C. (2004). Trust and etiquette in high-criticality automated systems. Communications of the ACM, 47(4), 51–55.


Dr. Mica Endsley: Current Challenges and Future Opportunities In Human-Autonomy Research

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

About Dr. Mica Endsley


Situation Awareness Analysis and Measurement provides a comprehensive overview of different approaches to the measurement of situation awareness in experimental and applied settings. This book directly tackles the problem of ensuring that system designs and training programs are effective at promoting situation awareness.


Dr. Mica Endsley is President of SA Technologies, a cognitive engineering firm specializing in the analysis, design, measurement and training of situation awareness in advanced systems, including the next generation of systems for aviation, air traffic control, health care, power grid operations, transportation, military operations, homeland security, and cyber. 

From 2013 to 2015, she served as Chief Scientist of the U.S. Air Force, reporting to the Chief of Staff and Secretary of the Air Force, providing guidance and direction on research and development to support Air Force future operations and providing assessments on a wide range of scientific and technical issues affecting the Air Force mission.

She has also held the position of Visiting Associate Professor at MIT in the Department of Aeronautics and Astronautics and Associate Professor of Industrial Engineering at Texas Tech University. Dr. Endsley received a Ph.D. in Industrial and Systems Engineering from the University of Southern California.

Dr. Endsley is a recognized world leader in the design, development and evaluation of systems to support human situation awareness (SA) and decision-making. She is the author of over 200 scientific articles and reports on situation awareness and decision-making, automation, cognitive engineering, and human system integration. She is co-author of Situation Awareness Analysis and Measurement and Designing for Situation Awareness. Dr. Endsley received the Human Factors and Ergonomics Society Jack Kraft Innovator Award for her work in situation awareness.

She is a fellow in the Human Factors and Ergonomics Society, its Past-President, was co-founder of the Cognitive Engineering and Decision Making Technical Group of HFES, and served on its Executive Council.  Dr. Endsley has received numerous awards for teaching and research, is a Certified Professional Ergonomist and a Registered Professional Engineer. She is the founder and former Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making and serves on the editorial board for three major journals. 

What were the human-automation challenges you encountered in your role as the Chief Scientist for the Air Force?

An RQ-4 Global Hawk soars through the sky to record intelligence, surveillance and reconnaissance data.  Image Source.

Autonomous systems are being developed or are under consideration for a wide range of operational missions. This includes:

  1. Manned aircraft, as more automation is added to both on-board and supporting functions such as mission planning, information/network management, vehicle health management and failure detection
  2. Unmanned aircraft are currently being used for surveillance missions and are being considered for a much wider range of activities where:
    1. people would be at high levels of risk (e.g., near to hostilities),
    2. communications links for direct control are unreliable due to jamming or other interference effects,
    3. speed of operations is useful (e.g., re-tasking sensors based on observed target features), or
    4. to undertake new forms of warfare that may be enabled by intelligent, but expendable, systems, or closely coordinated flights of RPAs [remotely piloted aircraft] (e.g., swarms)
  3. Space operations can also benefit from autonomous systems, which provide a means to build resilient space networks that can reconfigure themselves in the face of attacks, preserving essential functions under duress. Autonomy also provides a mechanism for significantly reducing the extensive manpower required for manual control of satellites and for generating space situation awareness through real-time surveillance and analysis of the enormous number of objects in orbit around the Earth.
  4. Cyber operations can benefit from autonomy due to the rapidity of cyber-attacks, and the sheer volume of attacks that could potentially occur. Autonomous software can react in milliseconds to protect critical systems and mission components. In addition, the ever-increasing volume of novel cyber threats creates a need for autonomous defensive cyber solutions, including cyber vulnerability detection and mitigation; compromise detection and repair (self-healing); real-time response to threats; network and mission mapping; and anomaly resolution.
  5. ISR [intelligence, surveillance, and reconnaissance] and Command and Control operations will also see increased use of autonomous systems to assist with integrating information across multiple sensors, platforms and sources, and to provide assistance in mission planning, re-planning, monitoring, and coordination activities.

Many common challenges exist for people to work in collaboration with these autonomous systems across all of these future applications. These include:

the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed...I have labeled this the Automation Conundrum
  1. Difficulties in creating autonomy software that is robust enough to function without human intervention and oversight are significant. Creating systems that can accurately not only sense but also understand (recognize and categorize) objects detected, and their relationship to each other and broader system goals, has proven to be significantly challenging for automation, especially when unexpected (i.e., not designed for) objects, events, or situations are encountered. This capability is required for intelligent decision-making, particularly in adversarial situations where uncertainty is high, and many novel situations may be encountered.
  2. A lowering of human situation awareness when using automation often leads to out-of-the-loop performance decrements. People are slow to detect that a problem has occurred with the automation, or with the system being controlled by the automation, and then slow to come up to speed in diagnosing the problem in order to intervene appropriately, leading to accidents. Substantial research on this problem shows that as more automation is added to a system, and the more reliable and robust that automation is, the less likely human operators overseeing the automation are to be aware of critical information and able to take over manual control when needed. I have labeled this the Automation Conundrum.
  3. Increases in cognitive workload are often required in order to interact with the greater complexity associated with automation. Understanding the automation and interacting with it add demands of their own, so workload can often increase rather than decrease.
  4. Increased time to make decisions can be found when decision aids are provided, often without the desired increase in decision accuracy. Evidence shows that people actually take in system assessments and recommendations, which they then combine with their own knowledge and understanding of the situation. A faulty decision aid can make people more likely to make a mistake due to decision biasing by the aid. And the time required to make a decision can actually increase, as the aid is an additional source of information to take into account.

Challenges occur when people working with automation develop a level of trust that is inappropriately calibrated to the reliability and functionality of the system in various circumstances. In order for people to operate effectively with autonomous systems, they will need to be able to determine how much to trust the autonomy to perform its tasks.

This trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. For this, people need to develop informed trust – an accurate assessment of when and how much autonomy should be employed, and when to intervene.

Given that autonomy in the foreseeable future is unlikely to work perfectly for all functions and operations, and that human interaction with autonomy will continue to be needed at some level, these factors create the need for a new approach to the design of autonomous systems: one that allows them to serve as effective teammates for the people who depend on them to do their jobs.

What does the autonomous future look like for you? Is it good, bad or ugly?

The future with autonomous systems may be good, bad, or very ugly, depending on how successful we are in designing and implementing effective human-autonomy collaboration and coordination.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a failed belief in its superiority

In the bad scenario, if we continue to develop autonomous systems that are brittle, and that fail to provide the people who must work with automation the situation awareness they need to be effective in their roles, then the true advantages of both people and autonomy will be compromised.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a failed belief in its superiority. Nothing in the past 40 years of automation research has justified such an action, and such a move would be truly disastrous in the long run.

In a successful vision of the future, autonomous systems will be designed to serve as part of a collaborative team with people. Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between people and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.

People will be able to make informed choices about where and when to invoke autonomy based on considerations of trust, the ability to verify its operations, the level of risk and risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the human.

In certain limited cases, the system may allow the autonomy to take over automatically from the human, for example when timelines are very short or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems is a fundamental requirement, in keeping with Department of Defense directives.

The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the human and the autonomy will be critical. Shared situation awareness is needed to ensure that the autonomy and the human operator are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.

Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the human and the autonomy.

This new paradigm is a significant departure from the past in that it will directly support high levels of shared situation awareness between human operators and autonomous systems, creating situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for mission success. By focusing on human-autonomy teaming, we can create successful systems that get the best benefits of autonomous software along with the innovation of empowered operators.

Throwback Thursday: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’

Today's Throwback article is from Donald Norman.  If that name sounds familiar, it is the same Dr. Norman who authored the widely influential "The Design of Everyday Things."

In this 1990 paper published in the Philosophical Transactions of the Royal Society, Dr. Norman argued that much of the criticism of automation at the time (and today) stems not from the automation itself (or even over-automation) but from its poor design; namely, inadequate feedback to the user.

This is a bit different from the concept of the out-of-the-loop (OOTL) scenario that we've talked about before; there is a subtle difference in emphasis.  Yes, lack of feedback contributes to OOTL, but here feedback is discussed more as the opaqueness of automation status and operations, not as the automation carrying out a task that you previously performed.

He first starts off with a statement that should sound familiar if you've read our past Throwback posts:

The problem, I suggest, is that the automation is at an intermediate level of intelligence, powerful enough to take over control that used to be done by people, but not powerful enough to handle all abnormalities.
— p. 137

The obvious solution, then, is to make the automation even more intelligent (i.e., a higher level of automation):

To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate.
— p. 137

If a higher level of automation is what is meant by "more intelligent," then we already know that this is not a viable solution either (the research showing that was done after the publication of this paper).  However, this point is merely a setup for the larger idea that problems with automation are caused not by the mere presence of automation, but by its lack of feedback.  Intelligence means giving just the right feedback at the right time for the task.

He provides aviation case studies implying that the use of automation led to out-of-the-loop performance issues (see previous post).  He next directs us through a thought experiment to help drive home his point:

Consider two thought experiments. In the first, imagine a captain of a plane who turns control over to the autopilot, as in the case studies of the loss of engine power and the fuel leak. In the second thought experiment, imagine that the captain turns control over to the first officer, who flies the plane ‘by hand’. In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other.
— p. 141

The implication is that when control is handed over to any entity (automation or a co-pilot), feedback is critical.  Norman cites the widely influential work of Hutchins, who found that informal chatter, along with lots of other incidental verbal interaction, is crucial to what is essentially situation awareness in human-human teams (although Norman invokes the concept of mental models).  Humans do this; automation does not.  Back then, we did not know how to build it in, and we probably still do not.  The temptation is to provide as much feedback as possible:

We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems.
— p. 143

This is the current state of automation feedback.  If you have spent any time in a hospital, alerts are omnipresent and overlapping (cf. Seagull & Sanderson, 2001).  Norman ends with some advice about the design of future automation:

What is needed is continual feedback about the state of the system...This means designing systems that are informative, yet non-intrusive, so the interactions are done normally and continually, where the amount and form of feedback adapts to the interactive style of the participants and the nature of the problem.
— p. 143

Information visualization and presentation research (e.g., sonification, or the work of Edward Tufte) tackles part of the problem: these techniques attempt to provide constant, non-intrusive information.  Adaptive automation, automation that scales its level based on physiological indicators, is another attempt to get closer to Norman's vision but, in my opinion, may be insufficient and even more disruptive, as it does not address feedback.

To conclude, Norman's human action cycle is completely consistent with, and probably heavily informs, his thoughts on automation.

Reference: Norman, D. A. (1990). The 'problem' with automation: Inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 327(1241), 585-593.

AI potpourri: AI gets a job at NASA, finds serial killers, stops suicide, selects embryos, and interviews you!

[The New Yorker] The Serial-Killer Detector

This article discusses how Thomas Hargrove, a retired journalist who had access to a large collection of murder records, created an algorithm that can find patterns in crime data.

He began trying to write an algorithm that could return the victims of a convicted killer. As a test case, he chose Gary Ridgway, the Green River Killer, who, starting in the early eighties, murdered at least forty-eight women in Seattle, and left them beside the Green River.

Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

It’s also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners like the National Suicide Prevention Lifeline and Forefront from which to provide resources to at-risk users and their networks.

Misses and false alarms should be factored in when designing an automated detection algorithm. Too many misses have catastrophic consequences in a high-risk situation. Facebook's AI is an example of an automated system where the cost of a miss far outweighs the nuisance of a false alarm.
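This is the threshold-setting problem from signal detection theory. As a rough illustration (invented scores and labels, not Facebook's actual system), here is how moving a single alert threshold trades misses against false alarms:

```python
# Illustrative only: toy risk scores and labels, not any real system.
def alert_outcomes(scores, labels, threshold):
    """Count misses (at-risk items not flagged) and false alarms
    (benign items flagged) at a given score threshold."""
    misses = sum(1 for s, at_risk in zip(scores, labels)
                 if at_risk and s < threshold)
    false_alarms = sum(1 for s, at_risk in zip(scores, labels)
                       if not at_risk and s >= threshold)
    return misses, false_alarms

scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]     # model-assigned risk scores
labels = [True, True, True, False, False, False]  # True = genuinely at risk

strict = alert_outcomes(scores, labels, 0.9)   # (2, 0): few false alarms, more misses
lenient = alert_outcomes(scores, labels, 0.2)  # (0, 2): no misses, more false alarms
```

For a suicide-prevention system, the lenient setting is the defensible one: a false alarm costs a moderator's time, while a miss can cost a life.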

[GCN] NASA’s newest employee isn’t human

This article talks about the newest employee at the NASA Shared Services Center: a bot named Washington. Washington is a rules-based bot that follows a predefined set of rules. NASA expects that future bots will have higher-order cognitive processing abilities.

One of the newest employees at the NASA Shared Services Center can copy and paste text, open emails, move folders and many other tasks. That might sound routine, but the new hire, Washington, isn’t a person — it’s a bot.

Much like a human employee, however, Washington has its own computer, its own email account, its own permissions within applications and its own role within the organization.

The bots, which can run 24/7, can help NASA by taking on time-consuming, manual tasks and allowing its humans to engage in higher level work.
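In software terms, a rules-based bot like Washington is little more than a list of condition/action pairs applied to each work item. Nothing below is NASA's actual code; the rules and message fields are invented for illustration:

```python
# Hypothetical rules for an email-handling bot; invented for illustration.
RULES = [
    (lambda msg: "invoice" in msg["subject"].lower(), "file_to_finance"),
    (lambda msg: msg.get("has_attachment", False),    "save_attachment"),
]

def handle(msg, default="leave_in_inbox"):
    """Apply the first rule whose condition matches; a rules-based bot
    has no judgment beyond these hand-written conditions."""
    for condition, action in RULES:
        if condition(msg):
            return action
    return default

handle({"subject": "Invoice #42", "has_attachment": True})  # "file_to_finance"
```

The limitation NASA alludes to is visible here: anything the rule authors did not anticipate falls through to the default, which is why future bots are expected to need higher-order processing.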

Scientists are using artificial intelligence (AI) to help predict which embryos will result in IVF success.

AI is able to recognise and quantify 24 image characteristics of embryos that are invisible to the human eye. These include the size of the embryo, texture of the image and biological characteristics such as the number and homogeneity of cells.

[New York Post] AI already reads your resume – now it’s going to interview you, too

This article discusses how AI is being used by companies to improve their recruiting process. 

Marriott International Inc. announced the launch of Marriott Careers chatbot for Facebook Messenger, a computer program designed to simulate conversation with job seekers. The virtual assistant aims to create a more personalized, efficient experience for applicants.

“Once you apply for a job, the system sends you updates. If it isn’t available, when another job meets your specific qualifications, you’ll receive a direct message on your digital device,” says Rodriguez, executive vice president and global chief human resources officer for Marriott. “Generation Z, which is starting to graduate from college, has a strong preference to communicate with companies this way. It’s the wave of the future.”

Unilever is also using AI to narrow down candidates based on their speech, facial expressions and body language.

“Hey Siri, how are my crops doing?” Autonomy in Agriculture Potpourri

Modern agriculture is only possible with the use of advanced technology.  In an upcoming interview, we will learn about what the future of agriculture looks like with highly advanced autonomous systems and how farmers are reacting and coping.

Until then, here are some interesting stories about autonomous systems and agriculture.

[U.S. Department of Agriculture] Smart Phones: The Latest Tool for Sustainable Farming

It is nice to see AI being used to help meet the food demands of a growing world population. For example, the U.S. Department of Agriculture has developed two apps, “LandInfo” and “LandCover,” available on the Google Play Store.

With LandInfo, users can collect and share soil and land-cover information as well as gain access to global climate data. The app also provides some useful feedback, including how much water the soil can store for plants to use, average monthly temperature and precipitation, and growing season length.

LandCover simplifies data collecting for use in land-cover inventories and monitoring. The app automatically generates basic indicators of these cover types on the phone and stores the data on servers that are accessible to users worldwide.

[BBC News] Tell me phone, what's destroying my crops?

AI is also being used in India to help farmers. Drought, crop failure, and lack of access to modern technology make life hard for Indian farmers.  In fact, an estimated 200,000 farmers have ended their lives in the last two decades due to debt.  A group of researchers from Berlin have developed an app called Plantix to help farmers detect crop diseases and nutrient deficiencies in their crops.

The farmer photographs the damaged crop and the app identifies the likely pest or disease by applying machine learning to its growing database of images.

Not only can Plantix recognise a range of crop diseases, such as potassium deficiency in a tomato plant, rust on wheat, or nutrient deficiency in a banana plant, but it is also able to analyse the results, draw conclusions, and offer advice.
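Stripped to a sketch, the approach is classification against a labeled image database: extract features from the photo, then return the label of the closest reference. The feature vectors and labels below are invented, and Plantix almost certainly uses a trained neural network rather than raw nearest-neighbor matching, but the shape of the task is the same:

```python
import math

# Invented reference "images", each reduced to a 3-number feature vector.
REFERENCE_DB = [
    ([0.9, 0.1, 0.3], "wheat rust"),
    ([0.2, 0.8, 0.4], "potassium deficiency"),
    ([0.1, 0.2, 0.9], "healthy"),
]

def diagnose(features):
    """Return the label of the nearest reference image (1-nearest-neighbor)."""
    _, label = min((math.dist(features, ref), lab) for ref, lab in REFERENCE_DB)
    return label

diagnose([0.85, 0.15, 0.25])  # "wheat rust"
```

The "growing database of images" matters because matching of this kind improves as more labeled examples of each disease accumulate.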

[Western Farm Press] Smartphones and apps taking agriculture by storm

AI has also brought farmers a lot of convenience. They can now perform tasks such as starting or stopping center pivot irrigation systems from home.

“Before I might have to go out in the rain at 2 a.m. to turn off a center pivot or check to make sure it was operating,” says Schmeeckle. “Now I can turn a pivot on or off with my smartphone. I even started one while we were 300 miles away on vacation this summer, and it was still running when I got home.”

Through the IoT, sensors can be deployed wherever you want–on the ground, in water, or in vehicles–to collect data on target inputs such as soil moisture and crop health. The collected data are stored on a server or cloud system wirelessly, and can be easily accessed by farmers via the Internet with tablets and mobile phones. Depending on the context, farmers can choose to manually control connected devices or fully automate processes for any required actions. For example, to water crops, a farmer can deploy soil moisture sensors to automatically kickstart irrigation when the water-stress level reaches a given threshold.
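The threshold rule described above fits in a few lines. This sketch uses invented set points (real values depend on soil and crop) and adds a second threshold, a form of hysteresis, so the valve does not rapidly cycle on and off around a single level:

```python
# Illustrative set points; real values depend on soil and crop.
DRY_THRESHOLD = 0.25  # volumetric soil moisture: open valve below this
WET_THRESHOLD = 0.40  # close valve once moisture recovers past this

def control_valve(moisture, valve_open):
    """Return the valve's next state given the latest sensor reading."""
    if moisture < DRY_THRESHOLD:
        return True          # too dry: start irrigating
    if moisture > WET_THRESHOLD:
        return False         # wet enough: stop
    return valve_open        # in between: hold current state

control_valve(0.20, valve_open=False)  # True: irrigation kicks in
```

In an IoT deployment this logic could run server-side against readings pushed from field sensors, with the farmer's manual override simply forcing the valve state.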

[MIT Technology Review] Six ways drones are revolutionizing agriculture

The market for drone-powered solutions in agriculture is estimated at $32.4 billion. Applications include soil and field analysis, planting, crop spraying, crop monitoring, irrigation, and health assessment.

Agricultural producers must embrace revolutionary strategies for producing food, increasing productivity, and making sustainability a priority. Drones are part of the solution, along with closer collaboration between governments, technology leaders, and industry.

Lettuce Bot is a machine that can “thin” a field of lettuce in the time it takes about 20 workers to do the job by hand.

After a lettuce field is planted, growers typically hire a crew of farmworkers who use hoes to remove excess plants to give space for others to grow into full lettuce heads. The Lettuce Bot uses video cameras and visual-recognition software to identify which lettuce plants to eliminate with a squirt of concentrated fertilizer that kills the unwanted buds while enriching the soil.

Dr. Julie Carpenter: Human-Robot/AI Relationships

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. As the first post in a new series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter.


Dr. Carpenter’s first book, Culture and human-robot interaction in militarized spaces: A war story (Routledge; Amazon), expands on her research with U.S. military Explosive Ordnance Disposal personnel and their everyday interactions with field robots.

About Dr. Julie Carpenter

Julie Carpenter has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

Dr. Carpenter earned her PhD and an MS from the University of Washington, an MS from Rensselaer Polytechnic Institute, and a BA from the University of Wisconsin-Madison. She is also currently a Research Fellow in the Ethics + Emerging Sciences group at California Polytechnic State University. 

Dr. Carpenter’s first book, Culture and human-robot interaction in militarized spaces: A war story (Routledge; Amazon), expands on her research with U.S. military Explosive Ordnance Disposal personnel and their everyday interactions with field robots. The findings from this research have applicability across a range of human-robot and human-AI cooperative scenarios, products, and situations. She regularly updates her website with information about her current work.

You have done a lot of work on the emotional attachment that humans have towards robots. Can you tell us more about your work?

At its heart, my work is human-centered and culture-centered. I tend to approach things in a very interdisciplinary way, and my body of published work reflects my long-term interest in how people use technology to communicate, from film to AI.

...there were relatively few people looking at AI as the vector for human emotion when I began in this vein

The media and technologies I focus on change and evolve. I began in film theory, then much of my work was about Web-based human interactions, and more recently it has been about how people interact with robots and other forms of non-Web AI, like autonomous cars, textbots, or IoT agents such as Alexa.

But my lens for looking at things has always been rooted in a sort of anthropological interest in people and technology. Specifically, human emotional attachment to and through the technological medium interests me because there are so many nuanced possible pitfalls for the human, psychologically, ethically, emotionally, even physically.

Yet when it comes to scholarly study of topics like affection, friendship, and love, and their influence on and connectedness with other complicated topics like trust, cooperative teamwork, and decision-making, there were relatively few people looking at AI as the vector for human emotion when I began in this vein. David Levy pioneered this discussion, of course, as did Clifford Nass and Byron Reeves.

As a film theory undergraduate student, I was drawn to how people use stories to explore technology, as we do in science fiction. Looking back, I can see that even then I was influenced not only by science fiction and science fiction films in general, but particularly by ones of my own era that served as cultural touchstones and became the basis for a great deal of my early scholarly work.

So, movies like Blade Runner were something I wrote whole papers about, years before there was even a hint that robots would become a reality during the very specific and rapid period of development in the 2000s. But back then I was looking at things as ideas connected specifically to that movie director’s body of work, or to the audience for the movie, or to the culture at that time.

Blade Runner (1982).   Image source


Now I look at a movie like that as an exploration of human-robot possibilities, a reflection and influencer of popular cultural ideas, and also an inspiration to people like me, makers and researchers who have a say in developing real-world AI. I find those sorts of storytelling influences fascinating because they often set up peoples’ real-world expectations of their interactions with technology, and even help form the communication model.

Storytelling’s influence on culture is a very rich set of artifacts for exploration, and I manage to reference that idea a great deal in the way I situate research in the larger culture it is part of, however that may be defined for the scope of that work.

What do you think about media portrayals of human-robot/autonomy relationships in movies (e.g., the new Blade Runner; the movie Her)?

Cultures around the world use science fiction to explore what it means to be human...

I love science fiction stories, and as I mentioned, I frequently use science fiction as a framework for discussing our expectations of interactions with AI and robots, because research shows it definitely can influence peoples’ expectations about how to interact with AI, at least initially.

Personally, Blade Runner definitely inspired me in many ways, going back to when I was studying film theory as an undergrad and never predicted I’d be working in a field called human-robot interaction someday. I know a lot of roboticists who cite other scifi as personal inspiration, too, such as Astro Boy. Storytelling captures our imagination and prompts questions, and it is a wonderful creative springboard for discussion, as well as entertainment.

A pitfall I am less a fan of is using Isaac Asimov’s Three Laws to discuss ethics and AI. Asimov wrote the Laws purposefully allowing for ethical pitfalls so he could keep writing stories; the Laws create plot points through their fallibility. If you want to use the Three Laws (or four, if you count the Zeroth Law) to frame a discussion of ethical AI, then you have to acknowledge that they are fictional, fallible, and very purposefully incomplete in conception. They are not a real-world solution for development or policy-making, except perhaps as an example of the loopholes a framework like the Three Laws would have if it were used in the real world.

Science fiction can be a cultural touchstone and a thought exercise for framing complicated human-AI interactions, but sometimes it is used for shorthand to communicate complicated issues in a way that disregards too much nuance of the issues being discussed. I’m an Asimov fan, but I think the Laws are sometimes relied upon too much in a scientific discussion or popular news framing of ethical problems for AI.

Having said that, I personally enjoy a wide range of AI representations in fiction, from the dystopic to the sympathetic predictions. The ethical dilemmas of the Terminator or Her are both entertaining for me to contemplate in the safety of my everyday life. Considering the more far-reaching implications of the ideas they are conveying is a more serious endeavor for me, of course. How we tell stories reflects our beliefs, and also pushes those beliefs and ideas further, questioning our suppositions, and in that way also has the potential to influence new ideas about how we interact with AI.

Her (2013).   Image source


There is a rich history of stories we tell about AI that pre-dates the genre we call science fiction. Scifi is a relatively new genre label, in itself, but the idea of humans interacting with artificial life has been around forever, in various forms. All sorts of tales about humans interfering with the natural order of things to create a humanlike life outside the body--sometimes via magic spells or religious intervention--exist around the world. These AI characters take the form of golems, zombies, statues, puppets, dolls, and so on. Historically, this is a set of ideas that has universal fascination.

Cultures around the world use science fiction to explore what it means to be human, and what it means for our creation of and interactions with entities that are similar to us in some ways, often as if AI was a sociological Other.

I recently read the news of a man in China marrying the robot he created. SciFi movies are certainly becoming a reality. What are the ethical implications of human-automation romantic relationships?

We are currently in an era where we are really just beginning discussions of emerging ethics in this domain earnestly because of the enormous progress of AI and robotics over the last decade in particular.

Right now, a romantic feeling for AI is considered aberrant behavior, so it carries a very different significance than it will when AI and robots are accepted as objects that can carry a great deal of meaning for people in different situations, whether it’s as caregiver or mentor or helper or companion or romantic interest.

In other words, I don’t think we can make shorthand generalizations about a “type” of person that marries a robot or other AI very successfully as a static model, because the way we regard human-robot relationships will change as robots become part of our everyday realities and we learn to live with them and negotiate what different robots might mean to us in different ways.

I think that to an extent, eventually we will see society normalize human-robot romantic relationships as a culturally accepted option for some people. We are still going through a process of discovery about our interactions with robots now, but we do see patterns of human-robot interaction strikingly different from our interactions with other objects, and one emerging pattern is that in some conditions we treat AI and robots in socially meaningful ways that sometimes includes emotional attachment and/or affection from the person to the AI or robot.

The ethical pitfalls of a human-robot romantic relationship can come from the development end, the user end, and society’s perceptions of that relationship. From the development end, some ethical concerns involve the design of the AI itself and the human biases and influences we teach AI that learns from us, whether through direct programming or neural networks. Robot hacking and privacy concerns are thorny nests of ethical issues, too.

Say someone has a romantic or other affection for AI used in their home, and interacts with it accordingly. In that case, who has access to what the robot or AI hears you say, what it watches you do, and the information it gathers about your everyday life and your preferences for everything from dish detergent to sexual activities? What if that data were hacked, and someone tried to use the gathered information to manipulate you? These are major technical and ethical issues.

From the user end, one ethical concern is whether people who become emotionally attached to AI really recognize the lack of truly humanlike reciprocity in a human-AI relationship with current technology, and whether they understand, at root, that the AI is nowhere near humanlike intelligence, although sometimes those are the very traits of AI that can attract someone to it romantically.

Furthermore, society does not treat AI or robots like people when it comes to things like legal status, so similar ethical concerns arise in how the people around a user who reports being romantically interested in AI perceive that relationship: to declare oneself in a committed, persistent, affectionate relationship with an AI is also to acknowledge involvement in an imbalanced power dynamic.

Another ethical question rising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer their behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) that our social behaviors with robots will be something we apply as a model to human-human interactions.

Blade Runner 2049 (2017).   Image source


It is only the beginning of a time when we will formalize some of our decisions about these ethical concerns as laws and policies, and establish less formal ways of negotiating our interactions with AI via societal norms.

I’m looking forward to watching how we integrate AI technologies like robots and autonomous cars into our everyday lives, because I think a lot of good can come from using them. Our path to integrating AI into our lives is already fascinating.

Throwback Thursday: Use, Misuse, Disuse, and Abuse of Automation

In this throwback post, I will introduce some important but similar-sounding terms from the automation literature and their causes: use, misuse, disuse, and abuse. I will highlight the key takeaways from the article, followed by some light commentary.

Until recently, the primary criteria for applying automation were technological feasibility and cost. To the extent that automation could perform a function more efficiently, reliably, or accurately than the human operator, or merely replace the operator at a lower cost, automation has been applied at the highest level possible.
— Parasuraman & Riley, p. 232

The authors made the statement above 20 years ago, and the irony is that applying the highest level of automation whenever technologically feasible, without much regard for the consequences to human performance, still seems to be the dominant design and engineering philosophy.

Automation use: Automation usage and attitudes towards automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation.

— Parasuraman & Riley, p. 234

I'm not very good with directions and find myself relying a lot on Google Maps when I am in an unfamiliar city. I use it because I know that I can rely on it most of the time. In other words, my positive attitude toward and use of Google Maps (i.e., the automated navigation aid) is influenced by its high reliability, as well as by my higher confidence in Google Maps compared to my own navigational skills (there have been numerous occasions when Google Maps has come to my rescue!).

Similarly, operators in complex environments will tend to defer to automation when they think it is highly reliable and when their confidence in the automation exceeds their confidence in their own abilities to perform the task.
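This trust-versus-self-confidence comparison can be sketched in a few lines of code. This is a toy illustration of the idea, not a model from the paper; the function name, the 0-to-1 rating scale, and the bare comparison rule are all illustrative assumptions:

```python
def rely_on_automation(trust_in_automation: float, self_confidence: float) -> bool:
    """Toy reliance rule: defer to the automation when trust in it
    exceeds confidence in one's own ability to do the task.

    Both inputs are hypothetical subjective ratings on a 0-1 scale.
    """
    return trust_in_automation > self_confidence

# A driver who trusts Google Maps (0.9) more than their own sense of
# direction (0.3) defers to the automated navigation aid.
print(rely_on_automation(0.9, 0.3))  # True
```

Real operators, of course, weigh many more factors (workload, risk, fatigue), which is exactly why the literature treats automation use as a multi-determined behavior rather than a simple comparison.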

Misuse: Excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or fail to monitor the automation’s behavior. Inadequate monitoring of automated systems has been implicated in several aviation accidents.
— Parasuraman & Riley, pp. 238-239

While it is true that reliable automation may be better than humans at some tasks, the costs associated with failure of these highly reliable automated systems are high. As Rich described in his last throwback post, a potential consequence of highly reliable automated systems is the out-of-the-loop performance problem: the inability of operators to take over manual control in the event of an automation failure, due to their overreliance on the automation as well as the degradation of their manual skills.

High trust in automation can also make human operators less attentive to other sources of contradictory information; operators become so fixated on the notion that the automation is right that they fail to examine other information in the environment that seems to suggest otherwise. In research, we call this automation bias (more on this in a later post).

Misuse can be minimized by designing automation that is transparent about its state and its actions and that provides salient feedback to human operators.  Next week's throwback post will elaborate on this point.

Disuse: If a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators.
— Parasuraman & Riley, p. 244

This means that automation with a high propensity for false alarms is less likely to be trusted. For example, if the fire alarm in my building goes off all the time, I am less likely to respond to it (the cry-wolf effect). It's not as simple as saying, "just make it less sensitive!" Designing an automated system with a low false alarm rate is a bit of a conundrum, because lowering the false alarm rate raises the miss rate; that is, the fire alarm may not sound when there is a real fire.

While the cost of distrusting and disusing automation is high, the cost of missing an event can also be high in safety-critical domains. Designers should therefore consider the comparative costs of high false alarm rates and high miss rates when designing automation. The right balance will obviously depend on the context in which the automation is used.
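The trade-off described above is the classic signal detection problem: for a fixed sensor, moving the alarm threshold converts false alarms into misses and vice versa. Here is a minimal simulation sketching the idea; the Gaussian sensor distributions and the threshold values are illustrative assumptions, not numbers from the article:

```python
import random

random.seed(42)

def error_rates(threshold, n=100_000):
    """Estimate false-alarm and miss rates for an alarm that fires
    when a noisy sensor reading exceeds `threshold`."""
    false_alarms = misses = 0
    for _ in range(n):
        quiet = random.gauss(0.0, 1.0)  # reading when there is no fire
        fire = random.gauss(2.0, 1.0)   # reading during a real fire
        if quiet > threshold:
            false_alarms += 1           # alarm with no fire present
        if fire <= threshold:
            misses += 1                 # silence during a real fire
    return false_alarms / n, misses / n

# Raising the threshold ("making it less sensitive") cuts false alarms
# but inflates misses; you cannot minimize both at once.
for t in (0.5, 1.0, 1.5):
    fa, miss = error_rates(t)
    print(f"threshold {t}: false-alarm rate {fa:.2f}, miss rate {miss:.2f}")
```

Choosing the threshold is then a question of comparative costs, exactly the context-dependent judgment the post recommends.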

Abuse: Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator’s authority over the system.
— Parasuraman & Riley, p. 246

My earlier throwback post discusses the importance of considering the human performance consequences associated with automation use. Completely eliminating the human operator from the equation, on the assumption that this will eliminate human error entirely, is not a wise choice. It can leave operators with a higher workload and in a position to perform tasks for which they are not suited. This irony was discussed by Bainbridge in 1983. In short, operators' responsibilities should be based on their capabilities.


Citation:  Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230-253.

Downloadable link here.

Applications of AI Potpourri: AI curbs sex trafficking, outs potentially gay men, avoids pedestrians and more!

Today's potpourri shows the diverse applications of AI, from addressing social problems and creating unanticipated new ones to increasing efficiency in transportation.

[TechRepublic] Apple's autonomous car software uses neural networks to improve navigation, object detection

Apple scientists are working on new software, called VoxelNet, that helps self-driving cars identify pedestrians and cyclists by improving LiDAR (Light Detection and Ranging) detection without adding cameras or sensors. Driving is such an integral part of everyday living that Apple's foray into this space is certainly not surprising.

This marks Apple’s first official steps in the autonomous vehicle market, joining companies including Google, Tesla, Uber, and Intel.
Online marketplace AHA partnered with drone company Flytrex on the world’s first fully autonomous drone delivery service, in Reykjavik, Iceland. The service can cut delivery time from 25 minutes to four minutes, with a 60% reduction in cost per delivery.

[The New Yorker] The A.I. “Gaydar” Study and the Real Dangers of Big Data

AI can now infer your sexual orientation from features such as hairstyle, hair length, shoes, and polo shirts. As humans, I think we should not be making assumptions about people's sexual orientation based on their appearance. How does that help us in any way? So why are we now having AI do it for us?

Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy...When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time
Researchers have developed a new tool which uses machine learning to identify payment patterns in illicit ads on Backpage — a site commonly used to host online ads for sex work.

The new system follows peculiar or repeated bitcoin transactions that are likely used in sex trafficking, and gives authorities a heads up of which chains of payments could be signs of crimes. Every transaction on Backpage uses bitcoin.

[Reuters] Sex robots: perverted or practical in fight against sex trafficking?

The argument here is that sex robots are a great option for lonely people and that, in the future, brothels will be staffed with robots.

Sex robots can potentially replace prostitutes, reduce sex trafficking and help lonely people.

Experts say the increasingly life-like robots raise complex issues that should be considered by policymakers and the public - including whether use of such devices should be encouraged to curb prostitution and sex trafficking, for sex offenders, or for people with disabilities.
[Reprint] Self-Driving Cars: Enabling Safer Human–Automation Interaction
This is a reprint of an article authored by Arathi Sethumadhavan, part of the series “Research Digest,” originally published in Ergonomics in Design.

Several car manufacturers and technology companies currently have autonomous cars on their road maps. However, highly reliable automated systems introduce a huge conundrum (Endsley, 2016): they make it difficult for human operators to monitor critical pieces of information and take over manual control when needed.

For example, drivers with high automation trust are less likely to monitor the automated driving system (e.g., Hergeth, Lorenz, Vilimek, & Krems, 2016a) and more likely to engage in nondriving tasks (e.g., Carsten, Lai, Barnard, Jamson, & Merat, 2012).

Whether we like it or not, automation is here to stay and to grow. Given that, how can we make human–automation interaction safer (Endsley, 2016; Hergeth, Lorenz, Vilimek, & Krems, 2016b)?

  • Allow automation to degrade gracefully.
  • Create transparent automation user interfaces that enable human operators to understand what is going on and to make predictions.
  • As the automation learns new behaviors, convey the changes to operators to enable them to maintain an accurate mental model of the automation.
  • Design automated systems that cooperate, coordinate, and collaborate with human operators.
  • Enable shared situation awareness between the human and the automation, which in turn promotes goal alignment, knowing what each team member (i.e., human and the machine) is doing, and permits reallocation of responsibilities and communication of strategies and actions.
  • Allow operators to experience automation failure situations during practice trials − which is more effective than merely informing them about automation failures − to prevent the effects of the “first automation failure.” For example, in a simulated driving task, driver performance in the first manual takeover situation was better with prior familiarization with takeover requests and worse without it.
  • Acknowledge that there are individual differences in responding to consecutive automation failures and provide training to improve working memory and sustained attentional skills to enable faster response times (Jipp, 2016).


Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task substitution in semiautomated driving: Does it matter what aspects are automated? Human Factors, 54, 747−761. http://journals.sagepub.com/doi/10.1177/0018720812460246

Endsley, M. R. (2016). From here to autonomy: Lessons learned from human–automation research. Human Factors, 20, 1−23. http://journals.sagepub.com/doi/10.1177/0018720816681350

Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016a). Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving. Human Factors, 58, 509−519.

Hergeth, S., Lorenz, L., Vilimek, R., & Krems, J. F. (2016b). Prior familiarization with takeover requests affects drivers’ takeover performance and automation trust. Human Factors, published online December 20, 2016. http://journals.sagepub.com/doi/10.1177/0018720816678714

Jipp, M. (2016). Reaction times to consecutive automation failures: A function of working memory and sustained attention. Human Factors, 58, 1248–1261.


Throwback Thursday: The Out-of-the-Loop Performance Problem and Level of Control in Automation

Dust off your Hypercolor t-shirts, and fanny packs filled with pogs because we are going back to the 1990s for this week's Throwback post.

By definition, when users interact with automated systems, they are carrying out some of the task while the automation does the rest.  When everything is running smoothly, this split in task allocation between user/machine causes no outward problems; the user is, by definition, “out of the loop” in part of the task.

Problems only start to appear when the automation fails or is unavailable and the user is suddenly left to do the activities previously carried out by automation.  The user is now forced back “into the loop.”

I sort of experienced this when I got my new 2017 car.  Naturally, as an automation researcher I asked for “all the things!” when it came to driver assistance features (yeah, for “research purposes”):  active lane assist, adaptive cruise control with stop and go, autonomous emergency braking, auto-high-beams, auto-wipers, and more.

The combined driver assistance features probably equated to somewhere around Level 2 autonomy (see figure).  I had fun testing the capabilities and limitations of each feature (safely!).  They worked, for the most part, so I turned them on and forgot about them for the past 9 months.

However, while my car was serviced recently, I drove a loaner car that had none of the features and was quite shocked to see how sloppy my driving had become.  Driving on the interstate seemed much more effortful than before.  In addition, I had become accustomed to playing with my phone, messing with my music player or doing other things while on long stretches of straight highway.  I could no longer do this safely with no driver assistance features.  

This was when I realized how much of the lower-level task of driving (staying in the lane, keeping a constant distance to the car ahead, turning on and off my high beams) was not done by me.  As an automation researcher, I was quite surprised.

This anecdote illustrates two phenomena in automation: complacency and the resultant skill degradation.  Complacency, in my case, was how easily and willingly I gave up a good chunk of the driving task to automation in my personal car.  I admit to this complacency and high trust, but they came only after several weeks of testing the limits of the automation to understand the conditions where it worked best (e.g., lane keeping did not work well with high-contrast road shadows).  It is doubtful that regular drivers do this.

Because of my complacency (and high trust), I had mild skill degradation: the ability to smoothly and effortlessly maintain my lane and car distance.  

You may have experienced a similar disorientation when you use your phone for GPS-guided directions and it gets taken away (e.g., you move to a low signal area, phone crashes).  Suddenly, you have no idea where you are or what to do next.

So what, exactly, causes this performance degradation when automation is taken away?

This "out of the loop" (OOTL) performance problem (decreased performance due to being out of the loop and suddenly being brought back into it) is the topic of this week's paper by Endsley and Kiris (1995).  In the paper, Endsley and Kiris explored possible specific causes of, and solutions to, the out-of-the-loop performance problem.

It is the central thesis of this paper that a loss of situation awareness (SA) underlies a great deal of the out-of-the-loop performance problem (Endsley, 1987).
— p. 382

Endsley and Kiris make the claim that most of the problems associated with OOTL are due to a loss of situation awareness (SA). SA is a concept that Endsley refined in an earlier paper.  It basically means your awareness of the current situation and your ability to use this information to predict what will happen in the near future.  Endsley defines situation awareness as:

the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future
— p. 392

Within this definition of SA, there are three levels:

  1. Level 1: perception of the elements (cues) in the environment:  can you see the warning light on the dashboard?
  2. Level 2: comprehension of the meaning of the perceived elements:  do you understand what the engine symbol means on the dashboard?
  3. Level 3: projection of the future: given what you know, can you predict whether braking is needed to prevent a collision?

In this paper, they argued that the presence of automation essentially interfered with all levels of situation awareness:

In many cases, certain critical cues may be eliminated with automation and replaced by other cues that do not result in the same level of performance.
— p. 384

Much of the existing research at the time had only examined physical/manual tasks, not cognitive/decision-making tasks.  The purpose of Endsley and Kiris' paper was:

In order to investigate the hypothesis that the level of automation can have a significant impact on the out-of-the-loop performance problem and to experimentally verify the role of situation awareness in this process, a study was conducted in which a cognitive task was automated via a simulated expert system.
— p. 385

Their key hypothesis was that as the level of automation increased (see Arathi's earlier Throwback post on levels of automation), situation awareness would decrease. This would be evidenced specifically by increased time for users to make a decision and by reduced confidence in decisions, since users would feel less skilled or qualified (because they were so OOTL).

The bottom line was that, yes, the hypotheses were confirmed: higher levels of automation do seem to negatively impact situation awareness, evidenced as longer decision times (because you have less SA) and slightly reduced confidence in your decisions.  Using the examples above, the driver assistance features (a lower-level form of automation) did not really lead to loss of situation awareness for me, but they did lead to skill degradation.  In the GPS example, however, a much higher form of automation, there WAS a loss of situation awareness.

So what do we do with this information?  First, it should clearly show that high or full autonomy for higher-level, cognitive-type tasks is not a very desirable goal.  It might be, but only if the automation were proven to be 100% reliable, which can never be assured.  Instead, there will be situations where the autonomy fails and the user has to assume manual control (think of a self-driving car that fails).  In these circumstances, the driver will have dramatically reduced SA and thus poor performance.

The surprising design recommendation?

this study did find that full automation produced more problems than did partial automation
— p. 392

Let's hope designers of future autonomy are listening!

Reference: Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381–394.

Potpourri: Humorously-runaway AI

Runaway AI is a fear that some researchers have about AI.  While it may be technologically too soon to have this fear, we are getting close.  Here are some recent examples of human-automation or human-AI partnerships running amok, with humorous results.

[Sunday Times] Jeremy Clarkson Says He Nearly Crashed While Testing An Autonomous Car (paywalled article); [CarScoops summary]

“I drove a car the other day which has a claim of autonomous capability and twice in the space of 50 miles on the M4 it made a mistake, a huge mistake, which could have resulted in death,” he said. “We have to be very careful legally, so I’m not going to say which one.”
In June, U.S. Immigration and Customs Enforcement (ICE) released a letter saying that the agency was searching for someone to design a machine-learning algorithm to automate information gathering about immigrants and determine whether it can be used to prosecute them or deny them entry to the country. The ultimate goal? To enforce President Trump’s executive orders, which have targeted Muslim-majority countries, and to determine whether a person will “contribute to the national interests”—whatever that means.
What I’ve heard is that this is a machine learning problem — that, more or less, for some reason the machine learning algorithm for autocorrect was learning something it never should have learned.
As far as debuts go, there have been more successful ones. During its first hour in service, an automated shuttle in Las Vegas got into an accident, perhaps fittingly the result of a flesh-and-blood human truck driver slowly driving into the unsuspecting robocar, according to a AAA PR representative on Twitter. Nobody was hurt and the truck driver was cited.
What rights should robots have?
On October 25, Sophia, a delicate looking woman with doe-brown eyes and long fluttery eyelashes made international headlines. She’d just become a full citizen of Saudi Arabia — the first robot in the world to achieve such a status.

Sophia’s announcement also raises a number of Blade Runner-esque questions. What does it mean to be a citizen? What rights does Sophia hold? Saudi Arabia has not elaborated on this so far.

Saudi Arabia recently announced that it has granted citizenship to Sophia, a robot created by Hanson Robotics.  This came as a surprise to most: a female robot that does not wear a hijab has been granted citizenship in a country where women were granted the right to drive only recently, and where the children of Saudi Arabian women married to foreigners do not get citizenship.

But this move does raise an important question. What does it mean for a robot to be a citizen of a country? What are Sophia's rights and responsibilities?

In my quest to find out more about what robot rights mean, I stumbled upon a few articles with very interesting perspectives, which I am presenting as a potpourri below.

[The Guardian] Give robots 'personhood' status, EU committee argues

The proposed legal status for robots would be analogous to corporate personhood, which allows firms to take part in legal cases both as the plaintiff and respondent.
“How does it affect people if they think you can have a citizen that you can buy. Giving AI anything close to human rights would allow firms to pass off both legal and tax liability to these completely synthetic entities,” says Bryson.
If robots become citizens, they cannot be property, but without sentience, they cannot exercise self-determination. How do they exercise their constitutional rights? Do they get legal guardians? What if Sophia’s guardian advocates for her against Hanson?
If we continue to develop sophisticated forms of artificial intelligence, we have a moral obligation to improve our understanding of the conditions under which artificial consciousness might genuinely emerge.
MIT Media Lab researcher and robot ethics expert Kate Darling uses the example of parents who tell their child not to kick a robotic pet—sure, they don’t want to shell out money for a new toy, but they also don’t want their kid picking up bad habits. A kid who kicks a robot dog might be more likely to kick a real dog or another kid.

We generally don’t want to perpetuate destruction or violence, regardless of who—or what—is on the receiving end.

What is clear from these articles is that, even though there are conflicting viewpoints, robot rights are NOT a trivial issue.  While I agree with Kate Darling's view that mistreating a robot has the potential to lead to other destructive behavior, I can also see Bryson's point that being able to "buy" a citizen does not send the right message. The legal and tax liabilities that corporations could pass to robots granted personhood will also need further clarification. I am hoping we have more clarity around these issues soon.

[Repost] Prominent Figures Warn of Dangerous Artificial Intelligence (it's probably a bad Human Factors idea, too)

This is an edited repost from the Human Factors Blog, originally published in 2015.

Recently, some very prominent scientists and other figures have warned of the consequences of autonomous weapons, or more generally artificial intelligence run amok.

The field of artificial intelligence is obviously a computational and engineering problem: designing a machine (i.e., robot) or software that can emulate thinking to a high degree.   But eventually, any AI must interact with a human either by taking control of a situation from a human (e.g., flying a plane) or suggesting courses of action to a human.

I thought this recent news item about potentially dangerous AI might be a great segue to another discussion of human-automation interaction.  Specifically, to a detail that does not frequently get discussed in splashy news articles or by non-human-factors people:  degree of automation. This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).

First, to HF researchers, automation is a generic term that encompasses anything that carries out a task that was once done by a human: robotic assembly, medical diagnostic aids, digital camera scene modes, and even hypothetical autonomous weapons with AI.  These disparate examples simply differ in degree of automation.

Let's back up for a bit: Automation can be characterized by two independent dimensions:

  • STAGE or TYPE: What is it doing and how is it doing it?
  • LEVEL: How much is it doing?

Stage/Type of automation describes WHAT tasks are being automated and sometimes HOW.  Is the task perceptual, like enhancing vision at night or amplifying certain sounds?  Or is the automation carrying out a task that is more cognitive, like generating the three best ways to get to your destination in the least amount of time?

The second dimension, Level, refers to the balance of tasks shared between the automation and the human; is the automation doing a tiny bit of the task and then leaving the rest to the user?  Or is the automation acting completely on its own with no input from the operator (or ability to override)?
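To make the two dimensions concrete, here is a minimal sketch in Python. The stage names, the level scale, and the numeric "degree" heuristic are my own illustration of the idea, not the actual scheme from Wickens et al. (2010):

```python
from dataclasses import dataclass

# Illustrative sketch only: stage/type and level as two independent dimensions.
STAGES = [
    "information acquisition",   # perceptual: enhance/sense data
    "information analysis",      # cognitive: make inferences
    "decision selection",        # choose among courses of action
    "action implementation",     # carry out the action
]

@dataclass
class Automation:
    name: str
    stage: str  # WHAT is automated (and roughly how)
    level: int  # how much of the task it takes over: 1 (low) to 5 (full)

    def degree(self) -> int:
        """Crude proxy: later stages at higher levels = higher degree."""
        return (STAGES.index(self.stage) + 1) * self.level

night_vision = Automation("night-vision goggles", "information acquisition", 2)
weapon = Automation("hypothetical autonomous weapon", "action implementation", 5)

# The weapon sits far up the "degree of automation" line; the goggles
# barely leave the origin.
print(night_vision.degree(), weapon.degree())
```

The point of the sketch is only that the two dimensions vary independently: a system can automate a late stage at a low level, or an early stage at a high level.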

Figure 1. Degrees of automation (Adapted from Wickens et al., 2010)

See Figure 1.  If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y of a chart, it becomes clearer how various everyday examples of automation fit into the scheme.  As LEVEL and/or TYPE increases, we get a higher degree of automation (dotted line).

Mainstream discussions of AI and its potential dangers seem to be focusing on a hypothetical ultra-high degree of automation.  A hypothetical weapon that will, on its own, determine threats and act.  There are actually very few examples of such a high level of automation in everyday life because cutting the human completely "out of the loop" can have severely negative human performance consequences.

Figure 2. Approximate degrees of automation of everyday examples of automation

Figure 2 shows some examples of automation and where they fit into the scheme.

Wickens et al. (2010) use the phrase "the higher they are, the farther they fall."  This means that when humans interact with greater degrees of automation, they do fine if it works correctly, but encounter catastrophic consequences when the automation fails (and it always will at some point).  Why?  Users get complacent with high-DOA (degree of automation) systems, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot recover from the failure so easily.

You may have experienced a mild form of this if your car has a rear-backup camera.  Have you ever rented a car without one?  How do you feel? That feeling of being "out of the loop" tends to get magnified with higher degrees of automation.  More on this in an upcoming throwback post.

So, highly autonomous weapons (or any high degree of automation) are not only a philosophically bad/evil idea, they are bad for human performance!

For more discussion on the degree and level of automation, see Arathi's recent Throwback post.

Throwback Thursday: A model for types and levels of automation

This is our second post in our “throwback” series. In this post, I will take you through an article written by some of the best in the human factors and ergonomics field: the late Raja Parasuraman, Tom Sheridan, and Chris Wickens. Though several authors have introduced the concept of automation being implemented at various levels, for me this article nailed it.

The key excerpts from this article are highlighted below along with my commentary. Companies chasing automation blindly should keep these points in mind when designing their systems.

Automation is not all or none, but can vary across a continuum of levels, from the lowest level of fully manual performance to the highest level of full automation.
— Parasuraman, Sheridan, & Wickens, p. 287

This means that between the extremes of a machine offering no assistance to a human and a machine doing everything for the human, there are other automation design options. For example, the machine can offer a suggestion; implement a suggestion if the human approves; do everything autonomously and then inform the human; or do everything autonomously and inform the human only when asked. Let's consider the context of driving. In the example below, as we move from 1 to 4, the level of automation increases.

  1. I drive my car to work.
  2. I drive my car; KITT (from Knight Rider) tells me the fastest route to work, but I choose to override its suggestion.
  3. I drive my car; KITT tells me the fastest route to work and does not give me the option to override its suggestion.
  4. KITT plans and drives me to work.
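Those four points on the continuum can be written down as data. This is a toy sketch; the labels and the override rule are my own paraphrase, not the full multi-level scale from the literature:

```python
# Toy encoding of the KITT examples above as points on a low-to-high
# continuum of automation levels.
LEVELS = {
    1: "human does everything (I drive to work unaided)",
    2: "machine suggests, human may override (KITT proposes a route)",
    3: "machine decides, human executes (KITT's route is binding)",
    4: "machine does everything (KITT plans and drives)",
}

def human_has_final_say(level: int) -> bool:
    """In this toy scale, the human retains veto power only at levels 1-2."""
    return level <= 2

for lvl, desc in LEVELS.items():
    print(lvl, desc, "| human veto:", human_has_final_say(lvl))
```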
Automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic.
— Parasuraman, Sheridan, & Wickens, p. 286

The way humans process information can be divided into four stages:

  1. information acquisition, which involves sensing data
  2. information analysis, which involves making inferences from data
  3. decision and action selection, which involves choosing among various options
  4. action implementation, which involves carrying out the action

Here are examples of automation applied at each stage:

  1. information acquisition, which involves sensing data
    • Example: night-vision goggles enhance external data
  2. information analysis, which involves making inferences from data
    • Example: the historical MPG graph in some cars
  3. decision and action selection, which involves choosing among various options
    • Example: Google Maps routes to a destination, where it presents three possible routes based on different criteria
  4. action implementation, which involves carrying out the action
    • Example: automatic stapling in a photocopier
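The list above can be condensed into a small lookup table. The stage names come from the paper; the dictionary itself is just my shorthand:

```python
# The four stages of the Parasuraman, Sheridan, & Wickens (2000) model,
# each paired with the everyday example given above.
STAGE_EXAMPLES = {
    "information acquisition": "night-vision goggles enhance external data",
    "information analysis": "historical MPG graph in some cars",
    "decision and action selection": "Google Maps presents three candidate routes",
    "action implementation": "automatic stapling in a photocopier",
}

# Automation can be applied, at some level, to any (or several) of the stages.
for stage, example in STAGE_EXAMPLES.items():
    print(f"{stage}: {example}")
```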

The authors note that automation can be applied, at some level, to each of these stages of human information processing.

An important consideration in deciding upon the type and level of automation in any system design is the evaluation of the consequences for human operator performance in the resulting system.

— Parasuraman, Sheridan, & Wickens, p. 290

Choosing an automation design without any regard for the strengths and limitations of the human operator, or for the characteristics of the environment in which the operator works (e.g., high stress), is not an effective strategy.  When choosing the degree of automation, it is important to consider the impacts it may have on the operator.

  • How would it affect the operator's workload?
  • How would it affect the operator's understanding of the environment (in research we call this situation awareness)?
  • How would it affect the combined operator-machine performance?
  • Would operators over-trust the machine and be unable to overcome automation failures?

It is worth noting that NHTSA's current description of vehicle autonomy (see figure) is NOT human-centered and is instead focused on the capabilities and tasks of the machine.


Citation: Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30, 286-297.

Downloadable link here.


[Reprint] Human-Robot Interaction
This is a reprint of an article, authored by Arathi Sethumadhavan, and is part of the series “Research Digest,” originally published in Ergonomics in Design.

Personal service robots are predicted to be the next big thing in technology (e.g., Jones & Schmidlin, 2011). The term personal service robot refers to a type of robot that will assist people in myriad household activities, such as caring for the elderly, gardening, or even assisting children with their homework. Jones and Schmidlin (2011) examined the factors that need to be taken into consideration in the design of personal service robots.

For example, a personal service robot must be able to do the following:

  • Understand users’ intentions and infer the ability of users to accomplish tasks (e.g., does a senior citizen want to get medicine from the cupboard, and, if so, can the senior citizen get the medicine without any help?).
  • Determine the appropriate time to interrupt users (e.g., stopping a person on her way to work to inform her that a trivial task is complete may not be an appropriate time to intercede).
  • Approach users from the appropriate direction (e.g., approaching from front or rear vs. left or right). This is based on the user group (e.g., women vs. men) and the circumstance (e.g., user is sitting vs. standing).
  • Position itself at an appropriate distance from users. This distance is dependent on the users’ attitudes toward robots.
  • Capture users’ attention by identifying receptive users, positioning itself appropriately, and speaking to users.

The physical appearance of a robot is another important element that designers need to take into account. Appearance plays a significant role in the capabilities that users perceive a robot to possess. For example, Lee, Lau, and Hong (2011) found that users expected more emotion and communication (e.g., speech) capabilities from human-like robots compared with machine-like robots.

Further, the appearance of a robot influenced the environment in which it is likely to be used. Specifically, human-like robots (which are expected to have more warmth) were preferred for social and service occupations that required interaction with humans compared with task-oriented occupations.

Like personal service robots, professional robots are becoming increasingly popular. These robots assist people with professional tasks in nonindustrial environments. For example, professional robots are used in urban search-and-rescue missions, with operators remotely in control. Designing robots for use in such complex environments brings a unique set of challenges.

For example, Jones, Johnson, and Schmidlin (2011) found that one of the problems involved with teleoperating urban search-and-rescue robots is that the robot gets stuck because operators lack the ability to accurately judge whether they could drive a robot through an aperture. In that situation, operators may have to jeopardize their lives to retrieve the robot.

The failure to make accurate judgments arises because driveability decisions are based solely on whether the robot is smaller or larger than the aperture and not on the operator's ability to drive the robot through the aperture.

In summary, bear in mind the following points when designing your “R2D2”:

  • A personal service robot must be able to infer the user’s intentions and desires; must determine whether the user is able to complete the task without assistance; needs to decide when to interrupt the user; has to approach and position itself at a suitable distance from the user; and needs to be able to engage the user.
  • The appearance of robots should match users’ mental models. Humans expect human-like robots to have warmth capabilities (e.g., emotion, cognition) and prefer human-like robots in occupations requiring interactions with people. However, not all robots need to be human-like; machine-like robots are considered suitable for task-oriented, blue-collar occupations.
  • Teleoperating a robot successfully through an aperture is dependent not only on the robot’s width but also on a safety margin that is associated with the operator’s control of the robot. Therefore, robots used in urban search-and-rescue missions must be designed to account for the safety margin that operators fail to consider when making driveability judgments.


Jones, K. S., Johnson, B. R., & Schmidlin, E. A. (2011). Teleoperation through apertures: Passability versus driveability. Journal of Cognitive Engineering and Decision Making, 5, 10–28.

Jones, K. S., & Schmidlin, E. A. (2011). Human-robot interaction: Toward usable personal service robots. In Reviews of Human Factors and Ergonomics (Vol. 7, pp. 100–148). Santa Monica, CA: Human Factors and Ergonomics Society.

Lee, S., Lau, I. Y., & Hong, Y. (2011). Effects of appearance and functions on likability and perceived occupational suitability of robots. Journal of Cognitive Engineering and Decision Making, 5, 232–250.


What Sci-Fi Movies Can Tell Us about Future Autonomy

I gave a talk a few months ago to a department on campus.  It is based on work that Ewart de Visser, PhD, and I are doing on adaptive trust repair with autonomy.  That is a complex way of saying: the possibility of giving machines an active role in managing human-machine trust.

The talk is based on a paper currently under review. It is meant to be fun, but it is also an attempt to seriously consider the shape of future autonomy based on fictional representations; sci-fi movies serve as the data.  It is about 40 minutes long.

"Alexa, what should I wear today?"
Do you prefer “fashion victim” or “ensembly challenged”?
— Cher Horowitz, Clueless (1995)

I am a bit of a style junkie and draw inspiration from the uber-talented fashion designers and celebrities I follow on Instagram.  What I really enjoy is designing my own outfits and accessories, drawing inspiration from the amazing pictures I see on Instagram, in a cost-effective manner.

Echo Look, the latest virtual assistant from Amazon that provides wardrobe recommendations, grabbed my attention, but for all the wrong reasons.

Amazon introduced its latest Alexa-powered device, a gadget with a built-in camera that is being marketed as a way to photograph, organize and get recommendations on outfits. Of course, Amazon will then try to sell you clothing, too.

It reminded me of the scene in Clueless (1995) where Cher, the main character, uses an early "app" to help her decide what to wear.

Echo Look can serve as your personal stylist and provide recommendations if you are torn between two outfits. The recommendation is based on "style trends" and "what flatters" the user.

But from where does the Echo Look draw its style trends?  Personally, I will work with a human stylist only after making sure he or she has a good grasp of current trends and understands my needs (comfort and practicality are important), my sensibilities, and, most importantly, my personality.

Fashion-wise, blindly following the current trends is not an effective strategy.  So, how can I trust a machine that does not know me? To gain trust, the machine should convey to me how it arrived at its decision. Or even better, present the raw data gathered and let me decide what fits me best.

In the automation literature, we refer to this as stages of automation (more in depth on this topic later).  What gets automated (deciding for the human versus simply gathering the data for the human) is an important design decision that affects how people perform and behave with automation.  I think that high-level automation, simply deciding, does not work in this context!

But Rich disagrees (in research we call this “individual differences”). In the process of writing this post, I found out that Rich is a big fan of Bonobos. Being, as he says, "style-impaired," he especially appreciated a new feature of the app that suggests pairings of the items he's purchased with one another or with new items in the store.

He shared some screenshots of the Bonobos app, which I thought was pretty cool. After you select an item, it will instantly create a complete outfit based on occasion and temperature.  Because it is a brand he trusts, and he is not knowledgeable about style, he rarely questions the decisions (Ed: maybe I question why they keep pushing Henleys; I like collars).

So what makes my opinion of Echo different from Rich's reaction to the decision aid in the Bonobos app?  


Experience comes from years of experimenting with different things (and in the process creating some disastrous looks) and understanding trends, but most importantly from understanding your body and skin, and the textures and colors that suit you best. With experience comes efficiency (in research we call this "expert versus novice differences"). If I am an expert, why would I need to rely on a machine to tell me what to wear or what looks good on me?

However, I am not dismissing the usefulness of Echo for everyone. For millennials who do most of their shopping on Amazon, Echo could provide a lot of value by putting together their outfits based on their shopping choices (Stitch Fix is a similar, popular concept). 


Even the simplest clothes can look stylish with the right shoes and statement jewelry. I genuinely enjoy the art of styling an outfit. This activity stimulates my right brain.  Why would I give up this activity? For those like Mark Zuckerberg, who think that dressing in the same outfit every day will help them save their energy for other important things in life (which he does!), Echo may do wonders.


Now, do I dislike Echo while Rich likes his Bonobos app because I am female and he is male? So, statistically speaking, are men more fashion-impaired than women? Or, to be politically correct, do women have more fashion wisdom than men? I don't know. What I do know is that some of my favorite fashion designers (e.g., Manish Malhotra, Prabal Gurung) are men.

Automation Design

A major difference between Echo and the Bonobos app is their level of automation. The Bonobos app provides recommendations on pairing outfits that users have already purchased (an important point: users made the purchasing decision on their own), but users are empowered to use the data presented by the app to decide whether or not they want to follow the recommendations.  The Bonobos app also presents alternate outfits if the first choice is unsatisfactory.  This would be considered a form of “low decision automation,” where it alleviates a moderate load of decision making but leaves some for the user.

Echo, on the other hand, tells users "yay" or "nay" on their outfits but gives users no information on how it arrived at the decision. The lack of transparency is a major drawback in how Echo is designed and a big reason why it won't work for me. It also represents a much higher form of decision automation, giving users no options other than the binary yay or nay.
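One way to summarize the contrast is in a few lines of code. This framing is purely my own (the two attributes are a simplification of transparency and user choice, not anything from Amazon's or Bonobos' documentation):

```python
from dataclasses import dataclass

@dataclass
class DecisionAid:
    name: str
    explains_reasoning: bool   # does it show how it reached its verdict?
    offers_alternatives: bool  # can the user choose among other options?

    def keeps_user_in_loop(self) -> bool:
        """Rough proxy for 'low' vs 'high' decision automation."""
        return self.explains_reasoning or self.offers_alternatives

bonobos = DecisionAid("Bonobos app", explains_reasoning=True,
                      offers_alternatives=True)
echo_look = DecisionAid("Echo Look", explains_reasoning=False,
                        offers_alternatives=False)

print(bonobos.keeps_user_in_loop())    # the user still decides
print(echo_look.keeps_user_in_loop())  # binary yay/nay, no visibility
```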

So, will I ever ask the question, "Alexa, what should I wear today?" I will if Alexa convinces me that she understands me and my personality (she should be my fashion partner who evolves as I evolve), collaborates with me, clearly conveys her thought processes, and is nice to me (in research, we call this automation etiquette)!

Style is a way to say who you are without having to speak.
— Rachel Zoe
Autonomy Potpourri: Evil smart houses, trucker hats, & farming

Upcoming Netflix movie: Evil smart house terrorizes street-smart grifter

I'm sure this movie will give people positive and accurate portrayals of AI/autonomy and smart-home technology, like Sharknado did for weather phenomena/marine life...

Monroe plays a victim who was a street-smart grifter that has been kidnapped and held captive in order to be part of a fatal experiment. The only thing standing in the way of her freedom is Tau, an advanced artificial intelligence developed by her captor, played by Skrein. Tau is armed with a battalion of drones that automate a futuristic smart house.

Trucker hat that alerts of sleepiness

I bet the main issue will be a problem of false alarms, leading to disuse.

Being a trucker means driving huge distances on demanding deadlines. And one of the biggest dangers in trucking is the threat of drivers falling asleep at the wheel. To celebrate 60 years of truck production in Brazil, Ford decided to try to help the problem by creating a hat that tracks head movements and alerts drivers in danger of snoozing.
Robot crew farms a field of barley

Driverless tractors, combine harvesters and drones have grown a field of crops in Shropshire in a move that could change the face of farming. From sowing the seeds to tending and harvesting the crop, the robot crew farmed a field of barley without humans ever setting foot on the land in a world first. The autonomous vehicles followed a pre-determined path set by GPS to perform each task, while the field was monitored by scientists using self-driving drones.
Throwback Thursday: The Ironies of Automation
If I have seen further, it is by standing on the shoulders of giants
— Isaac Newton, 1675

Don't worry, our Throwback Thursday doesn’t involve embarrassing pictures of me or Arathi from 5 years ago.  Instead, it is more cerebral.  The social science behind automation and autonomy has a long and rich history, and despite being one of the earliest topics of study in engineering psychology, it has even more relevance today.

Instead of re-inventing the wheel, why don't we look at the past literature to see what is still relevant?

In an effort to honor that past but also inform the future, the inaugural "Throwback Thursday" post will highlight scientific literature from the past that is relevant to modern discussion of autonomy.

Both Arathi and I have taught graduate seminars in automation and autonomy so we have a rich treasure trove of literature from which to draw.  Don't worry: while some of the readings can be complex and academic, in deference to our potentially diverse readership, we will focus on key points and discuss their relevance today.

The Ironies of Automation

In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation.  The words of this paper ring especially true today when the design strategy of some companies is to consider the human as an error term to be eliminated:

The designer’s view of the human operator may be that the operator is unreliable and inefficient, so should be eliminated from the system.
— Bainbridge, p. 775

But is this design strategy sustainable?  Bainbridge later wisely points out that:

The second irony is that the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.
— Bainbridge, p. 775

The paper then discusses how, under such an approach, many unintended problems arise.  The ultimate irony, however, is that the implementation of very high levels of automation (including eliminating the driver in a self-driving car) will ultimately lead to a higher workload burden for the "passenger."

A more serious irony is that the automatic control system has been put in because it can do the job better than the operator, but yet the operator is being asked to monitor that it is working effectively.
— Bainbridge, p. 775
[Reprint] Automation: Friend or Foe
This is a reprint of an article, authored by Arathi Sethumadhavan, from a series of articles originally published in Ergonomics in Design in April 2011.

With advancements in technology, automated systems have become an integral part of our society. Modern humans interact with a variety of automated systems every day, ranging from the timer in the microwave oven to the global positioning system in the car to the elevator button.

Just as automation plays a pivotal part in improving the quality of living, it also plays an integral role in reducing operator errors in safety-critical domains. For example, using a simulated air traffic control task, Rovira and Parasuraman (2010) showed that the conflict detection performance of air traffic service providers was higher with reliable automation compared with manual control.

Although automated systems offer several benefits when reliable, the consequences associated with their failure are severe. For example, Rovira and Parasuraman (2010) showed that when the primary task of conflict detection was automated, even highly reliable (but imperfect) automation resulted in serious negative effects on operator performance. Such performance decrements when working with automated systems can be explained by a phenomenon called automation-induced complacency, which refers to lower-than-optimal monitoring of automation by operators (Parasuraman & Manzey, 2010).

High operator workload and high automation reliability contribute to complacency. Experts and novices as well as individuals and teams are prone to automation-induced complacency, and task training does not appear to completely eliminate its effects. However, performance decrements arising from automation-induced complacency can be addressed by applying good design solutions.

In this issue of the Research Digest, John D. Lee, professor of industrial and systems engineering at the University of Wisconsin–Madison, and Ericka Rovira, assistant [Eds. now associate] professor of engineering psychology at West Point, provide automation design guidelines for practitioners based on their research and expertise in the area of human-automation interaction.

What factors need to be taken into consideration when designing automated systems?

Ericka Rovira

  • Determine the level of operator involvement. This should be the first step, as discussed in Rovira, McGarry, and Parasuraman (2007). How engaged should the operator be? Is the operator expected to take over control in the event of an automation failure?
  • Determine the degree of automation appropriate for the domain. The appropriate degree of automation is closely tied to the level of operator involvement. The level of automation in a programmable stopwatch can be very different from the degree of automation in a military reconnaissance task. In the latter task, the failure of the automated aid can have disastrous consequences. As a rule of thumb, as the degree of automation increases, operator involvement declines, and as a result, there is less opportunity for the operator to recover in the face of an automation error.
  • Design automated aids in such a way that operators have adequate time to respond to an automation failure.
  • Make the automation algorithm transparent so that operators are able to build a mental picture of how the automation is functioning. Providing operators with information on the uncertainties involved in the algorithm can help them engage in better information sampling and consequently help them respond quickly to automation failures.

John Lee

Make automation trustable. Appropriate trust and reliance depends on how well the capabilities of the automation are conveyed to the operator. Specific design considerations (Lee & See, 2004) include the following:

  • Design the automation for appropriate trust and not simply greater trust.
  • Show the past performance of the automation.
  • Illustrate the automation algorithms by revealing intermediate results in a way that is comprehensible to the operators.
  • Simplify the algorithms and operation of the automation to make it more understandable.
  • Show the purpose of the automation, design basis, and range of applications in a way that relates to operators’ goals.
  • Evaluate any anthropomorphizing of the automation to ensure appropriate trust.
  • Show how the context affects the performance of the automation and support operators’ assessment of situations relative to the capabilities of the automation.
  • Go beyond the individual operator working with the automation. That is, take into consideration the cultural differences and organizational structure when designing automated systems, because this can influence trust and reliance on automation.

In conclusion, whether automation is an operator’s friend or foe depends largely on how well practitioners are able to consider automation design principles that are paramount for effective human-automation interaction – some of which are outlined here – when designing these systems.
