Dr. Nancy Cooke: Human-Autonomy Teaming, Synthetic Teammates, and the Future
Nancy J. Cooke is a professor of Human Systems Engineering at Arizona State University and is Science Director of the Cognitive Engineering Research Institute in Mesa, AZ. She also directs ASU’s Center for Human, Artificial Intelligence, and Robot Teaming and the Advanced Distributed Learning Partnership Lab.
She received her PhD in Cognitive Psychology from New Mexico State University in 1987. Dr. Cooke is currently Past President of the Human Factors and Ergonomics Society, chaired the National Academies Board on Human Systems Integration from 2012 to 2016, and served on the US Air Force Scientific Advisory Board from 2008 to 2012. She is a member of the National Academies of Sciences, Engineering, and Medicine Committees on High-Performance Bolting Technology for Offshore Oil and Natural Gas Operations and the Decadal Survey of Social and Behavioral Sciences and Applications to National Security.
In 2014 Dr. Cooke received the Human Factors and Ergonomics Society’s Arnold M. Small President’s Distinguished Service Award. She is a fellow of the Human Factors and Ergonomics Society, the American Psychological Association, the Association for Psychological Science, and The International Ergonomics Association. Dr. Cooke was designated a National Associate of the National Research Council of the National Academies of Sciences, Engineering, and Medicine in 2016.
Dr. Cooke’s research interests include the study of individual and team cognition and its application to cyber and intelligence analysis, remotely-piloted aircraft systems, human-robot teaming, healthcare systems, and emergency response systems. Dr. Cooke specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition.
Tell us about your current ongoing projects, especially the synthetic teammate and human-autonomous vehicle teaming projects.
I am excited about both projects, as well as another one that is upcoming. I am involved in the synthetic teammate project, a large ongoing project started about 15 years ago, with the Air Force Research Laboratory (AFRL; Chris Myers, Jerry Ball, and others), former postdocs Jamie Gorman (Georgia Tech) and Nathan McNeese (Clemson), and current postdoc Mustafa Demir. Sandia Research Corporation (Steve Shope and Paul Jorgenson) is also involved. It is exciting to be working with so many bright, energetic, and dedicated people. In this project AFRL is developing a synthetic agent capable of serving as a full-fledged teammate that works with two human teammates to control a Remotely Piloted Aircraft System and take reconnaissance photos of ground targets. The team (including the synthetic pilot) interacts via text chat.
The USAF (United States Air Force) would like to eventually use synthetic agents as teammates for large-scale team training exercises. Ultimately, an individual should be able to have a team training experience over the internet without having to involve any other humans to serve as white forces for someone else’s training. In addition, our laboratory is interested in learning about human-autonomy teaming and, in particular, the importance of coordination. In other studies we have found an interesting curvilinear relation between coordination stability and performance, wherein the best performance is associated with mid-level coordination stability (not too rigid, not too unpredictable). This project is funded by the Office of Naval Research.
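The inverted-U relation described above can be sketched with a toy quadratic model. This is purely illustrative: the functional form, scale, and numbers are assumptions for exposition, not the study's actual data or model.

```python
import numpy as np

# Hypothetical illustration of a curvilinear (inverted-U) relation
# between coordination stability and team performance: performance
# peaks at mid-level stability, and falls off toward either extreme
# (too rigid or too unpredictable). All values here are made up.
stability = np.linspace(0.0, 1.0, 11)            # 0 = unpredictable, 1 = rigid
performance = 1.0 - 4.0 * (stability - 0.5) ** 2  # quadratic peaking at 0.5

best = stability[np.argmax(performance)]
print(f"Best performance at stability = {best:.1f}")  # prints 0.5 (mid-level)
```

The key point of the sketch is simply that the relation is non-monotonic: pushing coordination stability ever higher (or lower) does not keep improving performance.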
We are also conducting another project with Subbarao Kambhampati (“Rao”) at ASU. In this project our team informs the robot planning algorithms developed by Rao’s team, using a human dyad working in a Minecraft setting. One person is inside a Minecraft structure representing a collapsed building; the other has a limited view of the Minecraft environment but does have a map, which is now inaccurate because of the collapse. The two humans work together to identify and mark on the map the locations of victims. We are paying careful attention not only to the variables that affect the dyads’ interactions, but also to the features of communication that are tied to higher levels of performance. This project is also funded by the Office of Naval Research.
Finally, I am very excited to be directing a new center at ASU called the Center for Human, Artificial Intelligence, and Robot Teaming, or CHART. I am working with Spring Berman, a swarm roboticist, to develop a testbed in which to conduct studies of driverless cars interacting on the road with human-driven cars. Dr. Berman has a large floor mat that depicts a roadway, with small robots that obey traffic signals and can avoid colliding with each other. To these we are adding robots that are remotely controlled by humans viewing imagery from the robots’ cameras. In this testbed we are excited to test all kinds of scenarios involving human-autonomous vehicle interactions.
You have co-authored the book "Stories of Modern Technology Failures and Cognitive Engineering Successes" with Dr. Frank Durso. What are some of the key points on human-autonomy interactions that you would like to share with our readers?
Too often automation is developed without consideration for the user. It is often assumed that automation/autonomy will not require human intervention, but that is far from the truth. Humans are required to interact with autonomy at some level.
A lack of good Human Systems Integration from the beginning can cause unexpected consequences and brittleness in the system. The recent mistaken incoming missile message sent to Hawaii’s general public provides a great example of the potential effects of introducing a new interface with minimal understanding of the human task or preparation of the general public.
Can you paint us a scenario of humans and synthetic teammates working together in 50 years?
I am currently reading Four Futures by Peter Frase, which paints four different scenarios of humans and AI in the future. Two of the scenarios are dark, with robots in control, and two are more optimistic. I tend toward the optimistic scenarios, but realize that such an outcome would be the result of thoughtful application of AI, coupled with checks to keep nefarious actors at bay. Robots and AI have already taken on, and will continue to take on, jobs that are “dull, dirty, or dangerous” for humans. Humans need to retrain for other jobs (many of which do not yet exist), and teams of humans, AI, and robots need to be more thoughtfully composed based on the capabilities of each. I believe that this is the path toward a more positive outcome.