Human-Autonomy Sciences

We are psychological scientists and practitioners who are excited about the future of autonomy. This blog will cover recent developments in the human-autonomy sciences, with a focus on the social science angle.

Kitchen robots potpourri


The World's First Home Robotic Chef Can Cook Over 100 Meals

This year, Moley, the first robotic kitchen, will be launched by a London-based company that has unlimited access to chefs and their recipes worldwide. It is expected to cook and clean up after itself, but it does not completely eliminate the need for human supervision.

The machine works by having you first specify the number of portions, type of cuisine, dietary restrictions, calorie count, desired ingredients, cooking method, chef, and so on from the recipe library. Then, with a single tap, you choose your recipe, place the individual pre-packaged containers of measured, washed, and cut ingredients (which you can order through Moley) on designated spots, and press “start” for the cooking process to begin.

Since the Moley kitchen can essentially cook any recipe downloadable from the internet, the food-robotics-AI startup plans to include a feature that lets consumers and professional chefs share and sell their own recipes via the “digital style library of recipes” database.

However, there are safety and quality concerns about having a robot chef. One concern raised in the article: what if the machine chops aimlessly and the owner is left without a meal? Further, cooking involves the chef's personal touch and an engagement of all five senses, which cannot be realized by a robot.

Our Robot Overlords Are Now Delivering Pizza, And Cooking It On The Go

To solve the problem of cold pizzas, Zume Pizza, where robots and AI run the show, was started in Mountain View, California.

A customer places an order on the app. A team of mostly robots assembles the 14-inch pies, each of which gets loaded par-baked — or partially baked.



There is only one human worker in the delivery truck, who drives, slices, and delivers to your doorstep. The human does not have to think about when to turn the ovens on and off or what route to take; these are all decided by AI. A few minutes before arriving at the scheduled delivery destination, the AI starts the oven to finish cooking the order.

Augmented reality kitchens keep novice chefs on track

Japan is not far behind either with regards to the use of robots in cooking. Scientists at Kyoto Sangyo University have developed a kitchen with ceiling-mounted cameras and projectors that overlay cooking instructions on the ingredients. This lets cooks concentrate on their task (e.g., slicing) without having to look up at a recipe book or a screen.

Suppose you want to fillet a fish. Lay it down on a chopping board and the cameras will detect its outline and orientation so the projectors can overlay a virtual knife on the fish, with a line indicating where to cut. Speech bubbles even appear to sprout from the fish's mouth, guiding you through each step.

The kitchen also comes equipped with a small robot assistant named Phyno that sits on the countertop. When its cameras detect that the chef has stopped touching the ingredients, Phyno asks whether that particular step in the recipe is complete. Users can answer “yes” to move on to the next step or “no” to have the robot repeat the instructions.


Flippy, the hamburger cooking robot, gets its first restaurant gig

Caliburger, a fast food chain based in California, is using Flippy to flip hamburgers. Flippy is an industrial robotic arm with a classic spinning spatula.

The upgrade from a clasping claw to a classic spinning spatula took a lot of programming, but it was necessary. After all, you need the easiest-to-clean surface when dealing with raw meat; you really don't want that stuff getting caught up in a device's various nooks and crannies.

The developers of Flippy are working on a number of new features for the robot, including advanced computer imaging and AI that will help it adapt over time to things like a changing seasonal menu.

Robots Cooked and Served My Dinner

In the Chinese city of Kunshan, a small team of robot cooks and waiters serves dumplings and fried rice at Tian Waike Restaurant.

“A robot can work for seven to eight years and more than ten hours a day,” said Song Yugang, the owner of the company that designed the robots. “Waiters and waitresses work for eight hours every day, nine at most. You need to provide accommodations and meals. But our robots consume three yuan [50 cents, or 30 pence] worth of electricity a day at most.”

AI potpourri: Passenger pickup and suicide prevention

GM just revealed a fully autonomous electric car — and it doesn't have a steering wheel

GM has announced its fourth generation of self-driving vehicles. Note that there is not a single mention of what the passenger is supposed to do in the event that the self-driving algorithm fails!

No driver. No pedals. No steering wheel. Just seats and screens and doors that can close themselves. That’s what riders will see when they get into one of General Motors’ Cruise self-driving electric vehicles, scheduled to hit the road in 2019.

A prominent social scientist, Dr. Peter Hancock, aptly stated the following:

Today’s new car, a partial robot itself built by robots in an automated factory, may for a time be content to sit in a parking spot and wait for its user’s call. But if people aren’t careful, its fully autonomous cousin may one day drive the joy of driving, or even an entire joy of living, out of human experience.


Would You Send Your Kids To School On A Self-Driving School Bus?

A Seattle-based design firm is working on a six-passenger vehicle that picks up and drops off every child at their front door, verifying their identity with facial recognition.

The vehicle’s AI changes its route based on traffic or other roadblocks, even rejiggering the order in which it drops kids off if, for instance, their parent is running late. And during the rest of the day, each Hannah vehicle can be used to deliver packages, food, or donations, earning school districts extra cash.

But questions remain. Will parents ever trust an autonomous vehicle enough to allow their children to ride in one with no human supervision? And will autonomous technology ever be advanced enough to supervise children, much less cheap enough for school districts to afford? Hannah is a kind of thought experiment: If autonomy is coming to every street, what does getting to school look like?

The researchers at the design firm are also investigating other issues, such as how AI will address bullying on buses, as well as how the bus could bring in extra money for the school by making food deliveries for a service like Uber Eats.

Canada will track suicide risk through social media with AI

The Canadian government is partnering with an AI firm to predict rises in regional suicide risk. Facebook has also recently launched initiatives to prevent suicides by analyzing posts that suggest suicidal thoughts.

The AI will analyze posts from 160,000 social media accounts and will look for suicide trends.

The AI company aims to be able to predict which areas of Canada might see an increase in suicidal behavior, which according to the contract document includes “ideation (i.e., thoughts), behaviors (i.e., suicide attempts, self-harm, suicide) and communications (i.e., suicidal threats, plans).” With that knowledge, the Canadian government could make sure more mental health resources are in the right places when needed.

Public views about AI and the Future

The Gallup organization has just released a survey of 3298 American adults about their thoughts on AI and the future.  The interactive website is filled with many great visualizations.  

The key point seems to be that, contrary to popular notions of a fear of AI, most Americans (77%) have a positive view of AI in the next decade. Interestingly, this is despite most Americans believing that AI will have a negative impact on their own employment and the economy (73% believe AI will eliminate jobs).

The other noteworthy point is that optimism about AI, while high, is expected to decrease (the difference between current and future optimism). But this varies by sub-group: the largest future-current difference is among middle-aged folks whose livelihoods may be affected (green), while older folks seem unchanged (blue, orange):

Image source:

Changing views of self-driving cars...

I just saw a funny juxtaposition of headlines regarding self-driving cars. Of all autonomous systems, self-driving cars are probably the easiest for the lay public to understand.

The first headline, from a Reuters/Ipsos opinion poll:  Most Americans wary of self-driving cars.  

While 27 percent of respondents said they would feel comfortable riding in a self-driving car, poll data indicated that most people were far more trusting of humans than robots and artificial intelligence under a variety of scenarios.

The results are more interesting when viewed by age group. It makes intuitive sense that millennials are the most comfortable and baby boomers the least. Millennials are less interested in driving and, because of greater exposure to autonomous technology, may be more comfortable and trusting than other age groups. It should be noted, however, that greater exposure does not imply a more accurate view; their view of the technology could be distorted or unrealistic.

Image source:

The next headline: More Americans Willing To Ride In Self-Driving Cars. The results of a survey from the American Automobile Association (AAA) confirm the Reuters findings: millennials and males are more willing to buy a self-driving car. The headline refers to a year-over-year decrease (78% to 63%) in the number of people who said they were afraid to ride in a self-driving car.

The crux of these observations seems to be trust:

AAA’s survey also offered insights as to why some motorists are reluctant to purchase advanced vehicle technology. Most trust their driving skills more than the technology (73 percent) — despite the fact that research shows more than 90 percent of crashes involve human error. Men in particular, are confident in their driving abilities with 8 in 10 considering their driving skills better than average.

AI potpourri: Reading, investing, diagnosis, and retail

A.I. Has Arrived in Investing. Humans Are Still Dominating

AI is taking a bigger role in investing. Large fund management companies like Fidelity and Vanguard say they use AI for a range of purposes.

An exchange-traded fund introduced in October uses A.I. algorithms to choose long-term stock holdings.

It is too early to say whether the E.T.F., A.I. Powered Equity, will be a trendsetter or merely a curiosity. Artificial intelligence continues to become more sophisticated and complex, but so do the markets. That leaves technology and investment authorities debating the role of A.I. in managing portfolios. Some say it will only ever be a tool, valuable but subordinate to its flesh-and-blood masters, while others envision it taking control and making decisions for many funds.

AI has an edge over the natural kind because of the inherent emotional and psychological weaknesses that encumber human reasoning.

While some people see huge potential in AI as an investment advisor, others think it cannot be relied on for heavy cognitive decision-making. The following is a quote from a portfolio manager.

“I’m a fan of automating everything possible, but having a human being push the last button is still a good thing. Hopefully, we all get better and better and smarter and smarter, but there’s something comforting about having an informed human being with sound judgment at the end of the process.”

AI models beat humans at reading comprehension, but they’ve still got a ways to go

AI models designed by Alibaba and Microsoft have surpassed humans in reading comprehension, which demonstrates that AI has the potential to understand and process the meaning of words with the same fluidity as humans. But there is still a long way to go.  Specifically, adding meaningless text into the passages, which a human would easily ignore, tended to confuse the AI.

“Technically it’s an accomplishment, but it’s not like we have to begin worshiping our robot overlords,” said Ernest Davis, a New York University professor of computer science and longtime AI researcher.

“When you read a passage, it doesn’t come out of the clear blue sky: It draws on a lot of what you know about the world,” Davis said. “We really need to deal much more deeply with the problem of extracting the meaning of a text in a rich sense. That problem is still not solved.”



5 ways the future of retail is already here

The retail industry is also starting to rely on AI to shape the way people shop. 

  1. Digital-price displays at grocery stores (e.g., Kroger) now allow retailers to make changes to their prices in one go. 
  2. Digital mirrors are used by retailers such as Sephora and Neiman Marcus to allow shoppers to get feedback on makeup and other items.
  3. Robotic shopping carts can now import your shopping list, guide you to each item in the store, help you check out, follow you to your car for unloading groceries, and find their way back to a docking station.
  4. Technology is being used by companies like Stitch Fix and American Eagle to recommend outfits to their customers. 
  5. Robots are being used in stores to keep shelves well stocked to help shoppers find what they are looking for.  

Microsoft and Adaptive Biotechnologies announce partnership using AI to decode immune system; diagnose, treat disease

AI and the cloud have the power to transform healthcare – improving outcomes, providing better access and lowering costs. The Microsoft Healthcare NExT initiative was launched last year to maximize the ability of artificial intelligence and cloud computing to accelerate innovation in the healthcare industry, advance science through technology and turn the lifesaving potential of next discoveries into reality.

Each T-cell has a corresponding surface protein called a T-cell receptor (TCR), which has a genetic code that targets a specific signal of disease, or antigen. Mapping TCRs to antigens is a massive challenge, requiring very deep AI technology and machine learning capabilities coupled with emerging research and techniques in computational biology applied to genomics and immunosequencing.

The result would be a true breakthrough: sequencing the immune system can reveal which diseases the body is currently fighting or has ever fought.

Dr. Nancy Cooke: Human-Autonomy Teaming, Synthetic Teammates, and the Future

In our fifth post in a new series, we interview a thought leader in human-systems engineering, Dr. Nancy Cooke.

About Dr. Nancy Cooke


Nancy J. Cooke is a professor of Human Systems Engineering at Arizona State University and is Science Director of the Cognitive Engineering Research Institute in Mesa, AZ. She also directs ASU’s Center for Human, Artificial Intelligence, and Robot Teaming and the Advanced Distributed Learning Partnership Lab.

She received her PhD in Cognitive Psychology from New Mexico State University in 1987.  Dr. Cooke is currently Past President of the Human Factors and Ergonomics Society, chaired the National Academies Board on Human Systems Integration from 2012-2016, and served on the US Air Force Scientific Advisory board from 2008-2012.  She is a member of the National Academies of Science, Engineering, and Medicine Committees on High-Performance Bolting Technology for Offshore Oil and Natural Gas Operations and the Decadal Survey of Social and Behavioral Sciences and Applications to National Security.

In 2014 Dr. Cooke received the Human Factors and Ergonomics Society’s Arnold M. Small President’s Distinguished Service Award. She is a fellow of the Human Factors and Ergonomics Society, the American Psychological Association, the Association for Psychological Science, and The International Ergonomics Association.  Dr. Cooke was designated a National Associate of the National Research Council of the National Academies of Sciences, Engineering, and Medicine in 2016.

Dr. Cooke’s research interests include the study of individual and team cognition and its application to cyber and intelligence analysis, remotely-piloted aircraft systems, human-robot teaming, healthcare systems, and emergency response systems. Dr. Cooke specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition.

Tell us about your current ongoing projects, especially the synthetic teammate and human-autonomous vehicle teaming projects.

I am excited about both projects, as well as another one that is upcoming.  I am involved in the synthetic teammate project, a large ongoing project started about 15 years ago, with the Air Force Research Lab (AFRL; Chris Myers, Jerry Ball and others) and former post docs, Jamie Gorman (Georgia Tech) and Nathan McNeese (Clemson) and current post doc, Mustafa Demir.  Sandia Research Corporation (Steve Shope and Paul Jorgenson) is also involved.   It is exciting to be working with so many bright, energetic, and dedicated people.  In this project AFRL is developing a synthetic agent capable of serving as a full-fledged teammate that works with two human teammates to control a Remotely Piloted Aircraft System and take reconnaissance photos of ground targets.  The team (including the synthetic pilot) interacts via text chat. 

The USAF (United States Air Force) would like to eventually use synthetic agents as teammates for large scale team training exercises.  Ultimately an individual should be able to have a team training experience over the internet without having to involve any other humans to serve as white forces for someone else’s training.  In addition, our laboratory is interested in learning about human-autonomy teaming, and in particular, the importance of coordination.  In other studies we have found an interesting curvilinear relation relating coordination stability to performance, wherein the best performance is associated with mid-level coordination stability (not too rigid or unpredictable).  This project is funded by the Office of Naval Research.

Image source:

We are also conducting another project with Subbarao Kambhampati (“Rao”) at ASU. In this project, our team informs the robot planning algorithms of Rao's team using a human dyad working in a Minecraft setting. One person is inside the Minecraft structure representing a collapsed building; the other has a limited view of the Minecraft environment but does have a map, which is now inaccurate with respect to the collapsed environment. The two humans work together to identify victims and mark their locations on the map. We are paying careful attention not only to the variables that affect the dyad's interactions, but also to features of communication that are tied to higher levels of performance. This project is also funded by the Office of Naval Research.

Finally, I am very excited to be directing a new center at ASU called the Center for Human, Artificial Intelligence, and Robot Teaming or CHART.  I am working with Spring Berman, a swarm roboticist, to develop a testbed in which to conduct studies of driverless cars interacting on the road with human-driven cars.  Dr. Berman has a large floor mat that depicts a roadway with small robots that obey traffic signals and can avoid colliding with each other.  We are adding to that robots that are remotely controlled by humans as they look at imagery from the robot’s camera.  In this testbed we are excited to test all kinds of scenarios involving human-autonomous vehicle interactions.

You have co-authored the book "Stories of Modern Technology Failures and Cognitive Engineering Successes" with Dr. Frank Durso. What are some of the key points on human-autonomy interactions that you would like to share with our readers?

Too often automation is developed without consideration for the user.  It is often thought that automation/autonomy will not require human intervention, but that is far from the truth.  Humans are required to interact with autonomy at some level.  

A lack of good Human Systems Integration from the beginning can cause unexpected consequences and brittleness in the system.  The recent mistaken incoming missile message sent to Hawaii’s general public provides a great example of the potential effects of introducing a new interface with minimal understanding of the human task or preparation of the general public.

Can you paint us a scenario of humans and synthetic teammates working together in 50 years?

I am currently reading Four Futures by Peter Frase, which paints four different scenarios of humans and AI in the future. Two of the scenarios are dark, with robots in control, and two are more optimistic. I tend toward the optimistic scenarios, but realize that such an outcome would be the result of thoughtful application of AI, coupled with checks to keep nefarious actors at bay. Robots and AI have already taken on, and will continue to take on, jobs that are “dull, dirty, or dangerous” for humans. Humans need to retrain for other jobs (many that do not exist now), and teams of humans, AI, and robots need to be more thoughtfully composed based on the capabilities of each. I believe this is the path toward a more positive outcome.

Interview: Dr. Frank Durso

In our fourth post in a new series, we interview a leading social science researcher and leader in aviation psychology, Dr. Frank Durso. Frank was also my academic advisor (a decade ago) and it was a pleasure to chat with him about his thoughts about the impact and future of automation in aviation.

About Dr. Frank Durso

Francis T. (Frank) Durso is Professor and Chair of the School of Psychology at the Georgia Institute of Technology where he directs the Cognitive Ergonomics Lab.  Frank received his Ph.D. from SUNY at Stony Brook and his B.S. from Carnegie-Mellon University.    While at the University of Oklahoma, he was a Regents Research recipient and founding director of their Human-Technology Interaction Center.


Frank is Past-President of the Human Factors and Ergonomics Society (HFES), the Southwestern Psychological Association, the American Psychological Association’s (APA) Division of Engineering Psychology, and founding President of the Oklahoma Psychological Society.  He is a sitting member of the National Research Council’s Board of Human Systems Integration.  He has served as advisor and panelist for the Transportation Research Board, the National Science Foundation, the APA, the Army Research Lab, and the Government Accountability Office. 

Frank was associate editor of the Journal of Experimental Psychology: Applied, senior editor of Wiley’s Handbook of Applied Cognition, co-editor of the APA Handbook of Human Systems Integration, and founding editor of the HFES monograph series entitled User’s Guides to Methods in Human Factors and Ergonomics. He has served on several editorial boards, including Human Factors. He co-authored Stories of Modern Technology Failures and Cognitive Engineering Successes. He is a fellow of HFES, APA, the Association for Psychological Science, and the Psychonomic Society. He was awarded the Franklin V. Taylor Award for outstanding achievements in applied experimental and engineering psychology from APA.

His research has been funded by the Federal Aviation Administration, the National Science Foundation, and the Center for Disease Control as well as various industries.  Most of Frank’s research has focused on cognition in dynamic environments, especially in transportation (primarily air traffic control) and healthcare.   He is a co-developer of the Pathfinder scaling algorithm, the SPAM method of assessing situation awareness, and the Threat-Strategy Interview procedure. His current research interests focus on cognitive factors underlying situation understanding and strategy selection.

For part of your career, you have been involved in air traffic control and have seen the use of automation evolve from paper-based flight strips to NexGen automation.  In your opinion, what is the biggest upcoming automation-related challenge in this domain?

As you know, people, including big thinkers like Paul Fitts in 1951, have given thought to how to divide up a task between a machine and a person. While we people haven’t changed much, our silicon helpers have. Quite a bit. They’ve progressed to the point that autonomy, and the issues that accompany it, are now both very real. (I’ll get back to autonomy in your other question.) Short of just letting the machine do it, or just doing it yourself, the puzzle of how to divvy up a task remains, although the answer to the puzzle changes.

When I first started doing research for the FAA in the early 90s, there was talk of automation soon to be available that would detect conflicts and suggest ways to resolve them, leaving the controller to choose among recommendations. A deployed version of this was URET, an aid the controller could use if he or she wanted. In one mode, controllers were given a list-like representation of flight data, much like the paper strips; in another, a graphic representation of flight paths. Either mode depicted conflicts up to 20 minutes out.

I do worry that this new level of automation can take much of the agency away from the controller

When I toured facilities back then, I remember finding a controller who was using the aid when a level-red conflict appeared. I waited for him to make changes to resolve the conflict. And waited. He never did anything to either plane in conflict, and yet the conflict was resolved. When I asked him about it, he told me, “Things will probably change before I need to worry about it.” He gave me two insights that stayed with me. One was that in dynamic environments, things change, and the more dynamic the environment, the more likely it is that what you (or your electronic aid) expect and plan for are mere possibilities, not certainties. This influenced much of my subsequent thinking about situation awareness, what it was, and how to measure it.

Next Generation Air Transport System (NextGen):

I also realized that day that I would never understand anything unless I understood the strategies that people used.  I didn’t do anything with that realization back then, thinking it would be like trying to nail jello to a wall.  I’m fascinated by strategy research today, but then I was afraid the jello and my career in aviation human factors would both be a mess lying at my feet.

Our big worries with automation that does the thinking for us were things like: Will controllers use the technology? Today we’d call that technology acceptance. Will the smart automation change the job from controlling air traffic to managing it? Of course, when people are put into a situation where they merely observe while the automation does the work, there’s the risk that the human will not truly be engaged and situation awareness will suffer. That’s a real concern, especially if you ever expect the human to take over the task again.

Now there are initiatives and technologies in the FAA that eliminate or at least reduce conflicts by optimizing the aircraft sequence, leaving the controller the task of getting the aircraft to fall in line with that optimization. Imagine that the computer optimizes the spacing and timing of planes landing at an airport. The planes are not, of course, naturally in this optimized pattern, so the computer presents “plane bubbles” to the controller. Those plane bubbles are optimized. All the controller has to do is get the plane into that bubble, and conflicts will be reduced and landings optimized. This notion of having the computer do the heavy cognitive lifting of solving conflicts and optimization and then presenting those “targets” to the controller can be used in a variety of circumstances. Now the “controller” is not even a manager, but in some ways the controller is being kept in the game and should therefore show pretty good situation awareness.

Now I worry that situation awareness will be very local, tied to a specific, perhaps meaningless, piece of the overall picture. This time, global SA may be a concern: controllers may have little or no understanding of the big picture of all those planes landing, even if they have good SA of getting a particular plane into its queue.

For some reason, I no longer worry about technology acceptance as I did in 1997.  Twenty years later, I do worry that this new level of automation can take much of the agency away from the controller—so much of what makes the job interesting and fun.  Retention of controllers might suffer and those that stay will be less satisfied with their work, which produces other job consequences.

As an end to this answer, I note that much has changed in the last quarter century, but we still seem to be following a rather static list of machines-do-this and people-do-that. Instead, I think the industry needs to adopt the adaptive allocation of tasks that human factors professionals have studied. The question is not really when the computer should sequence flights, but when that responsibility should be handed over to the human. Or when the computer, detecting a tired controller perhaps, should wrest responsibility for separation from him or her.

You are on the Board of Human-Systems Integration for the National Academies of Sciences and Engineering. What is the purpose of the Board, and what is your role?

how they interact within and with complex systems ...must be addressed if we are to solve today’s societal challenges

The National Academies of Sciences, Engineering, and Medicine do their operational work through seven programs governed by the rules of the National Research Council. One of the programs, the Division of Behavioral and Social Sciences and Education, contains the Board on Human-Systems Integration, or BOHSI. Established under President Lincoln, the Academies are not a government agency. A consequence of that for the Boards is that financing comes through sponsors.

The original Committee on Human Factors was founded in 1980 by the Army, Navy, and Air Force. Since then, BOHSI has been sponsored by a myriad of agencies including NASA, NIOSH, the FAA, and Veterans Health. I’m proud to say APA Division 21 and the Human Factors and Ergonomics Society, two organizations I’ve led in the past, are also sponsors.

BOHSI’s mandate is to provide an independent voice on the HSI issues that interest the nation. We provide theoretical and methodological perspectives on people-organization-technology-environment systems. The board itself currently comprises 16 members, including National Academy members, academics, business leaders, and industry professionals. They were invited from a usually (very) long list of nominations. A visit to the webpage will show the caliber of the members.

The issues BOHSI is asked to address are myriad.  Decision makers, leaders, and scholars from other disciplines are becoming increasingly aware of the fact that people and how they interact within and with complex systems is a critical feature that must be addressed if we are to solve today’s societal challenges.  We’ve looked at remote controlled aviation systems, at self-escape from mining, and how to establish safety cultures in academic labs, to mention a few.

BOHSI addresses these problems in a variety of ways. The most extensive efforts result in reports like those currently on the webpage: Integrating Social and Behavioral Science within the Weather Enterprise; Personnel Selection in the Pattern Evidence Domain of Forensic Science; and CMV Driver Fatigue, Long Term Health, and Highway Safety. These reports are generated by committees appointed by BOHSI. A member or two of the board often sits on these working committees, but the majority of each committee is made up of national experts on the specific topic, representing various scientific, policy, and operational perspectives. The hallmark of these reports is that they provide an independent assessment and recommendations for the sponsor and the nation.

As a social scientist studying autonomy, what do you see as the biggest unresolved issue?

As technology advances at an accelerating rate, real autonomy becomes a real and exciting possibility.  The issues that accompany truly independent automated agents are exciting as well.  I think there are a number of questions of interest, and there are lots of smart people looking into them.  For example, there’s the critical question of trust.  Why did Luke trust R2-D2?  (Did R2 trust Luke?)  And technology acceptance continues to be with us: Why will elderly folk allow a robot to assist with this task, but not that one?

The issues that accompany truly independent automated agents are exciting as well....Why did Luke trust R2-D2?  (Did R2 trust Luke?)

But I think the biggest issue with autonomy is getting a handle on when responsibility, control, or both switch from one member of the technology-person pair to the other.  How can the autonomous car hand over control to the driver?  Will the driver have the situation awareness (SA) to receive it?  How does this handshaking occur if each system does not have an understanding of the state of the other?  We don’t really understand the answers to these questions for two humans, let alone for a human and an automaton.

There are indeed ways we can inform the human of the automation’s state, but we can also inform the automaton of the human’s state.  Advances in machine learning allow the automaton to learn how the human prefers to interact with it.  Advances in augmented cognition can allow us to feed physiological information about the operator to the automaton.  If the car knew the driver was stressed (cortisol levels) or tired (eye closures), it might decide not to hand over control.
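As a rough sketch of that idea (the signal names and thresholds below are invented for illustration, not a real automotive API), a context-sensitive handover check might look like:

```python
# Hypothetical sketch: should the automation hand control to the driver?
# stress_level and eye_closure_ratio stand in for physiological signals
# (e.g., a cortisol proxy and a PERCLOS-style eyelid-closure measure).

def safe_to_hand_over(stress_level, eye_closure_ratio,
                      stress_threshold=0.7, drowsy_threshold=0.3):
    """Return True only if the driver appears alert enough to take control."""
    too_stressed = stress_level > stress_threshold
    too_drowsy = eye_closure_ratio > drowsy_threshold
    return not (too_stressed or too_drowsy)

print(safe_to_hand_over(0.2, 0.1))  # True: driver looks alert
print(safe_to_hand_over(0.9, 0.1))  # False: stressed, automation keeps control
```

The point of the sketch is the decision structure, not the thresholds: the automation only initiates the handshake when its model of the human's state says the human can receive it.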

I should mention here that this kind of separation of responsibilities between machine and human is quite different from the static lists I discussed in my first answer regarding the FAA technology.  There, the computer had certain tasks and the controller had others; here, any particular task can belong to either agent, depending on the situation.

I think future work has to really investigate the system properties of the human and technology, and not (just) each alone.

David Bruemmer: Thoughts on the Future of AI & Robotics

In our third post in a new series, we interview a leader in robotics technology, Mr. David Bruemmer. We talk to David about the future of autonomous robotic technologies and the ethics associated with automation use. 

About David Bruemmer


Mr. Bruemmer is Founder and CEO of Adaptive Motion Group which provides Smart Mobility solutions based on accurate positioning and autonomous vehicles. Previously, Mr. Bruemmer co-founded 5D Robotics, supplying innovative solutions for a variety of automotive, industrial and military applications.

Mr. Bruemmer has led large scale robotics programs for the Army and Navy, the Department of Energy, and the Defense Advanced Research Projects Agency. He has patented robotic technologies for landmine detection, urban search and rescue, decontamination of radioactive environments, air and ground teaming, facility security and a variety of autonomous mapping solutions.

Mr. Bruemmer has authored over 60 publications and has been awarded 20 patents in robotics and positioning. He recently won the South by Southwest Pitch competition and is a recipient of the R&D 100 Award and the Stoel Reeves Innovation award. Mr. Bruemmer led robotics research at the Idaho National Lab for a diverse, multi-million dollar R&D portfolio. Between 1999 and 2000, Mr. Bruemmer served as a consultant to the Defense Advanced Research Projects Agency (DARPA), where he worked to coordinate development of autonomous robotics technologies across several offices and programs.

You have been working on developing autonomous robotics technologies for a long time now, during your tenure with Idaho National Lab and now as the CEO of Adaptive Motion Group. How has the field evolved and how excited are you about the future?

I think there is a large amount of marketing and spin especially in the autonomous driving arena

There seem to be waves of optimism about AI followed by disappointment as somewhat inflated goals meet the realities of trying to deploy robotics and AI. Machine learning has come a long way, but the growth has been linear, and I really do not feel that deep learning is necessarily a “fundamentally new” machine learning tool.

I think there is a large amount of marketing and spin especially in the autonomous driving arena. I have been sad to see that in the past several years, some of the new cadre of self-driving companies seem to have overlooked many of the hard lessons we learned in the military and energy sectors regarding the perils of “full autonomy” and the need for what I call “context sensitive shared control”.

Reliability continues to be the hard nut to crack, and I believe that for a significant shift in the reliability of overall automation we need to focus more energy on positioning. Positioning is sometimes considered a “solved problem,” as various programs and projects have offered lidar mapping, RTK GPS, and camera-based localization. These work in various constrained circumstances but often fail outside the bounds where they were intended to operate.

I think that even after the past twenty years of progress we need a more flexible, resilient means of ensuring accurate positioning. I would also like to point out that machine learning and AI are not a cure-all. If they were, we wouldn’t have the increasing death toll on our roads or the worsening congestion. When I look at AI I see a great deal of potential, but most of it is still unrealized. This is either cause for enthusiasm or pessimism, depending on your perspective.

There are quite a few unfortunate events associated with automation use. For example, there is the story of a family who got lost in Death Valley due to overreliance on their GPS. Do you think of human-machine interaction issues during design?

Yes I do. I think that many are overlooking the real issue: that our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but also because it hurts system performance. Some applications and environments allow for a very high degree of autonomy. However, there are many other tasks and environments where we need to give up on the notion of fully removing the human from the driver’s seat or the processing loop, and instead focus on the rich opportunity for context-sensitive shared control, where the human and machine work as teammates, balancing the task allocation as needed.

our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but that it also hurts system performance

Part of the problem is that those who make the products want you to believe their system is perfect. It’s an ego thing on the part of the developers and a marketing problem to boot. For the teamwork between human and robot to be effective, both human and machine need an accurate understanding of each other’s limitations.

Quite frankly, the goal of many AI companies is to make you overlook these limits. So supposedly GPS works all the time, and we provide the user no sense of “confidence,” or what in psychology we call a “feeling of knowing.” This breeds an unfortunate slew of problems, from getting horribly lost in urban jungles to getting lost in real ones.

If we were more honest about the limitations and we put more energy into communicating the need for help and more data then things could work a whole lot better. But we almost never design the system to acknowledge its own limitations.

There is some ethical debate about robots or AI making some human occupations obsolete (e.g., long-haul trucking, medical diagnosis). How does ethics factor into your decisions when developing new technologies?

the role of government is to protect the right of every human to be prioritized over machines and over profits

The great thing about my emphasis on shared control is that I never need to base my business model or my technology on the idea of removing the human or eliminating human labor.

Having said that I do of course believe that better AI and robotics means increased safety and efficiency which in turn can lead to reduced human labor. I think this is a good thing as long as it is coupled with a society that cares for the individual.

Corporations should not be permitted to act with impunity and I believe the role of government is to protect the right of every human to be prioritized over machines and over profits. This mindset has less to do with robotics and more to do with politics so I will leave off there. I do always try to emphasize that robotics should not ever be about the robots, but rather about the people they work with.

The Year of the Algorithm. AI Potpourri part 2:

“We have to grade indecent images for different sentencing, and that has to be done by human beings right now, but machine learning takes that away from humans,” he said.

“You can imagine that doing that year-on-year is very disturbing.”

But as the next story shows, these AI tools are not advanced enough to replace human content moderators.

[WSJ] The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook

Humans, still, are the first line of defense. Facebook, YouTube and other companies are racing to develop algorithms and artificial-intelligence tools, but much of that technology is years away from replacing people, says Eric Gilbert, a computer scientist at the University of Michigan.

Earlier this month, after a public outcry over disturbing and potentially exploitative YouTube content involving children, CEO Susan Wojcicki said the company would increase its number of human moderators to more than 10,000 in 2018, in an attempt to rein in unsavory content on the web’s biggest video platform.

But guidelines and screenshots obtained by BuzzFeed News, as well as interviews with 10 current and former “raters” — contract workers who train YouTube’s search algorithms — offer insight into the flaws in YouTube’s system.

But algorithms, unlike humans, are susceptible to a specific type of problem called an “adversarial example.” These are specially designed optical illusions that fool computers into doing things like mistake a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.

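The panda/gibbon trick can be shown on a toy model. This is only an illustrative sketch, not a real attack on a real image classifier: a hand-made linear "classifier" with invented weights, nudged FGSM-style (a small step against the sign of the score gradient) until its label flips, even though each feature barely changes.

```python
# Toy linear "classifier": positive score -> "panda", otherwise "gibbon".
# Weights and inputs are invented for illustration.
W = [1.0, -1.0, 1.0]

def score(x):
    return sum(wi * xi for wi, xi in zip(W, x))

def classify(x):
    return "panda" if score(x) > 0 else "gibbon"

def sign(v):
    return 1.0 if v > 0 else -1.0

x = [0.6, 0.4, 0.1]  # original input, scores 0.3 -> "panda"

# Adversarial step: move each feature a little against the score's
# gradient (which for a linear model is just W itself).
eps = 0.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, W)]

print(classify(x))      # panda
print(classify(x_adv))  # gibbon: small coordinated nudges flip the label
```

The same principle scales up: in a deep network the per-pixel changes can be too small to see, yet the coordinated direction of the perturbation flips the prediction.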
From the ridiculous to the chilling, algorithmic bias — social prejudices embedded in the AIs that play an increasingly large role in society — has been exposed for years. But it seems in 2017 we reached a tipping point in public awareness.

The New York City Council recently passed what may be the US’ first AI transparency bill, requiring government bodies to make public the algorithms behind their decision making. Researchers have launched new institutes to study AI prejudice (along with the ACLU), while Cathy O’Neil, author of Weapons of Math Destruction, launched an algorithmic auditing consultancy called ORCAA.

The Year of the Algorithm. AI potpourri, part I: Astronomer, Factory Worker, Musician, and more

2017 seems to have been a watershed year for the use and application of AI and algorithms.  This is part 1 of a two-part post highlighting the use (and possible regulation) of AI.

[NYTimes] An 8th Planet Is Found Orbiting a Distant Star, With A.I.’s Help

NASA announced the discovery of a new exoplanet orbiting Kepler 90, a distant star some 2,500 light years away from here.

The new exoplanet was detected with the help of an artificial intelligence researcher at Google using a machine learning technique called a neural network.

The technology, which is loosely inspired by the human brain, is designed to recognize patterns and classify images.

In many factories, workers look over parts coming off an assembly line for defects.

Andrew Ng, co-founder of Google Brain at Alphabet Inc, launches a new venture with iPhone assembler Foxconn to bring AI and so-called machine learning onto the factory floor.

He said he understands that his firm’s technology is likely to displace factory workers, but the firm is already working on how to train workers for higher-skilled, higher-paying factory work involving computers.

Bing is working on a system to help users get to the information they are looking for even if they aren’t exactly sure how to find it. For example, let’s say you are trying to turn on Bluetooth on a new device. The new system could prompt users to provide more information, such as the type of gadget or operating system they are using.

Another new, AI-driven advance in Bing is aimed at getting people multiple viewpoints on a search query that might be more subjective.

Microsoft also announced plans to release a tool that highlights action items in email and gives you options for responding quickly on the go.

Researchers at MIT want to get rid of subjective feelings in treatment by using a facial recognition algorithm that can detect your pain levels by studying your face.

Trained on thousands of videos of people wincing in pain, the algorithm creates a baseline for each patient based on common pain indicators – generally, movements around the nose and mouth are telltale signs.

So far, the algorithm is 85% successful at weeding out the fakers, meaning that people trying to fake pain to get prescription painkillers will soon be out of business.

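The article does not publish the model, but the per-patient baseline idea can be sketched. Everything below is invented for illustration: the feature names ("nose", "mouth"), the values, and the threshold.

```python
# Hypothetical sketch of a per-patient pain baseline, not the MIT system.

def make_baseline(calibration_frames):
    """Average each facial-movement feature over a patient's calibration frames."""
    n = len(calibration_frames)
    return {k: sum(f[k] for f in calibration_frames) / n
            for k in calibration_frames[0]}

def pain_score(frame, baseline):
    """Total deviation of this frame from the patient's own baseline."""
    return sum(abs(frame[k] - baseline[k]) for k in baseline)

calibration = [{"nose": 0.10, "mouth": 0.20},
               {"nose": 0.20, "mouth": 0.20}]
baseline = make_baseline(calibration)

wince = {"nose": 0.60, "mouth": 0.70}    # strong movement around nose and mouth
neutral = {"nose": 0.15, "mouth": 0.25}

THRESHOLD = 0.5                          # invented cut-off for illustration
print(pain_score(wince, baseline) > THRESHOLD)    # True: flags likely pain
print(pain_score(neutral, baseline) > THRESHOLD)  # False: within baseline noise
```

The design point is that scoring deviation from each patient's own baseline, rather than a single universal template, is what lets the system separate genuine expressions from exaggerated ones.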
In the city (London) that spawned David Bowie, Pink Floyd, and the Spice Girls, two college professors are working on an artificial intelligence capable of making its own music. And it’s already played its first show.

The race is on to see whether A.I. can add something meaningful to this cultural activity.

The pair invited a number of musicians to come together for a show called “Partnerships,” a reference to the relationship between human and machine. The show featured a mix of compositions, all performed by humans, with varying levels of input from the A.I. Some compositions took the computer’s work as a starting point, some used the project as inspiration, while others directly played the generated work as it stood.

Artificial intelligence could one day scan the music videos we watch to come up with predictive music discovery options based on the emotions of the performer.

Consumers of the future will rely on computer software to serve them music discovery options. YouTube Red and the YouTube Music app do a good job of serving up new and different options for music discovery, but they are dragged down by their inability to actually identify what’s playing on the screen. Sure, Google knows which videos you gave a thumbs up to, watched 50 times on repeat, shared on social media, and commented on, but it doesn’t have the visual cues to tell it why.

Macy's, CVS, Starbucks, and Sephora turn to AI

If you are scrambling to find last-minute gifts, AI/machine learning is here to help!  All the major retailers are now turning to AI to learn what you want.  Big data about retail purchases is being fed into machine learning algorithms to learn things about you.  Here are some examples.  By the way, have you ever wondered, "What exactly is machine learning?"  Then see the end of this post for an easily digestible video.

[Forbes] Macy's Teams With IBM Watson For AI-Powered Mobile Shopping Assistant

Macy’s is set to launch an in-store shopping assistant powered by artificial intelligence thanks to a new tie-up with IBM Watson via developer partner and intelligent engagement platform, Satisfi.

Macy’s On Call, as it’s called, is a cognitive mobile web tool that will help shoppers get information as they navigate 10 of the retail company’s stores around the US during this pilot stage.

Customers are able to input questions in natural language regarding everything from where specific products, departments, and brands are located to what services and facilities can be found in a particular store. In return, they receive customised, relevant responses. The initiative is based on the idea that consumers are increasingly more likely to turn to their smartphones than to a store associate for help when out at physical retail.

If you always have a caramel macchiato on Mondays, but Tuesdays call for the straight stuff, a double espresso, then Starbucks Corporation is ready to know every nuance of your coffee habit. If you’re a Rewards member, there will be no coffee secrets between you and Starbucks.

The chain’s regulars will find their every java wish ready to be fulfilled, and the food and drink items you haven’t yet thought about presented to you as what you’re most likely to want next.

So targeted is the technology behind this program that, if the weather is sunny, you’ll get a different suggestion than if the day is rainy.

Patients tend to be at their local CVS much more frequently than at the doctor. People are also increasingly using fitness trackers like FitBits, smartwatches, and even Bluetooth-enabled scales that are all collecting data patients can choose to share with a provider. All that data isn’t worth much though unless it is carefully interpreted — something Watson can do much more efficiently than a team of people.

A drop in activity levels, a sudden change in weight, or prescriptions that aren’t being filled are the kinds of things that might be flagged by the system. Certain changes could even indicate a developing sickness before someone feels ill — and certainly before someone decides to visit the doctor.

[AdWeek] Sephora Mastered In-Store Sales By Investing in Data and Cutting-Edge Technology

I love Sephora.  As the article aptly states, "Sephora isn’t your mother’s makeup company; it’s your modern tech company".  I have personally tried Color IQ, their in-store program that scans faces to find the right shade of foundation and other products for different skin tones.  Sephora also has an amazing Beauty Insider program that provides it with a lot of rich data about its consumers, and now the company is leveraging AI to allow customers to virtually try on make-up and to spice up its online presence.

Sephora’s innovation lab in San Francisco is toying with an artificial intelligence feature dubbed Virtual Artist within its mobile app that uses facial recognition to virtually try on makeup products.

[CGP Grey] How do machines learn?

The science behind machine/deep learning neural networks is quite interesting.  For example, the discussion in the video about us not knowing what exactly is being learned (the hidden layer) is interesting to me.  But you don't have time for that!  Here is an easily understood video:
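As a companion to the video, here is a minimal sketch of what a "hidden layer" is. The weights are invented for illustration: the hidden activations are perfectly easy to compute, yet they carry no human-readable meaning, which is exactly the interpretability puzzle.

```python
import math

def sigmoid(v):
    """Squash any value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

# Invented weights: 2 inputs -> 2 hidden units -> 1 output.
W_HIDDEN = [[0.5, -0.4], [0.3, 0.8]]
W_OUT = [1.2, -0.7]

def forward(x):
    """One forward pass through the tiny network."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_HIDDEN]
    output = sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))
    return hidden, output

hidden, output = forward([1.0, 0.0])
print(hidden)  # hidden-layer activations: meaningful to the network, opaque to us
print(output)
```

Training adjusts W_HIDDEN and W_OUT until the outputs are useful, but nothing forces the intermediate numbers in `hidden` to correspond to any concept a person would recognize.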

What's coming up in 2018, and happy holidays!

Just a short note to let our dear readers know that posting volume will be a bit lighter as we travel for the holidays.  But here is what's coming up!

  • More interviews of notable experts (including an expert in self-driving vehicles, and an expert in human-autonomy teaming)
  • More Throwback Thursdays covering classic automation and autonomy literature
  • NEW: Movie Club, where Arathi and I "review" a particular movie's treatment of automation/autonomy/AI

Thanks for reading!  Tell your friends!!

Robot potpourri: Concierge, security guard, and VIP greeter

Connie will work side-by-side with Hilton’s Team Members to assist with visitor requests, personalize the guest experience and empower travelers with more information to help them plan their trips.

The more guests interact with Connie, the more it learns, adapts and improves its recommendations. The hotel will also have access to a log of the questions asked and Connie’s answers, which can enable improvements to guests’ experiences before, during and after their stays.

Connie is powered by Watson, a cognitive computing technology platform that represents a new era in computing where systems understand the world in the way that humans do - through senses, learning and experience.

After backlash, animal shelter fires security robot, “effective immediately”

The San Francisco-based Society for the Prevention of Cruelty to Animals (SPCA) has been asked to halt the use of its security robot, which it had started using after experiencing a lot of car break-ins, theft, and vandalism. The SPCA also reported a decline in these crimes after adopting the robot. However, some tagged it the "anti-homeless" robot, whose aim was to dislodge homeless campers and whose appearance was considered creepy.

Mitra: The ‘Made in India’ robot that stole the show at GES Hyderabad

The Global Entrepreneurship Summit last year was inaugurated by Prime Minister Narendra Modi and Ivanka Trump pressing a button on a robot developed by a startup based in Bangalore, India.

Variations of the robot are envisioned for customer assistance, and are therefore projected to increase sales via smart conversations, as well as for use as a party photographer, DJ, and live tweeter.

Mitra features facial recognition technology, allowing the robot to quickly identify a person and deliver customised services.

The humanoid also understands multiple languages. At the moment, Mitra supports Kannada and English but is soon going to add support for Hindi as well.

Can Robots Address Unethical Issues in Fashion?

The fashion industry is rife with ethical issues, from the high end (haute couture, impossible body standards for models) to the low end (fast fashion, manufacturing).  Can robots solve these issues?

[NY Times] Fashion Finds a More Perfect Model: The Robot

This article mainly discusses how fashion is embracing the look of robots.  But could robots soon replace fashion models?

Fashion has been especially quick to seize on the notion that robots are slicker, more perfect versions of ourselves. In the last few months alone, androids have filtered into the glossies and stalked the runways of designers as audacious as Thom Browne and Rick Owens, and of inventive newcomers like David Koma, who riffed on fembot imagery in his fall 2015 collection for Mugler, sending out models in frocks that were patterned with soldering dots and faux computer circuitry.

In a Steven Klein photo shoot in the current Vogue, drones hover overhead, seeming to spy on a party of human models cavorting in a field. For the March issue of W magazine, he portrayed the designer Jason Wu wrapped in the arms of a tin man.

[Reuters] Meet Lulu Hashimoto, the 'living doll' fashion model

Not far behind is Japan, where a doll that moves like a human is co-existing with humans, is active in the fashion scene, and is being idolized.

Meet Lulu Hashimoto, a “living doll” and the latest trend in Tokyo’s fashion modeling scene.

Lulu’s ability to blur the line between reality and fiction has mesmerized fans on social media, where the Lulu Twitter and Instagram accounts have drawn tens of thousands of followers.

While popular among fans of Japanese subculture, Lulu is now turning heads at the annual Miss iD beauty pageant, where she is among the 134 semi-finalists chosen from around 4,000 entrants.

While automation does take away human jobs, the current frenzy over cheap clothing has created a whole host of unethical labor issues—like the ones that recently caused a factory fire in India killing 13 people—and robots could potentially avert that.

Robots in apparel manufacturing may be good, or they may be bad. They may give us cheap clothes and U.S. jobs (at the managerial and administrative level), or they may detrimentally impact the economies of developing nations.

Siri and Alexa Say #MeToo to Sexual Harassment

The number of prominent celebrities and politicians being taken down for sexual harassment really seems to represent a major change in how society views it.  No longer whispered about or swept under the rug, harassment is being called out, and harassers are being held accountable for their words and actions.

So, if AI will soon be our collaborators, partners, and teammates, shouldn't they also be given the same treatment?  This story in VentureBeat talks about a campaign by Randy Painter to consider how voice assistants behave when harassed:

We have a unique opportunity to develop AI in a way that creates a kinder world. If we as a society want to move past a place where sexual harassment is permitted, it’s time for Apple and Amazon to reprogram their bots to push back against sexual harassment

I've never harassed Siri so I wasn't aware of the responses she gives when one attempts to harass her:

Siri responds to her harassers with coy remarks that sometimes even express gratitude. When they called Siri a “slut,” she responded with a simple “Now, now.” And when the same person told Siri, “You’re hot,” Siri responded with “I’m just well put together. Um… thanks. Is there something I can help you with?”

In our interview last week with Dr. Julie Carpenter, she addressed this somewhat:

Another ethical question rising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer their behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) that our social behaviors with robots will be something we apply as a model to human-human interactions.

This is fascinating because there is existing and ongoing research examining how humans respond and behave with AI/autonomy that exhibits different levels of politeness.  For example, autonomy that was rude, impatient, and intrusive was considered less trustworthy by human operators. If humans expect autonomy to have a certain etiquette, isn't it fair to expect at least basic decency from humans toward autonomy?

Citation: Parasuraman R., & Miller C. (2004). Trust and etiquette in high-criticality automated systems. Communications of the Association for Computing Machinery, 47(4), 51–55. 


Dr. Mica Endsley: Current Challenges and Future Opportunities In Human-Autonomy Research

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

About Dr. Mica Endsley


Situation Awareness Analysis and Measurement provides a comprehensive overview of different approaches to the measurement of situation awareness in experimental and applied settings. This book directly tackles the problem of ensuring that system designs and training programs are effective at promoting situation awareness.


Dr. Mica Endsley is President of SA Technologies, a cognitive engineering firm specializing in the analysis, design, measurement and training of situation awareness in advanced systems, including the next generation of systems for aviation, air traffic control, health care, power grid operations, transportation, military operations, homeland security, and cyber. 

From 2013 to 2015, she served as Chief Scientist of the U.S. Air Force, reporting to the Chief of Staff and Secretary of the Air Force, providing guidance and direction on research and development to support Air Force future operations and providing assessments on a wide range of scientific and technical issues affecting the Air Force mission.

She has also held the position of Visiting Associate Professor at MIT in the Department of Aeronautics and Astronautics and Associate Professor of Industrial Engineering at Texas Tech University. Dr. Endsley received a Ph.D. in Industrial and Systems Engineering from the University of Southern California.

Dr. Endsley is a recognized world leader in the design, development and evaluation of systems to support human situation awareness (SA) and decision-making. She is the author of over 200 scientific articles and reports on situation awareness and decision-making, automation, cognitive engineering, and human system integration. She is co-author of Analysis and Measurement of Situation Awareness and Designing for Situation Awareness. Dr. Endsley received the Human Factors and Ergonomics Society Jack Kraft Innovator Award for her work in situation awareness.

She is a fellow in the Human Factors and Ergonomics Society, its Past-President, was co-founder of the Cognitive Engineering and Decision Making Technical Group of HFES, and served on its Executive Council.  Dr. Endsley has received numerous awards for teaching and research, is a Certified Professional Ergonomist and a Registered Professional Engineer. She is the founder and former Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making and serves on the editorial board for three major journals. 

What were the human-automation challenges you encountered in your role as the Chief Scientist for the Air Force?

An RQ-4 Global Hawk soars through the sky to record intelligence, surveillance and reconnaissance data.  Image Source.

Autonomous systems are being developed or are under consideration for a wide range of operational missions. This includes:

  1. Manned aircraft, as more automation is added to both on-board and supporting functions such as mission planning, information/network management, vehicle health management and failure detection
  2. Unmanned aircraft are currently being used for surveillance missions and are being considered for a much wider range of activities where:
    1. people would be at high levels of risk (e.g., near to hostilities),
    2. communications links for direct control are unreliable due to jamming or other interference effects,
    3. where speed of operations is useful (e.g., re-tasking sensors based on observed target features), or
    4. to undertake new forms of warfare that may be enabled by intelligent, but expendable, systems, or closely coordinated flights of RPAs [remotely piloted aircraft] (e.g., swarms)
  3. Space operations can also benefit from autonomous systems that provide a means to build resilient space networks that can reconfigure themselves in the face of attacks, preserving essential functions under duress. It also provides a mechanism for significantly reducing the extensive manpower requirements for manual control of satellites and generation of space situation awareness through real-time surveillance and analysis of the enormous number of objects in orbit around the Earth.
  4. Cyber operations can benefit from autonomy due to the rapidity of cyber-attacks, and the sheer volume of attacks that could potentially occur. Autonomous software can react in milliseconds to protect critical systems and mission components. In addition, the ever-increasing volume of novel cyber threats creates a need for autonomous defensive cyber solutions, including cyber vulnerability detection and mitigation; compromise detection and repair (self-healing); real-time response to threats; network and mission mapping; and anomaly resolution.
  5. ISR [intelligence, surveillance, and reconnaissance] and Command and Control operations will also see increased use of autonomous systems to assist with integrating information across multiple sensors, platforms and sources, and to provide assistance in mission planning, re-planning, monitoring, and coordination activities.

Many common challenges exist for people to work in collaboration with these autonomous systems across all of these future applications. These include:

the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed...I have labeled this the Automation Conundrum
  1. Difficulties in creating autonomy software that is robust enough to function without human intervention and oversight are significant. Creating systems that can accurately not only sense but also understand (recognize and categorize) objects detected, and their relationship to each other and broader system goals, has proven to be significantly challenging for automation, especially when unexpected (i.e., not designed for) objects, events, or situations are encountered. This capability is required for intelligent decision-making, particularly in adversarial situations where uncertainty is high, and many novel situations may be encountered.
  2. A lowering of human situation awareness when using automation often leads to out-of-the-loop performance decrements. People are both slow to detect that a problem has occurred with the automation, or with the system being controlled by the automation, and then slow to come up to speed in diagnosing the problem and intervening appropriately, leading to accidents. Substantial research on this problem shows that as more automation is added to a system, and the more reliable and robust that automation is, the less likely human operators overseeing it are to be aware of critical information and able to take over manual control when needed. I have labeled this the Automation Conundrum.
  3. Increases in cognitive workload are often required in order to interact with the greater complexity associated with automation. Workload can often increase because understanding and interacting with the automation imposes demands of its own.
  4. Increased time to make decisions can be found when decision aids are provided, often without the desired increase in decision accuracy. Evidence shows that people actually take in system assessments and recommendations that they then combine with their own knowledge and understanding of the situation. A faulty decision aid can lead to people being more likely to make a mistake due to decision biasing by the aid. And the time required to make a decision can actually increase, as the aid is an additional source of information to take into account.

Challenges occur when people working with automation develop a level of trust that is inappropriately calibrated to the reliability and functionality of the system in various circumstances. In order for people to operate effectively with autonomous systems, they will need to be able to determine how much to trust the autonomy to perform its tasks.

This trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. For this, people need to develop informed trust – an accurate assessment of when and how much autonomy should be employed, and when to intervene.

Given that autonomy in the foreseeable future is unlikely to work perfectly for all functions and operations, and that human interaction with autonomy will continue to be needed at some level, a new approach is needed to the design of autonomous systems: one that allows them to serve as effective teammates for the people who will depend on them to do their jobs.

What does the autonomous future look like for you? Is it good, bad or ugly?

The future with autonomous systems may be good, bad, or very ugly, depending on how successful we are in designing and implementing effective human-autonomy collaboration and coordination.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a failed belief in its superiority

In the bad scenario, if we continue to develop autonomous systems that are brittle, and that fail to provide the people who must work with automation with the situation awareness they need to be effective in their roles, then the true advantages of both people and autonomy will be compromised.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a failed belief in its superiority. Nothing in the past 40 years of automation research has justified such an action, and such a move would be truly disastrous in the long run.

In a successful vision of the future, autonomous systems will be designed to serve as part of a collaborative team with people. Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between people and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.

People will be able to make informed choices about where and when to invoke autonomy based on considerations of trust, the ability to verify its operations, the level of risk and risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the human.

In certain limited cases, the system may allow the autonomy to take over automatically from the human, when timelines are very short, for example, or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems is a fundamental requirement, in keeping with Department of Defense directives.

The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the human and the autonomy will be critical. Shared situation awareness is needed to ensure that the autonomy and the human operator are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.

Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the human and the autonomy.

This new paradigm is a significant departure from the past in that it will directly support high levels of shared situation awareness between human operators and autonomous systems, creating situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for mission success. By focusing on human-autonomy teaming, we can create successful systems that get the best benefits of autonomous software along with the innovation of empowered operators.

Throwback Thursday: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’

Today's Throwback article is from Donald Norman. If that name sounds familiar, it is the same Dr. Norman who authored the widely influential "The Design of Everyday Things."

In this 1990 paper published in the Philosophical Transactions of the Royal Society, Dr. Norman argued that much of the criticism of automation at the time (and today) is due not to the automation itself (or even over-automation) but to its poor design; namely, the lack of adequate feedback to the user.

This is a bit different from the out-of-the-loop (OOTL) scenario that we've talked about before; there is a subtle difference in emphasis. Yes, lack of feedback contributes to OOTL, but here feedback is discussed more in terms of the opaqueness of the automation's status and operations, not the fact that it is carrying out a task you previously performed.

He first starts off with a statement that should sound familiar if you've read our past Throwback posts:

The problem, I suggest, is that the automation is at an intermediate level of intelligence, powerful enough to take over control that used to be done by people, but not powerful enough to handle all abnormalities.
— p. 137

The obvious solution, then, is to make the automation even more intelligent (i.e., a higher level of automation):

To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate.
— p. 137

If a higher level of automation is what is meant by "more intelligent," then we now know that this is also not a viable solution (the research showing so was done after the publication of this paper). However, this point is merely a setup to further the idea that problems with automation are caused not by the mere presence of automation, but by its lack of feedback. Intelligence means giving just the right feedback at the right time for the task.

He provides aviation case studies that imply that the use of automation led to out-of-the-loop performance issues (see previous post). He next directs us through a pair of thought experiments to help drive home his point:

Consider two thought experiments. In the first, imagine a captain of a plane who turns control over to the autopilot, as in the case studies of the loss of engine power and the fuel leak. In the second thought experiment, imagine that the captain turns control over to the first officer, who flies the plane ‘by hand’. In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other.
— p. 141

The implication is that when control is handed over to any entity (automation or a co-pilot), feedback is critical. Norman cites the widely influential work of Hutchins, who found that informal chatter, along with lots of other incidental verbal interaction, is crucial to what is essentially situation awareness in human-human teams (although Norman invokes the concept of mental models). Humans do this; automation does not. Back then, we did not know how to build automation that does, and we probably still do not. The temptation is to provide as much feedback as possible:

We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems.
— p. 143

This is the current state of automation feedback. If you have spent any time in a hospital, you know that alerts are omnipresent and overlapping (cf. Seagull & Sanderson, 2001). Norman ends with some advice about the design of future automation:

What is needed is continual feedback about the state of the system...This means designing systems that are informative, yet non-intrusive, so the interactions are done normally and continually, where the amount and form of feedback adapts to the interactive style of the participants and the nature of the problem.
— p. 143

Information visualization and presentation research (e.g., sonification, or the work of Edward Tufte) tackles part of the problem; these techniques attempt to provide constant, non-intrusive information. Adaptive automation, which scales its level of automation based on physiological indicators, is another attempt to get closer to Norman's vision but, in my opinion, may be insufficient, and even more disruptive, as it does not directly address feedback.
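Norman's complaint about single-threshold alarms can be made concrete with a small sketch. Everything here is illustrative (the readings, limits, and status labels are hypothetical, not from any real instrument): a one-threshold alarm is silent right up until the limit is crossed, while a graded display conveys the developing trend continuously.

```python
def threshold_alarm(reading, limit=100.0):
    """The 'single threshold condition' style Norman criticizes:
    silent, then suddenly alarming."""
    return "ALARM" if reading > limit else ""

def graded_feedback(reading, limit=100.0):
    """Continuous feedback: report how close the reading is to the limit."""
    fraction = reading / limit
    if fraction < 0.5:
        return "nominal"
    if fraction < 0.9:
        return "elevated"
    if fraction <= 1.0:
        return "approaching limit"
    return "exceeded"

# The single-threshold alarm gives no warning of the deteriorating trend...
print([threshold_alarm(r) for r in (60, 95, 105)])
# ...while graded feedback shows the problem developing before the limit.
print([graded_feedback(r) for r in (60, 95, 105)])
```

Norman's vision goes further still: the amount and form of feedback should adapt to the task and the participants, which no fixed set of bands captures. But even this simple contrast shows why a buzzer with one cutoff leaves the operator out of the loop.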

To conclude, Norman's human action cycle is completely consistent with, and probably heavily informs, his thoughts on automation.

Further reading:  

Reference: Norman, D. A. (1990). The 'problem' with automation: Inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 327(1241), 585-593.

AI potpourri: AI gets a job at NASA, finds serial killers, stops suicide, selects embryos, and interviews you!

[The New Yorker] The Serial-Killer Detector

This article discusses how Thomas Hargrove, a retired journalist who had access to a large collection of murder records, created an algorithm that was able to find crime patterns.

He began trying to write an algorithm that could return the victims of a convicted killer. As a test case, he chose Gary Ridgway, the Green River Killer, who, starting in the early eighties, murdered at least forty-eight women in Seattle, and left them beside the Green River.
Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

It’s also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners, like the National Suicide Prevention Lifeline and Forefront, from which to provide resources to at-risk users and their networks.

Misses and false alarms should be factored in when designing an automation algorithm. Too many misses have catastrophic consequences in a high-risk situation. Facebook's AI is an example of an automated system where the cost of a miss far outweighs the nuisance of a false alarm.
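This tradeoff comes down to where the detector's decision threshold sits on the classifier's risk score. A minimal sketch, using hypothetical scores and thresholds (not Facebook's actual system), shows how lowering the threshold trades a false alarm for fewer misses:

```python
# Hypothetical classifier scores (0 = no risk, 1 = high risk), paired with
# ground-truth labels (True = post genuinely signals risk). Invented data.
scored_posts = [
    (0.95, True), (0.70, True), (0.40, True),    # genuinely at-risk posts
    (0.60, False), (0.30, False), (0.10, False), # benign posts
]

def detector_outcomes(posts, threshold):
    """Count hits, misses, and false alarms at a given decision threshold."""
    hits = sum(1 for score, at_risk in posts if at_risk and score >= threshold)
    misses = sum(1 for score, at_risk in posts if at_risk and score < threshold)
    false_alarms = sum(1 for score, at_risk in posts
                       if not at_risk and score >= threshold)
    return hits, misses, false_alarms

# A conservative threshold avoids false alarms but misses an at-risk post...
print(detector_outcomes(scored_posts, 0.65))  # (2, 1, 0)
# ...while a lower threshold eliminates the miss at the cost of one false alarm.
print(detector_outcomes(scored_posts, 0.35))  # (3, 0, 1)
```

For a high-stakes detector like this one, the designer deliberately accepts more false alarms (flagged posts reviewed by human moderators) to drive misses toward zero.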

[GCN] NASA’s newest employee isn’t human

This article talks about the newest employee at the NASA Shared Services Center: a bot named Washington. Washington is a rules-based bot that follows a fixed set of rules; NASA expects that future bots will have higher-order cognitive processing abilities.

One of the newest employees at the NASA Shared Services Center can copy and paste text, open emails, move folders and many other tasks. That might sound routine, but the new hire, Washington, isn’t a person — it’s a bot.

Much like a human employee, however, Washington has its own computer, its own email account, its own permissions within applications and its own role within the organization.

The bots, which can run 24/7, can help NASA by taking on time-consuming, manual tasks and allowing its humans to engage in higher level work.
Scientists are using artificial intelligence (AI) to help predict which embryos will result in IVF success.

AI is able to recognise and quantify 24 image characteristics of embryos that are invisible to the human eye. These include the size of the embryo, texture of the image and biological characteristics such as the number and homogeneity of cells.

[New York Post] AI already reads your resume – now it’s going to interview you, too

This article discusses how AI is being used by companies to improve their recruiting process. 

Marriott International Inc. announced the launch of Marriott Careers chatbot for Facebook Messenger, a computer program designed to simulate conversation with job seekers. The virtual assistant aims to create a more personalized, efficient experience for applicants.

“Once you apply for a job, the system sends you updates. If it isn’t available, when another job meets your specific qualifications, you’ll receive a direct message on your digital device,” says Rodriguez, executive vice president and global chief human resources officer for Marriott. “Generation Z, which is starting to graduate from college, has a strong preference to communicate with companies this way. It’s the wave of the future.”

Unilever is also using AI to narrow down candidates based on their speech, facial expressions and body language.

“Hey Siri, how are my crops doing?” Autonomy in Agriculture Potpourri

Modern agriculture is only possible with the use of advanced technology.  In an upcoming interview, we will learn about what the future of agriculture looks like with highly advanced autonomous systems and how farmers are reacting and coping.

Until then, here are some interesting stories about autonomous systems and agriculture.

[U.S. Department of Agriculture] Smart Phones: The Latest Tool for Sustainable Farming

It is nice to see AI being used to help meet the food demands of a growing world population. For example, the U.S. Department of Agriculture has developed two apps, “LandInfo” and “LandCover,” available on the Google Play Store.

With LandInfo, users can collect and share soil and land-cover information as well as gain access to global climate data. The app also provides some useful feedback, including how much water the soil can store for plants to use, average monthly temperature and precipitation, and growing season length.

LandCover simplifies data collecting for use in land-cover inventories and monitoring. The app automatically generates basic indicators of these cover types on the phone and stores the data on servers that are accessible to users worldwide.

[BBC News] Tell me phone, what's destroying my crops?

AI is also being used in India to help farmers. Drought, crop failure, and lack of access to modern technology make life hard for Indian farmers. In fact, an estimated 200,000 farmers have ended their lives in the last two decades due to debt. A group of researchers from Berlin has developed an app called Plantix to help farmers detect crop diseases and nutrient deficiencies in their crops.

The farmer photographs the damaged crop and the app identifies the likely pest or disease by applying machine learning to its growing database of images.

Not only can Plantix recognise a range of crop diseases, such as potassium deficiency in a tomato plant, rust on wheat, or nutrient deficiency in a banana plant, but it is also able to analyse the results, draw conclusions, and offer advice.

[Western Farm Press] Smartphones and apps taking agriculture by storm

AI has also made farming more convenient. Farmers can now perform tasks such as starting or stopping center pivot irrigation systems from the comfort of their own homes.

“Before I might have to go out in the rain at 2 a.m. to turn off a center pivot or check to make sure it was operating,” says Schmeeckle. “Now I can turn a pivot on or off with my smartphone. I even started one while we were 300 miles away on vacation this summer, and it was still running when I got home.”
Through the IoT, sensors can be deployed wherever you want–on the ground, in water, or in vehicles–to collect data on target inputs such as soil moisture and crop health. The collected data are stored on a server or cloud system wirelessly, and can be easily accessed by farmers via the Internet with tablets and mobile phones. Depending on the context, farmers can choose to manually control connected devices or fully automate processes for any required actions. For example, to water crops, a farmer can deploy soil moisture sensors to automatically kickstart irrigation when the water-stress level reaches a given threshold.
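The threshold-triggered irrigation described above reduces to a simple decision rule. This is an illustrative sketch only; the moisture units, function names, and threshold values are hypothetical, not from any particular vendor's system. A second "recovery" threshold (hysteresis) is a common control-design touch that keeps the valve from chattering on and off when readings hover near a single cutoff.

```python
# Hypothetical moisture readings in percent volumetric water content.
STRESS_THRESHOLD = 20.0   # start irrigating below this moisture level
RECOVERY_LEVEL = 35.0     # stop irrigating once moisture recovers to here

def irrigation_command(moisture_pct, currently_irrigating):
    """Decide whether the irrigation valve should be open next interval."""
    if moisture_pct < STRESS_THRESHOLD:
        return True                      # crop is water-stressed: irrigate
    if moisture_pct >= RECOVERY_LEVEL:
        return False                     # soil has recovered: stop
    return currently_irrigating          # in between: hold current state

print(irrigation_command(15.0, False))   # True  - dry soil starts irrigation
print(irrigation_command(28.0, True))    # True  - keep watering until recovery
print(irrigation_command(40.0, True))    # False - recovered, valve closes
```

In a deployed system this rule would run on a cloud server or edge gateway against sensor readings pushed over the network, with the farmer able to override it manually from a phone or tablet, as the excerpt describes.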

[MIT Technology Review] Six ways drones are revolutionizing agriculture

The market for drone-powered solutions in agriculture is estimated at $32.4 billion. Applications include soil and field analysis, planting, crop spraying, crop monitoring, irrigation, and health assessment.

Agricultural producers must embrace revolutionary strategies for producing food, increasing productivity, and making sustainability a priority. Drones are part of the solution, along with closer collaboration between governments, technology leaders, and industry.
Lettuce Bot is a machine that can “thin” a field of lettuce in the time it takes about 20 workers to do the job by hand.

After a lettuce field is planted, growers typically hire a crew of farmworkers who use hoes to remove excess plants to give space for others to grow into full lettuce heads. The Lettuce Bot uses video cameras and visual-recognition software to identify which lettuce plants to eliminate with a squirt of concentrated fertilizer that kills the unwanted buds while enriching the soil.
Dr. Julie Carpenter: Human-Robot/AI Relationships

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. As the first post in a new series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter.



About Dr. Julie Carpenter

Julie Carpenter has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

Dr. Carpenter earned her PhD and an MS from the University of Washington, an MS from Rensselaer Polytechnic Institute, and a BA from the University of Wisconsin-Madison. She is also currently a Research Fellow in the Ethics + Emerging Sciences group at California Polytechnic State University. 

Dr. Carpenter’s first book, Culture and human-robot interaction in militarized spaces: A war story (Routledge, Amazon), expands on her research with U.S. military Explosive Ordnance Disposal personnel and their everyday interactions with field robots. The findings from this research have applicability across a range of human-robot and human-AI cooperative scenarios, products, and situations. She regularly updates her website with information about her current work.

You have done a lot of work on the emotional attachment that humans have towards robots. Can you tell us more about your work?

At its heart, my work is human-centered and culture-centered. I tend to approach things in a very interdisciplinary way, and my body of published work reflects my long-term interest in how people use technology to communicate, from film to AI.

...there were relatively few people looking at AI as the vector for human emotion when I began in this vein

The medium or technologies I focus on change and evolve. I began in film theory, then a lot of my work was about Web-based human interactions, and more recently it has been how people interact with robots and other forms of non-Web AI, like autonomous cars, textbots, or IoT agents such as Alexa.

But my lens for looking at things has always been rooted in a sort of anthropological interest in people and technology. Specifically, human emotional attachment to and through the technological medium interests me because there are so many nuanced possible pitfalls for the human, psychologically, ethically, emotionally, even physically.

Yet when it comes to scholarly study about topics like affection, friendship, love and their influence and connectedness with other complicated topics like trust, cooperative teamwork, and decision-making, there were relatively few people looking at AI as the vector for human emotion when I began in this vein. David Levy is one person who pioneered this discussion, of course, as did Clifford Nass and Byron Reeves.

As a film theory undergraduate student, I was drawn to how people use stories to explore technology, as we do in science fiction. Looking back, I can see that even then I was influenced not only by the idea of science fiction and science fiction films generally, but particularly by the ones of my own era, which were cultural touchstones and became the basis for a great deal of my early scholarly work.

So, movies like Blade Runner were something I wrote whole papers about years before there was even a hint that we would enter an era when robots would become a reality in a very specific and rapid time for development in the 2000s. But back then I was looking at things as ideas connected specifically to that movie director’s body of work, or the audience for the movie, or culture at that time.

Blade Runner (1982). Image source

Now I look at a movie like that as an exploration of human-robot possibilities, a reflection and influencer of popular cultural ideas, and also an inspiration to people like me, makers and researchers who have a say in developing real-world AI. I find that sort of storytelling influence fascinating because it often sets up people’s real-world expectations of their interactions with technology, and even helps form the communication model.

Storytelling’s influence on culture is a very rich set of artifacts for exploration, and I manage to reference that idea a great deal in the way I situate research in the larger culture it is part of, however that may be defined for the scope of that work.

What do you think about media portrayals of human-robot/autonomy relationships in movies (e.g., the new Blade Runner; the movie Her)?

Cultures around the world use science fiction to explore what it means to be human...

I love science fiction stories and, as I mentioned, I frequently use science fiction as a framework for discussing our expectations of interactions with AI and robots, because research shows it definitely can influence people’s expectations about how to interact with AI, at least initially.

Personally, Blade Runner definitely inspired me in many ways, going back to when I was studying film theory as an undergrad and never predicted I’d be working in a field called human-robot interaction someday. I know a lot of roboticists who cite other scifi as personal inspiration, too, such as Astro Boy. Storytelling captures our imagination and prompts questions, and it is a wonderful creative springboard for discussion, as well as entertainment.

A pitfall I am less a fan of is using Isaac Asimov’s Three Laws to discuss ethics and AI. Asimov wrote the Laws purposefully allowing for ethical pitfalls so he could keep writing stories; the Laws create plot points in their fallibility. If you want to use Three Laws (or four, if you count the Zeroth Law) to frame a discussion of ethical AI, then you have to acknowledge it is fictional, fallible, and very purposefully incomplete in conception—it isn’t a real world solution for development or policy-making, except perhaps as an example of what loopholes might be in a framework like the Three Laws if they were used in the real world.

Science fiction can be a cultural touchstone and a thought exercise for framing complicated human-AI interactions, but sometimes it is used for shorthand to communicate complicated issues in a way that disregards too much nuance of the issues being discussed. I’m an Asimov fan, but I think the Laws are sometimes relied upon too much in a scientific discussion or popular news framing of ethical problems for AI.

Having said that, I personally enjoy a wide range of AI representations in fiction, from the dystopic to the sympathetic predictions. The ethical dilemmas of the Terminator or Her are both entertaining for me to contemplate in the safety of my everyday life. Considering the more far-reaching implications of the ideas they are conveying is a more serious endeavor for me, of course. How we tell stories reflects our beliefs, and also pushes those beliefs and ideas further, questioning our suppositions, and in that way also has the potential to influence new ideas about how we interact with AI.

Her (2013). Image source

There is a rich history of stories we tell about AI that pre-dates the genre we call science fiction. Scifi is a relatively new genre label, in itself, but the idea of humans interacting with artificial life has been around forever, in various forms. All sorts of tales about humans interfering with the natural order of things to create a humanlike life outside the body--sometimes via magic spells or religious intervention--exist around the world. These AI characters take the form of golems, zombies, statues, puppets, dolls, and so on. Historically, this is a set of ideas that has universal fascination.

Cultures around the world use science fiction to explore what it means to be human, and what it means for our creation of and interactions with entities that are similar to us in some ways, often as if AI was a sociological Other.

I recently read the news of a man in China marrying the robot he created. SciFi movies are certainly becoming a reality. What are the ethical implications of human-automation romantic relationships?

We are currently in an era where we are really just beginning discussions of emerging ethics in this domain earnestly because of the enormous progress of AI and robotics over the last decade in particular.

Right now, a romantic feeling for AI is considered aberrant behavior, so it carries a very different significance than it will when AI and robots are accepted as objects that can carry a great deal of meaning for people in different situations, whether it’s as caregiver or mentor or helper or companion or romantic interest.

In other words, I don’t think we can make shorthand generalizations about a “type” of person that marries a robot or other AI very successfully as a static model, because the way we regard human-robot relationships will change as robots become part of our everyday realities and we learn to live with them and negotiate what different robots might mean to us in different ways.

I think that to an extent, eventually we will see society normalize human-robot romantic relationships as a culturally accepted option for some people. We are still going through a process of discovery about our interactions with robots now, but we do see patterns of human-robot interaction strikingly different from our interactions with other objects, and one emerging pattern is that in some conditions we treat AI and robots in socially meaningful ways that sometimes includes emotional attachment and/or affection from the person to the AI or robot.

The ethical pitfalls of a human-robot romantic relationship can come from the development end, the user end, and society’s perceptions of that relationship. From the development end, some ethical concerns are the development of the AI, and the human biases and influences we are teaching AI that learns from us, whether it is through direct programming or neural networks.  Robot hacking and privacy concerns are thorny nests of ethical issues, too.

Say someone has a romantic or other affection for AI used in their home, and interacts with it that way, accordingly. In that case, who has access to what the robot or AI hears you say, sees you do, and the information it gathers about your everyday life and your preferences for everything from dish detergent to sexual activities? What if that data was hacked, and someone tried to use the gathered information to manipulate you? These are major technical and ethical issues.

From the user end, one ethical concern is whether people who become emotionally attached to AI have a real self-awareness of the lack of truly humanlike reciprocity in a human-AI relationship with the current technology, and whether they lack a root understanding that the AI is not anywhere near humanlike intelligence, although sometimes those are the very traits of AI that can attract someone to it romantically.

Furthermore, society does not treat AI or robots like people when it comes to things like legal status, so similar ethical concerns arise in how the people around a user who reports being romantically interested in AI perceive that relationship: to declare oneself in a committed, persistent, affectionate relationship with an AI is also to acknowledge involvement in an imbalanced power dynamic.

Another ethical question rising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer their behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) that our social behaviors with robots will be something we apply as a model to human-human interactions.

Blade Runner 2049 (2017). Image source

We are currently in an era where we are really just beginning discussions of emerging ethics in this domain earnestly because of the enormous progress of AI and robotics over the last decade in particular. It is only the beginning of a time when we formalize some of our decisions about these ethical concerns as law and policies, and how we establish less formal ways of negotiating our interactions with AI via societal norms.

I’m looking forward to watching how we integrate AI technologies like robots and autonomous cars into our everyday lives, because I think there is a lot of potential good that will come from using them. Our path to integrating AI into our lives is already fascinating.