We are psychological scientists and practitioners who are excited about the future of autonomy. This blog will cover recent developments in the human-autonomy sciences with a focus on the social science angle.
You can subscribe to updates via this blog's RSS feed. The feed address is:
You can enter it into your favorite feed reader or other news aggregation program. This blog is also a channel in Apple News.
My friend, journalist Maggie Jackson, recently sent me an interesting article in the Times Magazine about one of the new complexities in the relationship between humans and AI:
The source of these issues is that AI decision making is hidden, but also in many ways non-deterministic: we don't know what it will come up with or how it got there! We discuss this a bit in our recently published paper.
Maggie Jackson will be leading a discussion at the Google I/O developer conference on building healthy technologies. That session will cover many existing and emerging issues in the human-AI/technology relationship.
My colleagues Ewart de Visser and Tyler Shaw recently published a theoretical paper discussing how the field of human factors might need to adapt to study human-autonomy issues:
Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock (Hancock, 2017) issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centered approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user’s needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems.
This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust-repair abilities will help ensure that future technology maintains and repairs relationships with its human partners.
You have no doubt heard about the unfortunate fatal accident involving a self-driving car killing a pedestrian (NYT).
This horrible event might be the "stock market correction" of the self-driving car world that was sorely needed to re-calibrate the public's unrealistic expectations about the capability of these systems.
In the latest news, the Tempe police have released video footage that shows the front and in-vehicle camera view just before impact.
My first impression of the video was that it seemed like something the car should have detected and avoided. In a visually challenging condition like the one illustrated in the video, a human driver would have great difficulty seeing the pedestrian in the shadowed area; but humans have inferior vision, reaction time, and speed compared to computers (cf. Fitts' list, 1951).
One interesting narrative thread that has come out of the coverage, and is evident in the Twitter comments on the video, is the idea that the "Fatal Uber crash [was] likely 'unavoidable' for any kind of driver." People seem understanding of the difficulty of the situation, and thus their trust in these autonomous systems is likely to be only somewhat negatively affected. But should it be more affected? Autonomous vehicles, with their megaflops of computing power and advanced sensors, were never expected to be "any kind of driver"; they were supposed to be much better.
But the car, outfitted with radar-based sensors, should have "seen" the pedestrian. I'm certainly not blaming the engineers: determining the threshold for signal (pedestrian) versus noise is probably an active area of development, and one that they were testing.
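To see why that threshold is hard to set, here is a minimal signal-detection sketch (the Gaussian score distributions and all numbers are assumptions for illustration; the actual perception pipeline is not public). Sliding the decision threshold trades phantom braking (false alarms on bags and shadows) against the far costlier misses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detector scores: higher = more "pedestrian-like".
clutter = rng.normal(0.0, 1.0, 100_000)      # noise: bags, shadows, signs
pedestrians = rng.normal(2.0, 1.0, 100_000)  # signal: actual pedestrians

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_alarms = np.mean(clutter > threshold)  # phantom braking events
    misses = np.mean(pedestrians <= threshold)   # undetected pedestrians
    print(f"threshold={threshold:.1f}  "
          f"false alarms={false_alarms:.3f}  misses={misses:.3f}")
```

Because the two distributions overlap, no threshold drives both error rates to zero; the engineering question is which error the system can afford.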
Continuing story and thoughts...
This year, Moley, the first robotic kitchen, will be launched by a London-based company; it promises unlimited access to chefs and their recipes worldwide. It is expected to cook and clean up after itself, but it appears it does not completely eliminate human supervision.
However, there are safety and quality concerns about having a robot chef. One concern raised in the article: what if the machine chops aimlessly and the owner is left without a meal? Further, cooking involves the chef's personal touch and an engagement of all five senses, which a robot cannot replicate.
To solve the problem of cold pizzas, Zume Pizza, where robots and AI run the show, was started in Mountain View, California.
There is only one human worker in the delivery truck, who drives, slices, and delivers to your doorstep. The human does not have to think about when to turn the ovens on and off or what route to take, because these are all decided by AI. A few minutes prior to arriving at the scheduled delivery destination, the AI starts the oven to finish cooking the order.
Japan is not far behind either with regards to the use of robots in cooking. Scientists at Kyoto Sangyo University have developed a kitchen with ceiling-mounted cameras and projectors that overlay cooking instructions on the ingredients. This lets cooks concentrate on their task (e.g., slicing) without having to look up at a recipe book or a screen.
Caliburger, a fast food chain based in California, is using Flippy to flip hamburgers. Flippy is an industrial robotic arm with a classic spinning spatula.
In the Chinese city of Kunshan, a small team of robot cooks and waiters serve dumplings and fried rice at Tian Waike Restaurant.
GM has announced its fourth generation of self-driving vehicles. Note that there is not a single mention of what the passenger is supposed to do in the event that the self-driving algorithm fails!
A prominent social scientist, Dr. Peter Hancock, aptly stated the following.
A Seattle-based design firm is working on a six-passenger vehicle that picks up and drops off every child at their front door, verifying their identity with facial recognition.
The researchers at the design firm are also investigating other issues, such as how AI will address bullying on buses, as well as how the bus could bring in extra money for the school by delivering food for a service like Uber Eats.
The Canadian government is partnering with an AI firm to predict rises in regional suicide risk. Facebook has also recently launched initiatives to prevent suicides by analyzing posts that suggest suicidal thoughts.
The Gallup organization has just released a survey of 3,298 American adults about their thoughts on AI and the future. The interactive website is filled with many great visualizations.
The key point seems to be that, contrary to popular notions of the fear of AI, most Americans (77%) have a positive view of AI in the next decade. Interestingly, this is despite most Americans' view that AI will have a negative impact on their own employment and the economy (73% believe AI will eliminate jobs).
The other noteworthy point is that optimism about AI, while high, is expected to decrease (the difference between current and future optimism). But this varies by sub-group: the largest difference between future and current optimism is among middle-aged folks whose livelihoods may be affected (green), while older folks seem unchanged (blue, orange):
I just saw a funny juxtaposition of headlines regarding self-driving cars. Of all autonomous systems, self-driving cars are probably the easiest for the lay public to understand.
The first headline, from a Reuters/Ipsos opinion poll: Most Americans wary of self-driving cars.
The results are more interesting when viewed by age group. It makes intuitive sense that millennials are the most comfortable and baby boomers the least. Millennials are less interested in driving and, because of greater exposure to autonomous technology, may be more comfortable and trusting than other age groups. It should be noted, however, that this comfort is not necessarily well founded; their view of the technology could be distorted or unrealistic.
The next headline: More Americans Willing To Ride In Self-Driving Cars. The results of a survey from the American Automobile Association (AAA) confirm the Reuters survey: millennials and males are more willing to buy a self-driving car. The headline refers to a year-over-year decrease (78% to 63%) in the number of people who said they were afraid to ride in a self-driving car.
The crux of these observations seem to be trust:
AI is taking a bigger role in investing. Large fund management companies like Fidelity and Vanguard say they use AI for a range of purposes.
While some people see huge potential in AI as an investment advisor, others think that it cannot be relied on for heavy cognitive decision-making. The following is a quote from a portfolio manager.
AI models designed by Alibaba and Microsoft have surpassed humans in reading comprehension, which demonstrates that AI has the potential to understand and process the meaning of words with the same fluidity as humans. But there is still a long way to go. Specifically, adding meaningless text into the passages, which a human would easily ignore, tended to confuse the AI.
The retail industry is also starting to rely on AI to shape the way people shop.
Nancy J. Cooke is a professor of Human Systems Engineering at Arizona State University and is Science Director of the Cognitive Engineering Research Institute in Mesa, AZ. She also directs ASU’s Center for Human, Artificial Intelligence, and Robot Teaming and the Advanced Distributed Learning Partnership Lab.
She received her PhD in Cognitive Psychology from New Mexico State University in 1987. Dr. Cooke is currently Past President of the Human Factors and Ergonomics Society, chaired the National Academies Board on Human Systems Integration from 2012-2016, and served on the US Air Force Scientific Advisory Board from 2008-2012. She is a member of the National Academies of Sciences, Engineering, and Medicine Committees on High-Performance Bolting Technology for Offshore Oil and Natural Gas Operations and the Decadal Survey of Social and Behavioral Sciences and Applications to National Security.
In 2014 Dr. Cooke received the Human Factors and Ergonomics Society’s Arnold M. Small President’s Distinguished Service Award. She is a fellow of the Human Factors and Ergonomics Society, the American Psychological Association, the Association for Psychological Science, and The International Ergonomics Association. Dr. Cooke was designated a National Associate of the National Research Council of the National Academies of Sciences, Engineering, and Medicine in 2016.
Dr. Cooke’s research interests include the study of individual and team cognition and its application to cyber and intelligence analysis, remotely-piloted aircraft systems, human-robot teaming, healthcare systems, and emergency response systems. Dr. Cooke specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition.
I am excited about both projects, as well as another one that is upcoming. I am involved in the synthetic teammate project, a large ongoing project started about 15 years ago with the Air Force Research Lab (AFRL; Chris Myers, Jerry Ball, and others), former postdocs Jamie Gorman (Georgia Tech) and Nathan McNeese (Clemson), and current postdoc Mustafa Demir. Sandia Research Corporation (Steve Shope and Paul Jorgenson) is also involved. It is exciting to be working with so many bright, energetic, and dedicated people. In this project, AFRL is developing a synthetic agent capable of serving as a full-fledged teammate that works with two human teammates to control a Remotely Piloted Aircraft System and take reconnaissance photos of ground targets. The team (including the synthetic pilot) interacts via text chat.
The USAF (United States Air Force) would like to eventually use synthetic agents as teammates for large-scale team training exercises. Ultimately, an individual should be able to have a team training experience over the internet without having to involve any other humans to serve as white forces for someone else's training. In addition, our laboratory is interested in learning about human-autonomy teaming, and in particular, the importance of coordination. In other studies we have found an interesting curvilinear relation between coordination stability and performance, wherein the best performance is associated with mid-level coordination stability (neither too rigid nor too unpredictable). This project is funded by the Office of Naval Research.
We are also conducting another project with Subbarao Kambhampati ("Rao") at ASU. In this project, our team informs the robot planning algorithms of Rao's team through a human dyad working in a Minecraft setting. One person is inside a Minecraft structure representing a collapsed building; the other has a limited view of the Minecraft environment but does have a map, now inaccurate because of the collapse. The two humans work together to identify and mark on the map the locations of victims. We are paying careful attention not only to the variables that affect the dyads' interactions, but also to features of communication that are tied to higher levels of performance. This project is also funded by the Office of Naval Research.
Finally, I am very excited to be directing a new center at ASU called the Center for Human, Artificial Intelligence, and Robot Teaming or CHART. I am working with Spring Berman, a swarm roboticist, to develop a testbed in which to conduct studies of driverless cars interacting on the road with human-driven cars. Dr. Berman has a large floor mat that depicts a roadway with small robots that obey traffic signals and can avoid colliding with each other. We are adding to that robots that are remotely controlled by humans as they look at imagery from the robot’s camera. In this testbed we are excited to test all kinds of scenarios involving human-autonomous vehicle interactions.
Too often automation is developed without consideration for the user. It is often thought that automation/autonomy will not require human intervention, but that is far from the truth. Humans are required to interact with autonomy at some level.
A lack of good Human Systems Integration from the beginning can cause unexpected consequences and brittleness in the system. The recent mistaken incoming missile message sent to Hawaii’s general public provides a great example of the potential effects of introducing a new interface with minimal understanding of the human task or preparation of the general public.
I am currently reading Four Futures by Peter Frase, which paints four different scenarios of humans and AI in the future. Two of the scenarios are dark, with robots in control, and two are more optimistic. I tend toward the optimistic scenarios, but realize that such an outcome would be the result of thoughtful application of AI, coupled with checks to keep nefarious actors at bay. Robots and AI have already taken on, and will continue to take on, jobs that are "dull, dirty, or dangerous" for humans. Humans need to retrain for other jobs (many that do not exist now), and teams of humans, AI, and robots need to be more thoughtfully composed based on the capabilities of each. I believe that this is the path toward a more positive outcome.
In our fourth post in a new series, we interview a leading social science researcher and leader in aviation psychology, Dr. Frank Durso. Frank was also my academic advisor (a decade ago) and it was a pleasure to chat with him about his thoughts about the impact and future of automation in aviation.
Francis T. (Frank) Durso is Professor and Chair of the School of Psychology at the Georgia Institute of Technology, where he directs the Cognitive Ergonomics Lab. Frank received his Ph.D. from SUNY Stony Brook and his B.S. from Carnegie Mellon University. While at the University of Oklahoma, he was a Regents Research recipient and founding director of their Human-Technology Interaction Center.
Frank is Past-President of the Human Factors and Ergonomics Society (HFES), the Southwestern Psychological Association, and the American Psychological Association's (APA) Division of Engineering Psychology, and founding President of the Oklahoma Psychological Society. He is a sitting member of the National Research Council's Board on Human Systems Integration. He has served as an advisor and panelist for the Transportation Research Board, the National Science Foundation, the APA, the Army Research Lab, and the Government Accountability Office.
Frank was associate editor of the Journal of Experimental Psychology: Applied, senior editor of Wiley's Handbook of Applied Cognition, co-editor of the APA Handbook of Human Systems Integration, and founding editor of the HFES monograph series entitled User's Guides to Methods in Human Factors and Ergonomics. He has served on several editorial boards, including Human Factors. He co-authored Stories of Modern Technology Failures and Cognitive Engineering Successes. He is a fellow of the HFES, the APA, the Association for Psychological Science, and the Psychonomic Society. He was awarded the Franklin V. Taylor award for outstanding achievements in applied experimental and engineering psychology from the APA.
His research has been funded by the Federal Aviation Administration, the National Science Foundation, and the Centers for Disease Control and Prevention, as well as various industries. Most of Frank's research has focused on cognition in dynamic environments, especially in transportation (primarily air traffic control) and healthcare. He is a co-developer of the Pathfinder scaling algorithm, the SPAM method of assessing situation awareness, and the Threat-Strategy Interview procedure. His current research interests focus on cognitive factors underlying situation understanding and strategy selection.
As you know, people, including big thinkers like Paul Fitts in 1951, have given thought to how to divide up a task between a machine and a person. While we people haven't changed much, our silicon helpers have. Quite a bit. They've progressed to the point that autonomy, and the issues that accompany it, are now both very real. (I'll get back to autonomy in your other question.) Short of just letting the machine do it, or just doing it yourself, the puzzle of how to divvy up a task remains, although the answer to the puzzle changes.
When I first started doing research for the FAA in the early 90s, there was talk of automation soon to be available that would detect conflicts and suggest ways to resolve them, leaving the controller to choose among recommendations. A deployed version of this was URET, an aid that the controller could use if he or she wanted. In one mode, controllers were given a list-like representation of flight data, much like the paper strips provided, or a graphic representation of flight paths. Either mode depicted conflicts up to 20 minutes out.
When I toured facilities back then, I remember finding a controller who was using the aid when a level red conflict appeared. I waited for him to make changes to resolve the conflict. And waited. He never did anything to either plane in conflict, and yet the conflict was resolved. When I asked him about it, he told me "Things will probably change before I need to worry about it." He gave me two insights that stayed with me. One was that in dynamic environments, things change, and the more dynamic the environment, the more likely it is that what you (or your electronic aid) expect and plan for are mere possibilities, not certainties. This influenced much of my subsequent thinking about situation awareness, what it was, and how to measure it.
I also realized that day that I would never understand anything unless I understood the strategies that people used. I didn’t do anything with that realization back then, thinking it would be like trying to nail jello to a wall. I’m fascinated by strategy research today, but then I was afraid the jello and my career in aviation human factors would both be a mess lying at my feet.
Our big worries with automation that does the thinking for us were things like: will controllers use the technology? Today we'd call that technology acceptance. Will the smart automation change the job from controlling air traffic to managing it? Of course, when people are put into a situation where they merely observe while the automation does the work, there's the risk that the human will not truly be engaged and situation awareness will suffer. That's a real concern, especially if you ever expect the human to take over the task again.
Now there are initiatives and technologies in the FAA that eliminate, or at least reduce, conflicts by optimizing the aircraft sequence, leaving the controller the task of getting the aircraft to fall in line with that optimization. Imagine that the computer optimizes the spacing and timing of planes landing at an airport. The planes are not, of course, naturally in this optimized pattern, so the computer presents "plane bubbles" to the controller. Those bubbles are optimized; all the controller has to do is get the plane into its bubble, and conflicts will be reduced and landings optimized. This notion of having the computer do the heavy cognitive lifting of solving conflicts and optimization, and then presenting those "targets" to the controller, can be used in a variety of circumstances. Now the "controller" is not even a manager, but in some ways the controller is being kept in the game and should therefore show pretty good situation awareness.
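To make the bubble idea concrete, here is a toy sketch (flight IDs, slot times, and tolerances are all invented; this is not how any FAA system is implemented): the optimizer assigns each arrival a landing slot, and a simple check tells the controller how far each plane is from its bubble.

```python
def bubble_status(eta_s: float, slot_s: float, half_width_s: float = 30.0) -> str:
    """Compare a plane's ETA against its optimized slot (times in seconds)."""
    error = eta_s - slot_s
    if abs(error) <= half_width_s:
        return "in bubble"
    if error > 0:
        return f"speed up by {error:.0f} s"   # arriving after the bubble
    return f"slow down by {-error:.0f} s"     # arriving before the bubble

# Hypothetical arrivals: (ETA, optimized slot), both in seconds from now.
arrivals = {"UAL123": (605.0, 600.0), "DAL456": (540.0, 660.0)}
for flight, (eta, slot) in arrivals.items():
    print(flight, bubble_status(eta, slot))
# UAL123 in bubble
# DAL456 slow down by 120 s
```

The heavy cognitive lifting (computing the slots) is the computer's; nudging each plane toward its bubble remains the controller's, which is what keeps the human in the game.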
Now I worry that situation awareness may be very local, tied to a specific, perhaps meaningless piece of the overall picture. This time, global SA may be the concern: controllers may have little or no understanding of the big picture of all those planes landing, even if they have good SA of getting a particular plane into its queue.
For some reason, I no longer worry about technology acceptance as I did in 1997. Twenty years later, I do worry that this new level of automation can take much of the agency away from the controller—so much of what makes the job interesting and fun. Retention of controllers might suffer and those that stay will be less satisfied with their work, which produces other job consequences.
As an end to this answer, I note that much has changed in the last quarter of a century, but we still seem to be following a rather static list of machines do this and people do that. Instead, I think the industry needs to adopt the adaptive allocation of tasks that human factors professionals have studied. The question is not really when the computer should sequence flights, but when that responsibility should be handed over to the human. Or when the computer, detecting a tired controller perhaps, should wrest responsibility for separation from him or her.
The National Academies of Sciences, Engineering, and Medicine do their operational work through seven programs governed by the rules of the National Research Council. One of these, the Division of Behavioral and Social Sciences and Education, contains the Board on Human Systems Integration, or BOHSI. Established by President Lincoln, the Academies are not a government agency; a consequence of that for the Boards is that financing comes through sponsors.
The original Committee on Human Factors was founded in 1980 by the Army, Navy, and Air Force. Since then, BOHSI has been sponsored by a myriad of agencies including NASA, NIOSH, the FAA, and Veterans Health. I'm proud to say APA Division 21 and the Human Factors and Ergonomics Society, two organizations I've led in the past, are also sponsors.
BOHSI's mandate is to provide an independent voice on the HSI issues that interest the nation. We provide theoretical and methodological perspectives on people-organization-technology-environment systems. The board itself currently comprises 16 members, including National Academy members, academics, business leaders, and industry professionals. They were invited from a usually (very) long list of nominations. A visit to the webpage will show the caliber of the members. http://sites.nationalacademies.org/DBASSE/BOHSI/Members/index.htm
The issues BOHSI is asked to address are myriad. Decision makers, leaders, and scholars from other disciplines are becoming increasingly aware that people, and how they interact within and with complex systems, are a critical factor that must be addressed if we are to solve today's societal challenges. We've looked at remotely controlled aviation systems, at self-escape from mining, and at how to establish safety cultures in academic labs, to mention a few.
BOHSI addresses these problems in a variety of ways. The most extensive efforts result in reports like those currently on the webpage: Integrating Social and Behavioral Science within the Weather Enterprise; Personnel Selection in the Pattern Evidence Domain of Forensic Science; and CMV Driver Fatigue, Long Term Health, and Highway Safety. These reports are generated by committees appointed by BOHSI. A member of the board or two often sits on these working committees, but the majority of the committee is made up of national experts on the specific topic, representing various scientific, policy, and operational perspectives. The hallmark of these reports is that they provide an independent assessment and recommendations for the sponsor and the nation.
As technology advances at an accelerating rate, real autonomy becomes a real and exciting possibility. The issues that accompany truly independent automated agents are exciting as well. I think there are a number of questions of interest, and there are lots of smart people looking into them. For example, there's the critical question of trust. Why did Luke trust R2-D2? (Did R2 trust Luke?) And technology acceptance continues to be with us: why will elderly folk allow a robot to assist with this task, but not that one?
But I think the biggest issue with autonomy is getting a handle on when responsibility, control, or both switch from one member of the technology-person pair to the other. How can the autonomous car hand over control to the driver? Will the driver have the SA to receive it? How does this handshaking occur if each system does not have an understanding of the state of the other? We don't really understand the answers to these questions for two humans, let alone for a human and an automaton.
There are indeed ways we can inform the human of the automation's state, but we can also inform the automaton of the human's state. Advances in machine learning allow the automaton to learn how the human prefers to interact with it. Advances in augmented cognition can allow us to feed physiological information about the operator to the automaton. If the car knew the driver was stressed (cortisol levels) or tired (eye closures), it might decide not to hand over control.
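As a toy illustration of that last idea (the sensor names, scales, and thresholds below are invented, not drawn from any production vehicle), the handover check might look something like this:

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    """Hypothetical operator readings; names, units, and limits are invented."""
    stress: float       # cortisol-based stress proxy, 0 (calm) to 1 (max)
    eye_closure: float  # fraction of time eyes are closed (PERCLOS-style)

def fit_to_take_over(state: DriverState,
                     stress_limit: float = 0.7,
                     drowsy_limit: float = 0.15) -> bool:
    """Hand control to the driver only if they appear fit to receive it."""
    if state.stress > stress_limit:
        return False  # too stressed: keep the automation engaged
    if state.eye_closure > drowsy_limit:
        return False  # too drowsy: keep the automation engaged
    return True

print(fit_to_take_over(DriverState(stress=0.2, eye_closure=0.05)))  # True
print(fit_to_take_over(DriverState(stress=0.3, eye_closure=0.40)))  # False
```

The point is not the particular thresholds but the direction of the information flow: the automaton is reasoning about the human's state before initiating the handshake.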
I should mention here that this kind of separation of responsibilities between machine and human is quite different than the static lists I discussed in my first answer regarding the FAA technology. There, the computer had certain tasks and the controller had others; here any particular task belongs to either particular agent, depending on the situation.
I think future work has to really investigate the system properties of the human and technology, and not (just) each alone.
Mr. Bruemmer is Founder and CEO of Adaptive Motion Group which provides Smart Mobility solutions based on accurate positioning and autonomous vehicles. Previously, Mr. Bruemmer co-founded 5D Robotics, supplying innovative solutions for a variety of automotive, industrial and military applications.
Mr. Bruemmer has led large scale robotics programs for the Army and Navy, the Department of Energy, and the Defense Advanced Research Projects Agency. He has patented robotic technologies for landmine detection, urban search and rescue, decontamination of radioactive environments, air and ground teaming, facility security and a variety of autonomous mapping solutions.
Mr. Bruemmer has authored over 60 publications and has been awarded 20 patents in robotics and positioning. He recently won the South by Southwest Pitch competition and is a recipient of the R&D 100 Award and the Stoel Reeves Innovation award. Mr. Bruemmer led robotics research at the Idaho National Lab for a diverse, multi-million dollar R&D portfolio. Between 1999 and 2000, Mr. Bruemmer served as a consultant to the Defense Advanced Research Projects Agency (DARPA), where he worked to coordinate development of autonomous robotics technologies across several offices and programs.
There seem to be waves of optimism about AI followed by disappointment as somewhat inflated goals meet the realities of trying to deploy robotics and AI. Machine learning has come a long way, but the growth has been linear, and I really do not feel that deep learning is a "fundamentally new" machine learning tool.
I think there is a large amount of marketing and spin especially in the autonomous driving arena. I have been sad to see that in the past several years, some of the new cadre of self-driving companies seem to have overlooked many of the hard lessons we learned in the military and energy sectors regarding the perils of “full autonomy” and the need for what I call “context sensitive shared control”.
Reliability continues to be the hard nut to crack, and I believe that for a significant shift in the reliability of overall automation, we need to focus more energy on positioning. Positioning is sometimes considered a "solved problem," as various programs and projects have offered lidar mapping, RTK GPS, and camera-based localization. These work in various constrained circumstances but often fail outside the bounds where they were intended to operate.
I think that even after the past twenty years of progress, we need a more flexible, resilient means of ensuring accurate positioning. I would also like to point out that machine learning and AI are not a cure-all. If they were, we wouldn't have the increasing death toll on our roads or the worsening congestion. When I look at AI I see a great deal of potential, but most of it is still unrealized. This is either cause for enthusiasm or pessimism, depending on your perspective.
Yes I do. I think that many are overlooking the real issue: that our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but also to system performance. Some applications and environments allow for a very high degree of autonomy. However, there are many other tasks and environments where we need to give up on the notion of fully removing the human from the driver's seat or the processing loop, and instead focus on the rich opportunity for context-sensitive shared control, where the human and machine work as teammates, balancing the task allocation as needed.
Part of the problem is that those who make the products want you to believe their system is perfect. It's an ego thing on the part of the developers and a marketing problem to boot. For the teamwork between human and robot to be effective, both human and machine need an accurate understanding of each other's limitations.
Quite frankly, the goal of many AI companies is to make you overlook these limits. So supposedly GPS works all the time, and we provide the user no sense of "confidence," or what in psychology we call a "feeling of knowing." This breeds a strange and unfortunate slew of problems, from getting horribly lost in urban jungles to getting lost in real ones.
If we were more honest about the limitations and we put more energy into communicating the need for help and more data then things could work a whole lot better. But we almost never design the system to acknowledge its own limitations.
The great thing about my emphasis on shared control is that I never need to base my business model or my technology on the idea of removing the human or eliminating the human labor.
Having said that I do of course believe that better AI and robotics means increased safety and efficiency which in turn can lead to reduced human labor. I think this is a good thing as long as it is coupled with a society that cares for the individual.
Corporations should not be permitted to act with impunity and I believe the role of government is to protect the right of every human to be prioritized over machines and over profits. This mindset has less to do with robotics and more to do with politics so I will leave off there. I do always try to emphasize that robotics should not ever be about the robots, but rather about the people they work with.
But as the next story shows, these AI tools are not advanced enough to replace human content moderators.
2017 seems to have been a watershed year for the use and application of AI and algorithms. This is part 1 of a two-part post highlighting the use (and possible regulation) of AI.
If you are scrambling to find last minute gifts, AI/machine learning is here to help! All the major retailers are now turning to AI to learn what you want. Big data about retail purchases are being fed into machine learning algorithms to learn things about you. Here are some examples. By the way, have you wondered, "what exactly is machine learning?" Then see the end of this post for an easily digestible video.
I love Sephora. As the article aptly states, "Sephora isn't your mother's makeup company; it's your modern tech company". I have personally tried Color IQ, their in-store program that scans faces to find the right shade of foundation and other products for different skin tones. Sephora has an amazing Beauty Insider program that provides it with a lot of rich data about its customers, and now the company is leveraging AI to let customers virtually try on makeup and spice up their online presence.
The science behind machine/deep-learning neural networks is quite interesting. For example, the discussion in the video about us not knowing exactly what is being learned (the hidden layers) is interesting to me. But you don't have time for that! Here is an easily understood video:
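And if the video whets your appetite, here is a minimal two-layer network you can run yourself (a toy sketch of ours, not the network from the video). It learns XOR, the classic problem that is unsolvable without a hidden layer, and the final print shows why "what was learned" is hard to articulate: the hidden activations solve the task without being individually meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    hidden = sigmoid(X @ W1 + b1)       # hidden-layer activations
    output = sigmoid(hidden @ W2 + b2)  # network prediction
    # Backpropagation: push the error back through both layers.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(0, keepdims=True)

print(output.round(2))  # approaches [[0], [1], [1], [0]]
print(hidden.round(2))  # the learned hidden features resist easy interpretation
```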
Just a short note to let our dear readers know that posting volume will be a bit lighter as we travel for the holidays. But here is what's coming up!
Thanks for reading! Tell your friends!!
The Society for the Prevention of Cruelty to Animals (SPCA) based in San Francisco has been asked to halt the use of their security robot, which they had started using after experiencing a lot of car break-ins, theft, and vandalism. SPCA also reported that they have seen a decline in the crimes after adopting the robot. However, some tagged the robot as the "anti-homeless" robot, whose aim was to dislodge homeless campers and whose appearance was considered creepy.
The Global Entrepreneurship Summit last year was inaugurated with Modi and Trump pressing a button on a robot developed by a startup based in Bangalore, India.
Variations of the robot are envisioned for customer assistance, where smart conversations are projected to increase sales, and for roles as a party photographer, DJ, and live tweeter.
The fashion industry is rife with ethical issues, from the high end (haute couture, impossible body standards for models) to the low end (fast fashion, manufacturing practices). Can robots solve these issues?
This article mainly discusses how fashion is embracing the look of robots. But could robots soon replace fashion models?
Not far behind is Japan, where a doll with human-like motion is co-existing with humans, is active in the fashion scene, and is being idolized.
The number of prominent celebrities and politicians being taken down for sexual harassment really seems to represent a major change in how society views sexual harassment. No longer whispered or swept under the rug, harassment is being called-out and harassers are being held accountable for their words and actions.
So, if AI will soon be our collaborators, partners, and teammates, shouldn't they also be given the same treatment? This story in VentureBeat talks about a campaign by Randy Painter to consider how voice assistants behave when harassed:
I've never harassed Siri so I wasn't aware of the responses she gives when one attempts to harass her:
In our interview last week with Dr. Julie Carpenter, she addressed this somewhat:
This is fascinating because there is existing and ongoing research examining how humans respond and behave with AI/autonomy that exhibits different levels of politeness. For example, autonomy that was rude, impatient, and intrusive was considered less trustworthy by human operators. If humans expect autonomy to have a certain etiquette, isn't it fair to expect at least basic decency from humans toward autonomy?
The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.
Dr. Mica Endsley is President of SA Technologies, a cognitive engineering firm specializing in the analysis, design, measurement and training of situation awareness in advanced systems, including the next generation of systems for aviation, air traffic control, health care, power grid operations, transportation, military operations, homeland security, and cyber.
From 2013 to 2015, she served as Chief Scientist of the U.S. Air Force, reporting to the Chief of Staff and Secretary of the Air Force, providing guidance and direction on research and development to support Air Force future operations and providing assessments on a wide range of scientific and technical issues affecting the Air Force mission.
She has also held the position of Visiting Associate Professor at MIT in the Department of Aeronautics and Astronautics and Associate Professor of Industrial Engineering at Texas Tech University. Dr. Endsley received a Ph.D. in Industrial and Systems Engineering from the University of Southern California.
Dr. Endsley is a recognized world leader in the design, development and evaluation of systems to support human situation awareness (SA) and decision-making. She is the author of over 200 scientific articles and reports on situation awareness and decision-making, automation, cognitive engineering, and human system integration. She is co-author of Analysis and Measurement of Situation Awareness and Designing for Situation Awareness. Dr. Endsley received the Human Factors and Ergonomics Society Jack Kraft Innovator Award for her work in situation awareness.
She is a fellow in the Human Factors and Ergonomics Society, its Past-President, was co-founder of the Cognitive Engineering and Decision Making Technical Group of HFES, and served on its Executive Council. Dr. Endsley has received numerous awards for teaching and research, is a Certified Professional Ergonomist and a Registered Professional Engineer. She is the founder and former Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making and serves on the editorial board for three major journals.
Autonomous systems are being developed or are under consideration for a wide range of operational missions. This includes:
Many common challenges exist for people to work in collaboration with these autonomous systems across all of these future applications. These include:
Challenges occur when people working with automation develop a level of trust that is inappropriately calibrated to the reliability and functionality of the system in various circumstances. In order for people to operate effectively with autonomous systems, they will need to be able to determine how much to trust the autonomy to perform its tasks.
This trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. For this, people need to develop informed trust – an accurate assessment of when and how much autonomy should be employed, and when to intervene.
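To make "situationally determined" concrete, here is a minimal sketch of one way informed trust could be tracked (our illustration, with invented context labels; this is not a method from Dr. Endsley's work): tally the autonomy's successes and failures separately per operating context, so the same system can earn high trust on a clear highway and low trust on a dark urban street.

```python
from collections import defaultdict

class SituatedTrust:
    """Track autonomy reliability per situation, not just overall."""

    def __init__(self):
        self.successes = defaultdict(int)
        self.failures = defaultdict(int)

    def record(self, context: str, succeeded: bool) -> None:
        if succeeded:
            self.successes[context] += 1
        else:
            self.failures[context] += 1

    def expected_reliability(self, context: str) -> float:
        """Laplace-smoothed success rate for one context."""
        s, f = self.successes[context], self.failures[context]
        return (s + 1) / (s + f + 2)

trust = SituatedTrust()
for _ in range(95):
    trust.record("highway, clear weather", True)
trust.record("highway, clear weather", False)
for _ in range(4):
    trust.record("urban, night", False)

print(trust.expected_reliability("highway, clear weather"))  # ~0.98
print(trust.expected_reliability("urban, night"))            # ~0.17
```

A single overall reliability number would average these contexts together and mislead the operator in both of them; keeping the estimates situated is what supports deciding when to rely on the autonomy and when to intervene.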
Autonomy is unlikely to work perfectly for all functions and operations in the foreseeable future, and human interaction with autonomy will continue to be needed at some level. Together, these factors create the need for a new approach to the design of autonomous systems, one that allows them to serve as effective teammates for the people who must depend on them to do their jobs.
The future with autonomous systems may be good, bad, or very ugly, depending on how successful we are in designing and implementing effective human-autonomy collaboration and coordination.
In the bad scenario, if we continue to develop autonomous systems that are brittle, and that fail to provide the people who must work with them the situation awareness needed to be effective in their roles, then the true advantages of both people and autonomy will be compromised.
The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a mistaken belief in their superiority. Nothing in the past 40 years of automation research has justified such an action, and such a move would be truly disastrous in the long run.
In a successful vision of the future, autonomous systems will be designed to serve as part of a collaborative team with people. Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between people and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.
People will be able to make informed choices about where and when to invoke autonomy based on considerations of trust, the ability to verify its operations, the level of risk and risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the human.
In certain limited cases, the system may be allowed to take over automatically from the human, when timelines are very short, for example, or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems is a fundamental requirement, in keeping with Department of Defense directives.
The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the human and the autonomy will be critical. Shared situation awareness is needed to ensure that the autonomy and the human operator are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.
Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the human and the autonomy.
This new paradigm is a significant departure from the past in that it will directly support high levels of shared situation awareness between human operators and autonomous systems, creating situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for mission success. By focusing on human-autonomy teaming, we can create successful systems that get the best benefits of autonomous software along with the innovation of empowered operators.