Maggie Jackson: Technology, distraction, digital health, and the future

About Maggie Jackson

Photo credit: Karen Smul

Maggie Jackson is an award-winning author and journalist known for her writings on technology’s impact on humanity. Her acclaimed book Distracted: Reclaiming Our Focus in a World of Lost Attention was compared by Fast Company magazine to Silent Spring for its prescient warnings of a looming crisis in attention. The book, with a foreword by Bill McKibben, will be published in a new updated edition in September.

Jackson’s articles have appeared in The New York Times, The Wall Street Journal, Los Angeles Times, and on National Public Radio, among many other outlets, and her work and comments have been featured in media worldwide. Her essays appear in numerous anthologies, including The State of the American Mind: Sixteen Leading Critics on the New Anti-Intellectualism (Templeton, 2015) and The Digital Divide (Penguin, 2010).

A former Boston Globe contributing columnist, Jackson is the recipient of Media Awards from the Work-Life Council of the Conference Board; the Massachusetts Psychological Association; and the Women’s Press Club of New York. She was a finalist for the Hillman Prize, one of journalism’s highest honors for social justice reporting, and has served as a Visiting Fellow at the Bard Graduate Center, an affiliate of the Institute for the Future in Palo Alto, and a University of Maryland Journalism Fellow in Child and Family Policy. A graduate of Yale University and the London School of Economics with highest honors, Jackson lives with her family in New York and Rhode Island.


How can technology facilitate a healthy work-life balance? 

I believe that the crucial question today is improving the balance between digital and non-digital worlds

Over the last 20 years, technology has radically changed the human experience of time and space. Distance no longer matters much, nor duration, as devices allow us to fling our bodies and thoughts around the globe near-instantly. While on a business trip, a parent can Skype a bedtime story with a child at home. The boss can reach a worker who’s hiking on a remote mountaintop. Technology has broken down cultural and physical boundaries and walls – making home, work, and relationships portable. That’s old news now, and yet we’re still coming to grips with the deep impact of such changes.

For instance, it’s becoming more apparent that the anywhere-anytime culture isn’t simply a matter of carrying our work or home lives around with us and attending to them as we wish. It’s not that simple by far. First, today’s devices are designed to be insistent, intrusive systems of delivery, so any single object of our focus – an email, a text, a news alert – is in competition with others at every minute. We now inhabit spaces of overlapping, often-conflicting commitments and so have trouble choosing the nature and pace of our focus. 

The overall result, I believe, is a life of continual negotiation of roles and attentional priorities. Constant checking behavior (polls suggest Americans check their phones on average up to 150 times a day) is a visible symptom of the need to rewrite work-life balance dozens of times a day. The “fear of missing out” that partly drives always-on connectivity also is a symptom of the necessity of continually renegotiating the fabric of life on- and off-line. 

Because this trend toward boundary-less living is so tech-driven, I believe that the crucial question today is improving the balance between digital and non-digital worlds. After that, work-life balance will follow. 

We need to save time for uninterrupted social presence, the kind that nurtures deeper relationships. We urgently need space in our lives where we are not mechanically poked, prodded and managed, i.e., when we are in touch with and able to manage our inner lives. (Even a silent phone in “off” mode undercuts both focus and cognitive ability, according to research by Adrian Ward at the University of Texas at Austin.)

One solution would be to think more deliberately about boundaries in all parts of our life, but especially in the digital sphere. Too often lines of division are seen as a confinement, a kind of archaic Industrial Age habit. But boundaries demarcate; think of a job description, a child’s bedtime, or the invention of the weekend, a ritual that boosts well-being even among the jobless. Boundaries are systems of prioritization, safety zones, structures for depth, and crucial tools for providing structure in a digital age. A family that turns off its cell phones at dinner is creating opportunities for the kind of in-depth bonding that rarely is forged online.

Technology can help facilitate creative boundary-making – think of the new Apple and Google product designs that prompt offline time. But our devices cannot do the work of inventing and managing the boundaries that are crucial for human flourishing. 


Can you tell us about your new book?

My new book will draw from my research into how technology is changing our ideas of what it means to “know” something and what it means to be smart

I have a couple of book projects on the front burner. My most recent book, Distracted: Reclaiming Our Focus in a World of Lost Attention, explores the fragmentation of focus and the science of attention in the digital age. One of the first books to warn of our current crisis of inattention, it’s been compared by Fast Company magazine to Rachel Carson’s Silent Spring, and will be published in a new updated edition in September. 

After I finished that book, I realized that attention, as crucial a human faculty as it is, is nevertheless a vehicle, a means to whatever goals we are pursuing. And I began to see that if we have a moment’s focus, the crucial next stepping stone to human flourishing is to be able to think well, especially in a digital age. Those musings have led me on a multi-year journey into the nature of deliberation and contemplation, and in particular to the realization that uncertainty is the overlooked gateway or keystone to good thinking in an age of snap judgement. 


We think of uncertainty as something to avoid, particularly in an age that quite narrowly defines productivity and efficiency and good thinking as quick, automatic, machine-like, neat, packaged, and outcome-oriented. Of course humans need to pursue resolution, yet the uncertainty that we scorn is a key trigger to deep thought and itself a space of possibilities. Without giving uncertainty its due, humans don’t have choices. When we open ourselves to speculation or a new point of view, we create a space where deeper thinking can unfold. 

My new book will draw from my research into how technology is changing our ideas of what it means to “know” something and what it means to be smart. As well, I am drawing from new research on the upsides of uncertainty in numerous domains, including medicine, business, education, philosophy and of course psychology/cognitive science. It’s even a topic of conversation and interest in the HCI world, Rich Pak and others have told me. 

I believe that today more and more people are retreating politically, psychologically, and culturally into narrow-mindedness, but I am heartened by the possibility that we can envision uncertainty as a new language for critical thinking.


What does the future of human relationships with technology look like: good, bad, or ugly?

The essential question is: will our technologies help us flourish? The potential – the wondrous abundance, the speed of delivery, the possibility for augmenting the human or inspiring new art forms – is certainly there. But I would argue that at the moment we aren’t for the most part using these tools wisely, mostly because we aren’t doing enough to understand technology’s costs, benefits, and implications.

I’ve been thinking a lot about one of technology’s main characteristics: instantaneity. When information is instant, answers begin to seem so, too. After a brief dose of online searching, people become significantly less willing to struggle with complex problems; their “need for cognition” drops even as they begin to overestimate their ability to know. (The findings echo the well-documented “automation effect,” in which humans stop trying to get better at their jobs when working closely with machines, such as automated cockpits.) In other experiments, people on average ranked themselves far better at locating information than at thinking through a problem themselves.

Overall, the instantaneity that is so commonplace today may shift our ideas about what human cognition can be. I see signs that people have less faith in their own mental capacities, as well as less desire to do the hard work of deliberation. Their faith increasingly lies with technology instead. These trends will affect a broad range of future activities: whether people can manage a driverless car gone awry, or even think it’s their role to do so; whether they still recognize the value of “inefficient” cognitive states of mind such as daydreaming; and whether they have the tenacity to push beyond a surface understanding of a problem on their own. Socially, similar risks are raised by instant access to relationships – whether to a friend on social media or to a companion robot that’s always beside a child or elder. Suddenly the awkwardness of depth need no longer trouble us as humans!

These are the kinds of questions that we urgently need to be asking across society in order to harness technology’s powers well. We need to ask better questions about the unintended consequences and the costs/benefits of instantaneity, or of gaining knowledge from essentially template-based formats. We need to be vigilant in understanding how humans may be changed when technology becomes their nursemaid, coach, teacher, companion.


Recently, an interview with the singer Taylor Goldsmith of the LA rock band Dawes caught my eye. The theme of the band’s latest album, Passwords, is hacking, surveillance and espionage. “I recognize what modern technology serves,” he told the New York Times. “I’m just saying, ‘let’s have more of a conversation about it.’” 

Well, there is a growing global conversation about technology’s effects on humanity, as well there should be. But we need to do far more to truly understand and so better shape our relations with technology. That should mean far more robust schooling of children in information literacy, the market-driven nature of the Net, and critical thinking skills in general. That should mean training developers to become more accountable to users, perhaps by trying to visualize more completely the unintended consequences of their creations. It certainly must mean becoming more measured in our own personal attitudes; we all too often still gravitate to exclusively dystopian or utopian viewpoints on technology.

Will we have good, bad, or ugly future relations to technology? At best, we’ll have all of the above. But at the moment, I believe that we are allowing technology in its present forms to do far more to diminish human capabilities than to augment them. By better understanding technology, we can avert this frightening scenario.

Dr. Nancy Cooke: Human-Autonomy Teaming, Synthetic Teammates, and the Future

In our fifth post in a new series, we interview a thought leader in human-systems engineering, Dr. Nancy Cooke.

About Dr. Nancy Cooke


Nancy J. Cooke is a professor of Human Systems Engineering at Arizona State University and is Science Director of the Cognitive Engineering Research Institute in Mesa, AZ. She also directs ASU’s Center for Human, Artificial Intelligence, and Robot Teaming and the Advanced Distributed Learning Partnership Lab.

She received her PhD in Cognitive Psychology from New Mexico State University in 1987.  Dr. Cooke is currently Past President of the Human Factors and Ergonomics Society, chaired the National Academies Board on Human Systems Integration from 2012-2016, and served on the US Air Force Scientific Advisory Board from 2008-2012.  She is a member of the National Academies of Sciences, Engineering, and Medicine Committees on High-Performance Bolting Technology for Offshore Oil and Natural Gas Operations and the Decadal Survey of Social and Behavioral Sciences and Applications to National Security.

In 2014 Dr. Cooke received the Human Factors and Ergonomics Society’s Arnold M. Small President’s Distinguished Service Award. She is a fellow of the Human Factors and Ergonomics Society, the American Psychological Association, the Association for Psychological Science, and The International Ergonomics Association.  Dr. Cooke was designated a National Associate of the National Research Council of the National Academies of Sciences, Engineering, and Medicine in 2016.

Dr. Cooke’s research interests include the study of individual and team cognition and its application to cyber and intelligence analysis, remotely-piloted aircraft systems, human-robot teaming, healthcare systems, and emergency response systems. Dr. Cooke specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition.


Tell us about your current ongoing projects, especially the synthetic teammate and human-autonomous vehicle teaming projects.

I am excited about both projects, as well as another one that is upcoming.  I am involved in the synthetic teammate project, a large ongoing project started about 15 years ago with the Air Force Research Lab (AFRL; Chris Myers, Jerry Ball, and others), former postdocs Jamie Gorman (Georgia Tech) and Nathan McNeese (Clemson), and current postdoc Mustafa Demir.  Sandia Research Corporation (Steve Shope and Paul Jorgenson) is also involved.  It is exciting to be working with so many bright, energetic, and dedicated people.  In this project AFRL is developing a synthetic agent capable of serving as a full-fledged teammate that works with two human teammates to control a Remotely Piloted Aircraft System and take reconnaissance photos of ground targets.  The team (including the synthetic pilot) interacts via text chat.

The USAF (United States Air Force) would like to eventually use synthetic agents as teammates for large-scale team training exercises.  Ultimately an individual should be able to have a team training experience over the internet without having to involve other humans to serve as white forces for someone else’s training.  In addition, our laboratory is interested in learning about human-autonomy teaming and, in particular, the importance of coordination.  In other studies we have found an interesting curvilinear relation between coordination stability and performance, wherein the best performance is associated with mid-level coordination stability (neither too rigid nor too unpredictable).  This project is funded by the Office of Naval Research.
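To make the shape of that finding concrete, here is a minimal, purely illustrative Python sketch. The data, the 0-to-1 stability scale, and the quadratic model are hypothetical stand-ins rather than the lab’s actual metrics or results; the point is only to show how an inverted-U relation between coordination stability and team performance might be fit and its peak located.

```python
# Illustrative sketch (not the lab's actual data or metric): a quadratic fit
# showing how team performance could peak at mid-level coordination stability.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coordination-stability scores in [0, 1]:
# 0 = completely unpredictable coordination, 1 = completely rigid.
stability = rng.uniform(0.0, 1.0, size=200)

# Assume an inverted-U relationship with noise: performance is best near 0.5.
performance = 1.0 - 4.0 * (stability - 0.5) ** 2 + rng.normal(0.0, 0.05, size=200)

# Fit a quadratic and locate the stability level that maximizes predicted performance.
a, b, c = np.polyfit(stability, performance, deg=2)
peak = -b / (2 * a)  # vertex of the fitted parabola
print(f"Fitted curve: {a:.2f}*s^2 + {b:.2f}*s + {c:.2f}; peak near stability = {peak:.2f}")
```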

Image source: https://mojang.com/2016/12/shouldnt-you-be-on-minecraftnet-right-now/

We are also conducting another project with Subbarao Kambhampati (“Rao”) at ASU.  In this project our team informs the robot planning algorithms developed by Rao’s team using data from a human dyad working in a Minecraft setting.  One person is inside a Minecraft structure representing a collapsed building; the other has only a limited view of the Minecraft environment, but does have a map that is now inaccurate because of the collapse.  The two humans work together to identify and mark on the map the locations of victims.  We are paying careful attention not only to the variables that affect the dyads’ interactions, but also to features of communication that are tied to higher levels of performance.  This project is also funded by the Office of Naval Research.

Finally, I am very excited to be directing a new center at ASU called the Center for Human, Artificial Intelligence, and Robot Teaming, or CHART.  I am working with Spring Berman, a swarm roboticist, to develop a testbed in which to conduct studies of driverless cars interacting on the road with human-driven cars.  Dr. Berman has a large floor mat that depicts a roadway, with small robots that obey traffic signals and can avoid colliding with each other.  We are adding to that testbed robots that are remotely controlled by humans as they look at imagery from each robot’s camera.  In this testbed we are excited to test all kinds of scenarios involving human-autonomous vehicle interactions.


You have co-authored the book "Stories of Modern Technology Failures and Cognitive Engineering Successes" with Dr. Frank Durso. What are some of the key points on human-autonomy interactions that you would like to share with our readers?

Too often automation is developed without consideration for the user.  It is often thought that automation/autonomy will not require human intervention, but that is far from the truth.  Humans are required to interact with autonomy at some level.  

A lack of good Human Systems Integration from the beginning can cause unexpected consequences and brittleness in the system.  The recent mistaken incoming missile message sent to Hawaii’s general public provides a great example of the potential effects of introducing a new interface with minimal understanding of the human task or preparation of the general public.


Can you paint us a scenario of humans and synthetic teammates working together in 50 years?

I am currently reading Four Futures by Peter Frase, which paints four different scenarios of humans and AI in the future.  Two of the scenarios are dark, with robots in control, and two are more optimistic.  I tend toward the optimistic scenarios, but realize that such an outcome would be the result of thoughtful application of AI, coupled with checks to keep nefarious actors at bay.  Robots and AI have already taken on, and will continue to take on, jobs that are “dull, dirty, or dangerous” for humans.  Humans need to retrain for other jobs (many of which do not exist yet), and teams of humans, AI, and robots need to be more thoughtfully composed based on the capabilities of each.  I believe that this is the path toward a more positive outcome.

Interview: Dr. Frank Durso

In our fourth post in a new series, we interview a leading social science researcher and leader in aviation psychology, Dr. Frank Durso. Frank was also my academic advisor (a decade ago) and it was a pleasure to chat with him about his thoughts about the impact and future of automation in aviation.

About Dr. Frank Durso

Francis T. (Frank) Durso is Professor and Chair of the School of Psychology at the Georgia Institute of Technology where he directs the Cognitive Ergonomics Lab.  Frank received his Ph.D. from SUNY at Stony Brook and his B.S. from Carnegie Mellon University.  While at the University of Oklahoma, he was a Regents Research recipient and founding director of their Human-Technology Interaction Center.


Frank is Past-President of the Human Factors and Ergonomics Society (HFES), the Southwestern Psychological Association, the American Psychological Association’s (APA) Division of Engineering Psychology, and founding President of the Oklahoma Psychological Society.  He is a sitting member of the National Research Council’s Board of Human Systems Integration.  He has served as advisor and panelist for the Transportation Research Board, the National Science Foundation, the APA, the Army Research Lab, and the Government Accountability Office. 

Frank was associate editor of the Journal of Experimental Psychology: Applied, senior editor of Wiley’s Handbook of Applied Cognition, co-editor of the APA Handbook of Human Systems Integration, and founding editor of the HFES monograph series entitled User’s Guides to Methods in Human Factors and Ergonomics.  He has served on several editorial boards including Human Factors.  He co-authored Stories of Modern Technology Failures and Cognitive Engineering Successes.  He is a fellow of the HFES, APA, the Association for Psychological Science, and the Psychonomic Society.  He was awarded the Franklin V. Taylor award for outstanding achievements in applied experimental and engineering psychology from APA.

His research has been funded by the Federal Aviation Administration, the National Science Foundation, and the Centers for Disease Control, as well as various industries.  Most of Frank’s research has focused on cognition in dynamic environments, especially in transportation (primarily air traffic control) and healthcare.  He is a co-developer of the Pathfinder scaling algorithm, the SPAM method of assessing situation awareness, and the Threat-Strategy Interview procedure. His current research interests focus on cognitive factors underlying situation understanding and strategy selection.


For part of your career, you have been involved in air traffic control and have seen the use of automation evolve from paper-based flight strips to NextGen automation.  In your opinion, what is the biggest upcoming automation-related challenge in this domain?

As you know, people, including big thinkers like Paul Fitts in 1951, have given thought to how to divide up a task between a machine and a person.  While we people haven’t changed much, our silicon helpers have.  Quite a bit.  They’ve progressed to the point that autonomy, and the issues that accompany it, are now very real.  (I’ll get back to autonomy in your other question.)  Short of just letting the machine do it, or just doing it yourself, the puzzle of how to divvy up a task remains, although the answer to the puzzle changes.

When I first started doing research for the FAA in the early 90s, there was talk of automation soon to be available that would detect conflicts and suggest ways to resolve them, leaving the controller to choose among recommendations.  A deployed version of this was URET, an aid that the controller could use if he or she wanted.  In one mode, controllers were given a list-like representation of flight data, much like the paper strips, or a graphic representation of flight paths.  Either mode depicted conflicts up to 20 minutes out.

I do worry that this new level of automation can take much of the agency away from the controller

When I toured facilities back then, I remember finding a controller who was using the aid when a level-red conflict appeared.  I waited for him to make changes to resolve the conflict.  And waited.  He never did anything to either plane in conflict, and yet the conflict was resolved.  When I asked him about it, he told me, “Things will probably change before I need to worry about it.”  He gave me two insights that stayed with me.  One was that in dynamic environments things change, and the more dynamic the environment, the more likely that what you (or your electronic aid) expect and plan for are mere possibilities, not certainties.  This influenced much of my subsequent thinking about situation awareness, what it was, and how to measure it.

Next Generation Air Transportation System (NextGen): https://www.nasa.gov/topics/aeronautics/features/8q_nextgen.html

I also realized that day that I would never understand anything unless I understood the strategies that people used.  I didn’t do anything with that realization back then, thinking it would be like trying to nail jello to a wall.  I’m fascinated by strategy research today, but then I was afraid the jello and my career in aviation human factors would both be a mess lying at my feet.

Our big worries with automation that does the thinking for us were things like: will controllers use the technology?  Today we’d call that technology acceptance.  Will the smart automation change the job from controlling air traffic to managing it?  Of course, when people are put into a situation where they merely observe while the automation does the work, there’s the risk that the human will not truly be engaged and situation awareness will suffer.  That’s a real concern, especially if you ever expect the human to take over the task again.

Now there are initiatives and technologies in the FAA that eliminate, or at least reduce, conflicts by optimizing the aircraft sequence and leave to the controller the task of getting the aircraft to fall in line with that optimization.  Imagine that the computer optimizes the spacing and timing of planes landing at an airport.  The planes are not, of course, naturally in this optimized pattern, so the computer presents “plane bubbles” to the controller.  Those plane bubbles are optimized.  All the controller has to do is get the plane into that bubble, and conflicts will be reduced and landings will be optimized.  This notion of having the computer do the heavy cognitive lifting of solving conflict and optimization and then presenting those “targets” to the controller can be used in a variety of circumstances.  Now the “controller” is not even a manager, but in some ways the controller is being kept in the game and should therefore show pretty good situation awareness.

Now I worry that situation awareness will be very local – tied to a specific, perhaps meaningless piece of the overall picture.  This time global SA levels may be a concern; controllers may have little or no understanding of the big picture of all those planes landing, even if they have good SA of getting a particular plane into the queue.

For some reason, I no longer worry about technology acceptance as I did in 1997.  Twenty years later, I do worry that this new level of automation can take much of the agency away from the controller – so much of what makes the job interesting and fun.  Retention of controllers might suffer, and those who stay will be less satisfied with their work, which produces other job consequences.

As an end to this answer, I note that much has changed in the last quarter of a century, but we still seem to be following a rather static list of machines do this and people do that.  Instead, I think the industry needs to adopt the adaptive allocation of tasks that human factors professionals have studied.  The question is not really when should the computer sequence flights, but when should that responsibility be handed over to the human.  Or when should the computer, detecting a tired controller perhaps, wrest responsibility for separation from him or her.


You are on the Board on Human-Systems Integration for the National Academies of Sciences and Engineering. What is the purpose of the Board and what is your role?

how they interact within and with complex systems... must be addressed if we are to solve today’s societal challenges

The National Academies of Sciences, Engineering, and Medicine do their operational work through seven programs governed by the rules of the National Research Council.  One of the programs, the Division of Behavioral and Social Sciences and Education, contains the Board on Human-Systems Integration, or BOHSI.  Established by President Lincoln, the Academies are not a government agency.  A consequence of that for the boards is that financing comes through sponsors.

The original Committee on Human Factors was founded in 1980 by the Army, Navy, and Air Force.  Since then, BOHSI has been sponsored by a myriad of agencies including NASA, NIOSH, the FAA, and Veterans Health.  I’m proud to say APA Division 21 and the Human Factors and Ergonomics Society, two organizations I’ve led in the past, are also sponsors.

BOHSI’s mandate is to provide an independent voice on the HSI issues that interest the nation.  We provide theoretical and methodological perspectives on people-organization-technology-environment systems.  The board itself currently comprises 16 members, including National Academy members, academics, business leaders, and industry professionals.  They were invited from a usually (very) long list of nominations.  A visit to the webpage will show the caliber of the members: http://sites.nationalacademies.org/DBASSE/BOHSI/Members/index.htm

The issues BOHSI is asked to address are myriad.  Decision makers, leaders, and scholars from other disciplines are becoming increasingly aware that people, and how they interact within and with complex systems, are a critical factor that must be addressed if we are to solve today’s societal challenges.  We’ve looked at remotely controlled aviation systems, at self-escape from mining, and at how to establish safety cultures in academic labs, to mention a few.

BOHSI addresses these problems in a variety of ways.  The most extensive efforts result in reports like those currently on the webpage: Integrating Social and Behavioral Science within the Weather Enterprise; Personnel Selection in the Pattern Evidence Domain of Forensic Science; and CMV Driver Fatigue, Long Term Health, and Highway Safety.  These reports are generated by committees appointed by BOHSI.  One or two board members often sit on these working committees, but the majority of the committee is made up of national experts on the specific topic, representing various scientific, policy, and operational perspectives.  The hallmark of these reports is that they provide an independent assessment and recommendations for the sponsor and the nation.


As a social scientist studying autonomy, what do you see as the biggest unresolved issue?

As technology advances at an accelerating rate, real autonomy becomes a real and exciting possibility.  The issues that accompany truly independent automated agents are exciting as well.  I think there are a number of questions of interest, and there are lots of smart people looking into them.  For example, there’s the critical question of trust.  Why did Luke trust R2-D2?  (Did R2 trust Luke?)  And technology acceptance continues to be with us: why will elderly folk allow a robot to assist with this task, but not that one?

The issues that accompany truly independent automated agents are exciting as well....Why did Luke trust R2-D2?  (Did R2 trust Luke?)

But I think the biggest issue with autonomy is getting a handle on when responsibility, control, or both switch from one member of the technology-person pair to the other.  How can the autonomous car hand over control to the driver?  Will the driver have the SA to receive it?  How does this handshaking occur if each system does not have an understanding of the state of the other?  We don’t really understand the answers to these questions between two humans, let alone between a human and an automaton.

There are indeed ways we can inform the human of the automation’s state, but we can also inform the automaton of the human’s state.  Advances in machine learning allow the automaton to learn how the human prefers to interact with it.  Advances in augmented cognition can allow us to feed physiological information about the operator to the automaton.  If the car knew the driver was stressed (cortisol levels) or tired (eye closures), it might decide not to hand over control.

I should mention here that this kind of separation of responsibilities between machine and human is quite different from the static lists I discussed in my first answer regarding the FAA technology.  There, the computer had certain tasks and the controller had others; here, any particular task may belong to either agent, depending on the situation.
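As a purely hypothetical sketch of that situation-dependent allocation (the signal names, thresholds, and decision rule below are invented for illustration, not drawn from any deployed system or from Dr. Durso’s research), the logic might look something like this in Python:

```python
# A minimal sketch of adaptive allocation: control is handed to whichever agent
# (human or automation) is judged better placed in the current situation.
# All signals and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class OperatorState:
    eye_closure_ratio: float   # fatigue proxy, 0..1 (higher = more tired)
    stress_index: float        # normalized physiological stress, 0..1

@dataclass
class SituationState:
    automation_confidence: float  # automation's self-assessed confidence, 0..1
    time_to_hazard_s: float       # seconds until the situation becomes critical

def allocate_control(op: OperatorState, sit: SituationState) -> str:
    """Return 'human' or 'automation' for the current moment; re-run continuously."""
    operator_fit = op.eye_closure_ratio < 0.3 and op.stress_index < 0.7
    handover_feasible = sit.time_to_hazard_s > 8.0  # enough time to rebuild SA
    if sit.automation_confidence < 0.5 and operator_fit and handover_feasible:
        return "human"        # automation unsure, human is alert and has time
    if not operator_fit:
        return "automation"   # tired or stressed operator: keep/return control
    return "automation" if sit.automation_confidence >= 0.5 else "human"

print(allocate_control(OperatorState(0.1, 0.2), SituationState(0.3, 20.0)))  # -> human
```

The point of the sketch is simply that the allocation function is re-evaluated continuously against both agents’ states, rather than fixed once in a static list.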

I think future work has to really investigate the system properties of the human and technology, and not (just) each alone.

David Bruemmer: Thoughts on the Future of AI & Robotics

In our third post in a new series, we interview a leader in robotics technology, Mr. David Bruemmer. We talk to David about the future of autonomous robotic technologies and the ethics associated with automation use. 

About David Bruemmer


Mr. Bruemmer is Founder and CEO of Adaptive Motion Group which provides Smart Mobility solutions based on accurate positioning and autonomous vehicles. Previously, Mr. Bruemmer co-founded 5D Robotics, supplying innovative solutions for a variety of automotive, industrial and military applications.

Mr. Bruemmer has led large scale robotics programs for the Army and Navy, the Department of Energy, and the Defense Advanced Research Projects Agency. He has patented robotic technologies for landmine detection, urban search and rescue, decontamination of radioactive environments, air and ground teaming, facility security and a variety of autonomous mapping solutions.

Mr. Bruemmer has authored over 60 publications and has been awarded 20 patents in robotics and positioning. He recently won the South by Southwest Pitch competition and is a recipient of the R&D 100 Award and the Stoel Reeves Innovation award. Mr. Bruemmer led robotics research at the Idaho National Lab for a diverse, multi-million dollar R&D portfolio. Between 1999 and 2000, Mr. Bruemmer served as a consultant to the Defense Advanced Research Projects Agency (DARPA), where he worked to coordinate development of autonomous robotics technologies across several offices and programs.


You have been working on developing autonomous robotics technologies for a long time now, during your tenure with Idaho National Lab and now as the CEO of Adaptive Motion Group. How has the field evolved and how excited are you about the future?

I think there is a large amount of marketing and spin especially in the autonomous driving arena

There seem to be waves of optimism about AI followed by disappointment as the somewhat inflated goals meet the realities of trying to deploy robotics and AI. Machine learning has come a long way, but the growth has been linear, and I really do not feel that deep learning is necessarily a “fundamentally new” machine learning tool.

I think there is a large amount of marketing and spin especially in the autonomous driving arena. I have been sad to see that in the past several years, some of the new cadre of self-driving companies seem to have overlooked many of the hard lessons we learned in the military and energy sectors regarding the perils of “full autonomy” and the need for what I call “context sensitive shared control”.

Reliability continues to be the hard nut to crack, and I believe that for a significant shift in the level of reliability of overall automation, we need to focus more energy on positioning. Positioning is sometimes considered to be a “solved problem,” as various programs and projects have offered lidar mapping, RTK GPS, and camera-based localization. These work in various constrained circumstances but often fail outside the bounds for which they were intended.

I think that even after the past twenty years of progress we need a more flexible, resilient means of ensuring accurate positioning. I would also like to point out that machine learning and AI are not a cure-all. If they were, we wouldn’t have the increasing death toll on our roads or the worsening congestion. When I look at AI I see a great deal of potential, but most of it still unrealized. This is either cause for enthusiasm or pessimism, depending on your perspective.


There are quite a few unfortunate events associated with automation use. For example, there is the story of a family who got lost in Death Valley due to overreliance on their GPS. Do you think of human-machine interaction issues during design?

Yes I do. I think that many are overlooking the real issue: that our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but also because it hurts system performance. Some applications and environments allow for a very high degree of autonomy. However, there are many other tasks and environments where we need to give up on this notion of fully removing the human from the driver’s seat or the processing loop, and instead focus on the rich opportunity for context-sensitive shared control, where the human and machine work as teammates, balancing the task allocation as needed.

our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but that it also hurts system performance

Part of the problem is that those who make the products want you to believe their system is perfect. It’s an ego thing on the part of the developers and a marketing problem to boot. For the teamwork between human and robot to be effective, both human and machine need an accurate understanding of each other’s limitations.

Quite frankly, the goal of many AI companies is to make you overlook these limits. So supposedly GPS works all the time, and we provide the user no sense of “confidence,” or what in psychology we call a “feeling of knowing.” This breeds a strangely unfortunate slew of problems, from getting horribly lost in urban jungles to getting lost in real ones.

If we were more honest about the limitations, and we put more energy into communicating the need for help and for more data, then things could work a whole lot better. But we almost never design the system to acknowledge its own limitations.
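As a toy illustration of what “acknowledging its own limitations” could look like (the fusion method, numbers, and threshold below are assumptions made for this sketch, not anything from Adaptive Motion Group’s products), a positioning system might report a confidence estimate alongside every fix and explicitly ask for help when that confidence is low:

```python
# Hypothetical sketch: fuse several position estimates by inverse-variance
# weighting and report the combined uncertainty, requesting help when it is large.
import math

def fuse(estimates):
    """estimates: list of (position_m, std_dev_m). Returns (fused_position, fused_std)."""
    weights = [1.0 / (s ** 2) for _, s in estimates]
    fused_pos = sum(w * p for w, (p, _) in zip(weights, estimates)) / sum(weights)
    fused_std = math.sqrt(1.0 / sum(weights))
    return fused_pos, fused_std

readings = [(102.0, 5.0),   # GPS in an urban canyon: noisy
            (100.4, 0.8),   # lidar map match: currently good
            (101.1, 2.0)]   # wheel-odometry estimate: moderate drift

pos, std = fuse(readings)
if std > 1.5:  # hypothetical confidence threshold
    print(f"Position {pos:.1f} m, low confidence (±{std:.1f} m): request more data or operator help")
else:
    print(f"Position {pos:.1f} m, confidence ±{std:.1f} m")
```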


There is some ethical debate about robots or AI making some human occupations obsolete (e.g., long-haul trucking, medical diagnosis). How does ethics factor into your decisions when developing new technologies?

the role of government is to protect the right of every human to be prioritized over machines and over profits

The great thing about my emphasis on shared control is that I never need to base my business model or my technology on the idea of removing the human or eliminating the human labor.

Having said that, I do of course believe that better AI and robotics mean increased safety and efficiency, which in turn can lead to reduced human labor. I think this is a good thing as long as it is coupled with a society that cares for the individual.

Corporations should not be permitted to act with impunity and I believe the role of government is to protect the right of every human to be prioritized over machines and over profits. This mindset has less to do with robotics and more to do with politics so I will leave off there. I do always try to emphasize that robotics should not ever be about the robots, but rather about the people they work with.

Dr. Mica Endsley: Current Challenges and Future Opportunities In Human-Autonomy Research

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

About Dr. Mica Endsley


Situation Awareness Analysis and Measurement provides a comprehensive overview of different approaches to the measurement of situation awareness in experimental and applied settings. This book directly tackles the problem of ensuring that system designs and training programs are effective at promoting situation awareness.

 

Dr. Mica Endsley is President of SA Technologies, a cognitive engineering firm specializing in the analysis, design, measurement and training of situation awareness in advanced systems, including the next generation of systems for aviation, air traffic control, health care, power grid operations, transportation, military operations, homeland security, and cyber. 

From 2013 to 2015, she served as Chief Scientist of the U.S. Air Force, reporting to the Chief of Staff and Secretary of the Air Force, providing guidance and direction on research and development to support Air Force future operations and providing assessments on a wide range of scientific and technical issues affecting the Air Force mission.

She has also held the position of Visiting Associate Professor at MIT in the Department of Aeronautics and Astronautics and Associate Professor of Industrial Engineering at Texas Tech University. Dr. Endsley received a Ph.D. in Industrial and Systems Engineering from the University of Southern California.

Dr. Endsley is a recognized world leader in the design, development and evaluation of systems to support human situation awareness (SA) and decision-making. She is the author of over 200 scientific articles and reports on situation awareness and decision-making, automation, cognitive engineering, and human system integration. She is co-author of Analysis and Measurement of Situation Awareness and Designing for Situation Awareness. Dr. Endsley received the Human Factors and Ergonomics Society Jack Kraft Innovator Award for her work in situation awareness.

She is a fellow in the Human Factors and Ergonomics Society, its Past-President, was co-founder of the Cognitive Engineering and Decision Making Technical Group of HFES, and served on its Executive Council.  Dr. Endsley has received numerous awards for teaching and research, is a Certified Professional Ergonomist and a Registered Professional Engineer. She is the founder and former Editor-in-Chief of the Journal of Cognitive Engineering and Decision Making and serves on the editorial board for three major journals. 


What were the human-automation challenges you encountered in your role as the Chief Scientist for the Air Force?

An RQ-4 Global Hawk soars through the sky to record intelligence, surveillance and reconnaissance data.  Image Source.

Autonomous systems are being developed or are under consideration for a wide range of operational missions. These include:

  1. Manned aircraft, as more automation is added to both on-board and supporting functions such as mission planning, information/network management, vehicle health management and failure detection
  2. Unmanned aircraft are currently being used for surveillance missions and are being considered for a much wider range of activities where:
    1. people would be at high levels of risk (e.g., near to hostilities),
    2. communications links for direct control are unreliable due to jamming or other interference effects,
    3. where speed of operations is useful (e.g., re-tasking sensors based on observed target features), or
    4. to undertake new forms of warfare that may be enabled by intelligent, but expendable, systems, or closely coordinated flights of RPAs [remotely piloted aircraft] (e.g., swarms)
  3. Space operations can also benefit from autonomous systems that provide a means to build resilient space networks that can reconfigure themselves in the face of attacks, preserving essential functions under duress. It also provides a mechanism for significantly reducing the extensive manpower requirements for manual control of satellites and generation of space situation awareness through real-time surveillance and analysis of the enormous number of objects in orbit around the Earth.
  4. Cyber operations can benefit from autonomy due to the rapidity of cyber-attacks, and the sheer volume of attacks that could potentially occur. Autonomous software can react in milliseconds to protect critical systems and mission components. In addition, the ever-increasing volume of novel cyber threats creates a need for autonomous defensive cyber solutions, including cyber vulnerability detection and mitigation; compromise detection and repair (self-healing); real-time response to threats; network and mission mapping; and anomaly resolution.
  5. ISR [intelligence, surveillance, and reconnaissance] and Command and Control operations will also see increased use of autonomous systems to assist with integrating information across multiple sensors, platforms and sources, and to provide assistance in mission planning, re-planning, monitoring, and coordination activities.

Many common challenges exist for people to work in collaboration with these autonomous systems across all of these future applications. These include:

the more reliable and robust that automation is, the less likely that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed...I have labeled this the Automation Conundrum
  1. Difficulties in creating autonomy software that is robust enough to function without human intervention and oversight are significant. Creating systems that can accurately not only sense but also understand (recognize and categorize) objects detected, and their relationship to each other and broader system goals, has proven to be significantly challenging for automation, especially when unexpected (i.e., not designed for) objects, events, or situations are encountered. This capability is required for intelligent decision-making, particularly in adversarial situations where uncertainty is high, and many novel situations may be encountered.
  2. A lowering of human situation awareness when using automation often leads to out-of-the-loop performance decrements. People are both slow to detect that a problem has occurred with the automation, or with the system being controlled by the automation, and slow to come up to speed in diagnosing the problem in order to intervene appropriately, leading to accidents. Substantial research on this problem shows that as more automation is added to a system, and the more reliable and robust that automation is, the less likely human operators overseeing the automation are to be aware of critical information and able to take over manual control when needed. I have labeled this the Automation Conundrum.
  3. Increases in cognitive workload are often required in order to interact with the greater complexity associated with automation. Workload often increases because understanding and interacting with the automation adds demands of its own.
  4. Increased time to make decisions can be found when decision aids are provided, often without the desired increase in decision accuracy. Evidence shows that people actually take in system assessments and recommendations that they then combine with their own knowledge and understanding of the situation. A faulty decision aid can lead to people being more likely to make a mistake due to decision biasing by the aid. And the time required to make a decision can actually increase, as the aid is an additional source of information to take into account.

Challenges occur when people working with automation develop a level of trust that is inappropriately calibrated to the reliability and functionality of the system in various circumstances. In order for people to operate effectively with autonomous systems, they will need to be able to determine how much to trust the autonomy to perform its tasks.

This trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. For this, people need to develop informed trust – an accurate assessment of when and how much autonomy should be employed, and when to intervene.
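One way to picture “informed trust” is as reliability tracked per situation rather than as a single global number. The sketch below is a hypothetical illustration only (the contexts, counts, and threshold are invented, and this is not a method from Dr. Endsley’s work): it records the automation’s observed successes and failures separately for each context and advises reliance only where the evidence supports it.

```python
# Hypothetical sketch of situationally calibrated trust: per-context reliability
# estimates drive the decision of when to rely on the autonomy and when to intervene.
from collections import defaultdict

class ContextualTrust:
    def __init__(self):
        # Laplace-smoothed [successes, failures] counts per context label.
        self.counts = defaultdict(lambda: [1, 1])

    def record(self, context: str, success: bool) -> None:
        self.counts[context][0 if success else 1] += 1

    def reliability(self, context: str) -> float:
        s, f = self.counts[context]
        return s / (s + f)

    def rely(self, context: str, threshold: float = 0.9) -> bool:
        return self.reliability(context) >= threshold

trust = ContextualTrust()
for _ in range(40):
    trust.record("clear_weather", success=True)
for _ in range(10):
    trust.record("heavy_rain", success=False)

print(trust.reliability("clear_weather"), trust.rely("clear_weather"))  # high -> rely
print(trust.reliability("heavy_rain"), trust.rely("heavy_rain"))        # low  -> intervene
```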

Given that it is unlikely that autonomy in the foreseeable future will work perfectly for all functions and operations, and that human interaction with autonomy will continue to be needed at some level, these factors work to create the need for a new approach to the design of autonomous systems that will allow them to serve as an effective teammate with the people who will need to depend on them to do their jobs.


What does the autonomous future look like for you? Is it good, bad or ugly?

The future with autonomous systems may be good, bad, or very ugly, depending on how successful we are in designing and implementing effective human-autonomy collaboration and coordination.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a failed belief in its superiority

In the bad scenario, if we continue to develop autonomous systems that are brittle, and that fail to provide the people who must work with automation with the situation awareness needed to be effective in their roles, then the true advantages of both people and autonomy will be compromised.

The ugly scenario will occur only if decision makers forget about the power of people to be creative and innovative, and try to supplant them with autonomous systems in a failed belief in its superiority. Nothing in the past 40 years of automation research has justified such an action, and such a move would be truly disastrous in the long run.

In a successful vision of the future, autonomous systems will be designed to serve as part of a collaborative team with people. Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between people and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.

People will be able to make informed choices about where and when to invoke autonomy based on considerations of trust, the ability to verify its operations, the level of risk and risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the human.

In certain limited cases, the system may allow the autonomy to take over automatically from the human, when timelines are very short, for example, or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems is a fundamental requirement, in keeping with Department of Defense directives.

The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the human and the autonomy will be critical. Shared situation awareness is needed to ensure that the autonomy and the human operator are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.

Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the human and the autonomy.

This new paradigm is a significant departure from the past in that it will directly support high levels of shared situation awareness between human operators and autonomous systems, creating situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for mission success. By focusing on human-autonomy teaming, we can create successful systems that get the best benefits of autonomous software along with the innovation of empowered operators.


Dr. Julie Carpenter: Human-Robot/AI Relationships

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. As the first post in a new series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter.


Dr. Carpenter’s first book, Culture and human-robot interaction in militarized spaces: A war story (Routledge / Amazon), expands on her research with U.S. military Explosive Ordnance Disposal personnel and their everyday interactions with field robots.

About Dr. Julie Carpenter

Julie Carpenter has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

Dr. Carpenter earned her PhD and an MS from the University of Washington, an MS from Rensselaer Polytechnic Institute, and a BA from the University of Wisconsin-Madison. She is also currently a Research Fellow in the Ethics + Emerging Sciences group at California Polytechnic State University. 

Dr. Carpenter’s first book, Culture and human-robot interaction in militarized spaces: A war story (Routledge / Amazon), expands on her research with U.S. military Explosive Ordnance Disposal personnel and their everyday interactions with field robots. The findings from this research have applicability across a range of human-robot and human-AI cooperative scenarios, products, and situations. She regularly updates her website with information about her current work at jgcarpenter.com.


You have done a lot of work on the emotional attachment that humans have towards robots. Can you tell us more about your work?

At its heart, my work is human-centered and culture-centered. I tend to approach things in a very interdisciplinary way, and my body of published work reflects my long-term interest in how people use technology to communicate, from film to AI.

...there were relatively few people looking at AI as the vector for human emotion when I began in this vein

The medium or technology I focus on changes and evolves. I began in film theory, then a lot of my work was about Web-based human interactions, and more recently it has been about how people interact with robots and other forms of non-Web AI, like autonomous cars, textbots, or IoT agents such as Alexa.

But my lens for looking at things has always been rooted in a sort of anthropological interest in people and technology. Specifically, human emotional attachment to and through the technological medium interests me because there are so many nuanced possible pitfalls for the human, psychologically, ethically, emotionally, even physically.

Yet when it comes to scholarly study of topics like affection, friendship, and love, and their influence on and connectedness with other complicated topics like trust, cooperative teamwork, and decision-making, there were relatively few people looking at AI as the vector for human emotion when I began in this vein. David Levy is one person who pioneered this discussion, of course, as did Clifford Nass and Byron Reeves.

As a film theory undergraduate student, I was drawn to how people use stories to explore technology, as we do in science fiction. Looking back, I can see where even then I was influenced not only by the idea of science fiction and science fiction films generally, but particularly by those of my own era, which served as cultural touchstones and became the basis for a great deal of my early scholarly work.

So, movies like Blade Runner were something I wrote whole papers about, years before there was even a hint that we would enter the very specific and rapid period of robot development that arrived in the 2000s. But back then I was looking at things as ideas connected specifically to that movie director’s body of work, or the audience for the movie, or the culture at that time.

Blade Runner (1982).  Image source

Now I look at a movie like that as an exploration of human-robot possibilities, a reflection and influencer of popular cultural ideas, and also an inspiration to people like me, makers and researchers who have a say in developing real-world AI. I find those sorts of storytelling influences fascinating because they often set up people’s real-world expectations of their interactions with technology, and even help form the communication model.

Storytelling’s influence on culture is a very rich set of artifacts for exploration, and I manage to reference that idea a great deal in the way I situate research in the larger culture it is part of, however that may be defined for the scope of that work.


What do you think about media portrayals of human-robot/autonomy relationships in movies (e.g., the new Blade Runner; the movie Her)?

Cultures around the world use science fiction to explore what it means to be human...

I love science fiction stories, and, as I mentioned, I frequently use science fiction as a framework for discussing our expectations of interactions with AI and robots, because research shows it definitely can influence people’s expectations about how to interact with AI, at least initially.

Personally, Blade Runner definitely inspired me in many ways, going back to when I was studying film theory as an undergrad and never predicted I’d be working in a field called human-robot interaction someday. I know a lot of roboticists who cite other scifi as personal inspiration, too, such as Astro Boy. Storytelling captures our imagination and prompts questions, and it is a wonderful creative springboard for discussion, as well as entertainment.

One pitfall I am less a fan of is using Isaac Asimov’s Three Laws to discuss ethics and AI. Asimov wrote the Laws purposefully allowing for ethical pitfalls so he could keep writing stories; the Laws create plot points through their fallibility. If you want to use the Three Laws (or four, if you count the Zeroth Law) to frame a discussion of ethical AI, then you have to acknowledge that they are fictional, fallible, and very purposefully incomplete in conception – not a real-world solution for development or policy-making, except perhaps as an example of the loopholes that might exist in a framework like the Three Laws if it were used in the real world.

Science fiction can be a cultural touchstone and a thought exercise for framing complicated human-AI interactions, but sometimes it is used as shorthand to communicate complicated issues in a way that disregards too much of their nuance. I’m an Asimov fan, but I think the Laws are sometimes relied upon too heavily in scientific discussion or popular news framing of ethical problems for AI.

Having said that, I personally enjoy a wide range of AI representations in fiction, from dystopic to sympathetic predictions. The ethical dilemmas of The Terminator or Her are both entertaining for me to contemplate in the safety of my everyday life; considering the more far-reaching implications of the ideas they convey is a more serious endeavor for me, of course. How we tell stories reflects our beliefs, and it also pushes those beliefs and ideas further, questioning our suppositions, and in that way it has the potential to influence new ideas about how we interact with AI.

Her (2013).

There is a rich history of stories we tell about AI that pre-dates the genre we call science fiction. Scifi is a relatively new genre label in itself, but the idea of humans interacting with artificial life has been around forever, in various forms. All sorts of tales exist around the world about humans interfering with the natural order of things to create humanlike life outside the body – sometimes via magic spells or religious intervention. These AI characters take the form of golems, zombies, statues, puppets, dolls, and so on. Historically, this is a set of ideas with universal fascination.

Cultures around the world use science fiction to explore what it means to be human, and what it means to create and interact with entities that are similar to us in some ways – often treating AI as a sociological Other.


I recently read the news of a man in China marrying the robot he created. SciFi movies are certainly becoming a reality. What are the ethical implications of human-automation romantic relationships?

We are only now beginning to discuss the emerging ethics of this domain in earnest, because of the enormous progress in AI and robotics over the last decade in particular.

Right now, a romantic feeling for AI is considered aberrant behavior, so it carries a very different significance than it will when AI and robots are accepted as objects that can carry a great deal of meaning for people in different situations, whether it’s as caregiver or mentor or helper or companion or romantic interest.

In other words, I don’t think we can very successfully make shorthand generalizations about the “type” of person who marries a robot or other AI as a static model, because the way we regard human-robot relationships will change as robots become part of our everyday realities and we learn to live with them and negotiate what different robots might mean to us in different ways.

I think that, to an extent, we will eventually see society normalize human-robot romantic relationships as a culturally accepted option for some people. We are still going through a process of discovery about our interactions with robots now, but we do see patterns of human-robot interaction strikingly different from our interactions with other objects, and one emerging pattern is that in some conditions we treat AI and robots in socially meaningful ways that sometimes include emotional attachment and/or affection from the person toward the AI or robot.

The ethical pitfalls of a human-robot romantic relationship can come from the development end, the user end, and society’s perceptions of that relationship. From the development end, some ethical concerns involve how the AI is built, and the human biases and influences we teach an AI that learns from us, whether through direct programming or through neural networks. Robot hacking and privacy concerns are thorny nests of ethical issues, too.

Say someone has a romantic or other affection for an AI used in their home, and interacts with it accordingly. In that case, who has access to what the robot or AI hears you say, what it watches you do, and the information it gathers about your everyday life and your preferences for everything from dish detergent to sexual activities? What if that data were hacked, and someone tried to use the gathered information to manipulate you? These are major technical and ethical issues.

From the user end, one ethical concern is whether people who become emotionally attached to AI truly understand that, with current technology, a human-AI relationship lacks genuinely humanlike reciprocity, and that the AI is nowhere near humanlike intelligence – although sometimes those are the very traits of AI that can attract someone to it romantically.

Furthermore, society does not treat AI or robots like people when it comes to things like legal status, so similar ethical concerns arise in how the people around a user who reports being romantically interested in AI respond: to declare oneself in a committed, persistent, affectionate relationship with an AI is also to acknowledge involvement in an imbalanced power dynamic.

Another ethical question arising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer those behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) our social behaviors with robots will be something we apply as a model to human-human interactions.

Blade Runner 2049 (2017).

We are only now beginning to discuss the emerging ethics of this domain in earnest, because of the enormous progress in AI and robotics over the last decade in particular. It is only the beginning of a time when we will formalize some of our decisions about these ethical concerns as laws and policies, and establish less formal ways of negotiating our interactions with AI via societal norms.

I’m looking forward to watching how we integrate AI technologies like robots and autonomous cars into our everyday lives, because I think a lot of potential good will come from using them. Our path to integrating AI into our lives is already fascinating.