Maggie Jackson: Technology, distraction, digital health, and the future

About Maggie Jackson

Photo credit: Karen Smul


Maggie Jackson is an award-winning author and journalist known for her writings on technology’s impact on humanity. Her acclaimed book Distracted: Reclaiming Our Focus in a World of Lost Attention was compared by Fast Company magazine to Silent Spring for its prescient warnings of a looming crisis in attention. The book, with a foreword by Bill McKibben, will be published in a new updated edition in September.

Jackson’s articles have appeared in The New York Times, The Wall Street Journal, Los Angeles Times, and on National Public Radio, among many other publications, and her work and comments have been featured in media worldwide. Her essays appear in numerous anthologies, including The State of the American Mind: Sixteen Leading Critics on the New Anti-Intellectualism (Templeton, 2015) and The Digital Divide (Penguin, 2010). 

A former Boston Globe contributing columnist, Jackson is the recipient of Media Awards from the Work-Life Council of the Conference Board; the Massachusetts Psychological Association; and the Women’s Press Club of New York. She was a finalist for the Hillman Prize, one of journalism’s highest honors for social justice reporting, and has served as a Visiting Fellow at the Bard Graduate Center, an affiliate of the Institute of the Future in Palo Alto, and a University of Maryland Journalism Fellow in Child and Family Policy. A graduate of Yale University and the London School of Economics with highest honors, Jackson lives with her family in New York and Rhode Island.

How can technology facilitate a healthy work-life balance? 

I believe that the crucial question today is improving the balance between digital and non-digital worlds

Over the last 20 years, technology has changed human experience of time and space radically. Distance no longer matters much, nor duration, as devices allow us to fling our bodies and thoughts around the globe near-instantly. While on a business trip, a parent can Skype a bedtime story with a child at home. The boss can reach a worker who’s hiking on a remote mountaintop. Technology has broken down cultural and physical boundaries and walls – making home, work, and relationships portable. That’s old news now, and yet we’re still coming to grips with the deep impact of such changes. 

For instance, it’s becoming more apparent that the anywhere-anytime culture isn’t simply a matter of carrying our work or home lives around with us and attending to them as we wish. It’s not that simple by far. First, today’s devices are designed to be insistent, intrusive systems of delivery, so any single object of our focus – an email, a text, a news alert – is in competition with others at every minute. We now inhabit spaces of overlapping, often-conflicting commitments and so have trouble choosing the nature and pace of our focus. 

The overall result, I believe, is a life of continual negotiation of roles and attentional priorities. Constant checking behavior (polls suggest Americans check their phones on average up to 150 times a day) is a visible symptom of the need to rewrite work-life balance dozens of times a day. The “fear of missing out” that partly drives always-on connectivity also is a symptom of the necessity of continually renegotiating the fabric of life on- and off-line. 

Because this trend toward boundary-less living is so tech-driven, I believe that the crucial question today is improving the balance between digital and non-digital worlds. After that, work-life balance will follow. 

We need to save time for uninterrupted social presence, the kind that nurtures deeper relationships. We urgently need space in our lives where we are not mechanically poked, prodded and managed, i.e., when we are in touch with and able to manage our inner lives. (Even a silent phone in “off” mode undercuts both focus and cognitive ability, according to research by Adrian Ward at the University of Texas at Austin.) 

One solution would be to think more deliberately about boundaries in all parts of our life, but especially in the digital sphere. Too often lines of division are seen as a confinement, a kind of archaic Industrial Age habit. But boundaries demarcate; think of a job description, a child’s bedtime, or the invention of the weekend, a ritual that boosts well-being even among the jobless. Boundaries are systems of prioritization, safety zones, structures for depth, and crucial tools for providing structure in a digital age. A family that turns off its cell phones at dinner is creating opportunities for the kind of in-depth bonding that rarely is forged online.

Technology can help facilitate creative boundary-making – think of the new Apple and Google product designs that prompt offline time. But our devices cannot do the work of inventing and managing the boundaries that are crucial for human flourishing. 

Can you tell us about your new book?

My new book will draw from my research into how technology is changing our ideas of what it means to “know” something and what it means to be smart

I have a couple of book projects on the front burner. My most recent book, Distracted: Reclaiming Our Focus in a World of Lost Attention, explores the fragmentation of focus and the science of attention in the digital age. One of the first books to warn of our current crisis of inattention, it’s been compared by Fast Company magazine to Rachel Carson’s Silent Spring, and will be published in a new updated edition in September. 

After I finished that book, I realized that attention, as crucial a human faculty as it is, is nevertheless a vehicle, a means to whatever goals we are pursuing. And I began to see that if we have a moment’s focus, the crucial next stepping stone to human flourishing is to be able to think well, especially in a digital age. Those musings have led me on a multi-year journey into the nature of deliberation and contemplation, and in particular to the realization that uncertainty is the overlooked gateway or keystone to good thinking in an age of snap judgement. 


We think of uncertainty as something to avoid, particularly in an age that quite narrowly defines productivity and efficiency and good thinking as quick, automatic, machine-like, neat, packaged, and outcome-oriented. Of course humans need to pursue resolution, yet the uncertainty that we scorn is a key trigger to deep thought and itself a space of possibilities. Without giving uncertainty its due, humans don’t have choices. When we open ourselves to speculation or a new point of view, we create a space where deeper thinking can unfold. 

My new book will draw from my research into how technology is changing our ideas of what it means to “know” something and what it means to be smart. As well, I am drawing from new research on the upsides of uncertainty in numerous domains, including medicine, business, education, philosophy and of course psychology/cognitive science. It’s even a topic of conversation and interest in the HCI world, Rich Pak and others have told me. 

I believe that today more and more people are retreating politically, psychologically, and culturally into narrow-mindedness, but I am heartened by the possibility that we can envision uncertainty as a new language for critical thinking. 

What does the future of human relationships with technology look like: good, bad, or ugly?

The essential question is: will our technologies help us flourish? The potential – the wondrous abundance, the speed of delivery, the possibility for augmenting the human or inspiring new art forms – is certainly there. But I would argue that at the moment we aren’t for the most part using these tools wisely, mostly because we aren’t doing enough to understand technology’s costs, benefits, and implications.

I’ve been thinking a lot about one of technology’s main characteristics: instantaneity. When information is instant, answers begin to seem so, too. After a brief dose of online searching, people become significantly less willing to struggle with complex problems; their “need for cognition” drops even as they begin to overestimate their ability to know. (The findings echo the well-documented “automation effect,” in which humans stop trying to get better at their jobs when working closely with machines, such as automated cockpits.) In other experiments, people on average ranked themselves far better at locating information than at thinking through a problem themselves.

Overall, the instantaneity that is so commonplace today may shift our ideas about what human cognition can be. I see signs that people have less faith in their own mental capacities, as well as less desire to do the hard work of deliberation. Their faith increasingly instead lies with technology. These trends will affect a broad range of future activities, such as whether or not people can manage a driverless car gone awry or even think it’s their role to do so; whether or not they any longer recognize the value of “inefficient” cognitive states of mind such as daydreaming, or whether or not they have the tenacity to push beyond the surface understanding of a problem on their own. Socially, similar risks are raised by instant access to relationships – whether to a friend on social media or to a companion robot that’s always beside a child or elder. Suddenly the awkwardness of depth need no longer trouble us as humans! 

These are the kinds of questions that we urgently need to be asking across society in order to harness technology’s powers well. We need to ask better questions about the unintended consequences and the costs/benefits of instantaneity, or of gaining knowledge from essentially template-based formats. We need to be vigilant in understanding how humans may be changed when technology becomes their nursemaid, coach, teacher, companion. 


Recently, an interview with the singer Taylor Goldsmith of the LA rock band Dawes caught my eye. The theme of the band’s latest album, Passwords, is hacking, surveillance and espionage. “I recognize what modern technology serves,” he told the New York Times. “I’m just saying, ‘let’s have more of a conversation about it.’” 

Well, there is a growing global conversation about technology’s effects on humanity, as well there should be. But we need to do far more to truly understand and so better shape our relations with technology. That should mean far more robust schooling of children in information literacy, the market-driven nature of the Net, and critical thinking skills in general. That should mean training developers to become more accountable to users, perhaps by trying to visualize more completely the unintended consequences of their creations. It certainly must mean becoming more measured in our own personal attitudes; we all too often still gravitate to exclusively dystopian or utopian viewpoints on technology. 

Will we have good, bad, or ugly future relations to technology? At best, we’ll have all of the above. But at the moment, I believe that we are allowing technology in its present forms to do far more to diminish human capabilities than to augment them. By better understanding technology, we can avert this frightening scenario.

33 Questions Psychology Must Answer...

The American Psychological Association recently asked 33 psychologists to identify critical questions yet to be answered in their specific area of psychology. I had the honor of answering for the Engineering Psychology (Human Factors) division:

Leaps in technological evolution will turn simple tools into autonomous teammates that have the ability to communicate with us in ways that are even more personal and accessible. A diverse range of new users will collaborate with these entities in new settings. The goal of engineering psychology has always been to enhance the safety, performance, and satisfaction of human-machine interaction. We must adapt to the idea that these machines are quickly changing and becoming less tool-like and more human-like. How will this new human/machine paradigm affect human safety, satisfaction, and performance?

Check out the other interesting questions from other areas; AI is mentioned a few times too!

Technology (AI), humans, and the future...

My friend, journalist Maggie Jackson, recently sent me an interesting article in the Times Magazine about one of the new complexities in the relationship between humans and AI:

In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

The source of these issues is that AI decision making is hidden, but also in many ways non-deterministic--we don't know what it will come up with or how it arrived at it!  We discuss this a bit in our recently published paper.

Maggie Jackson will be leading a discussion at the Google I/O developer conference on building healthy technologies.  In that session, many existing and new issues regarding human-AI/technology interaction will be discussed.

In this Keynote Session, journalist Maggie Jackson, a specialist in how technology impacts humanity, talks to Adam Alter, Professor of Psychology at NYU, about why enabling a healthy tech life balance is important, and what can be done when building apps and services to make healthier products.
JUST PUBLISHED: From “automation” to “autonomy”: The importance of trust repair in human-machine interaction

My colleagues Ewart de Visser and Tyler Shaw recently published a theoretical paper discussing how the field of human factors might need to adapt to study human-autonomy issues: 


Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock (Hancock, 2017) issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centered approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user’s needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems.

Practitioner summary

This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will ensure future technology maintains and repairs relationships with their human partners.


Thoughts on the first fatal self driving car accident

You have no doubt heard about the unfortunate fatal accident involving a self-driving car killing a pedestrian (NYT).  

This horrible event might be the "stock market correction" of the self-driving car world that was sorely needed to re-calibrate the public's unrealistic expectations about the capability of these systems.

In the latest news, the Tempe police have released video footage that shows the front and in-vehicle camera view just before impact.  

My first impression of the video was that it seemed like something the car should have detected and avoided.  In such a visually challenging condition as illustrated in the video, a human driver would have great difficulty seeing the pedestrian in the shadowed area.  But humans have inferior vision and slower reaction times compared to computers (cf. Fitts' list, 1951).

One interesting narrative thread that has come out of the coverage, and is evident in the Twitter comments for the video, is the idea that the "Fatal Uber crash [was] likely 'unavoidable' for any kind of driver."  People seem to be understanding of the difficulty of the situation, and thus their trust in these autonomous systems is likely to be only somewhat negatively affected.  But should it be more affected?  Autonomous vehicles, with their megaflops of computing power and advanced sensors, were never expected to be "any kind of driver"--they were supposed to be much better.

But the car, outfitted with radar-based sensors, should have "seen" the pedestrian.  I'm certainly not blaming the engineers.  Determining the threshold for signal (pedestrian) versus noise is probably an active area of research, and one that they were testing.
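That signal-versus-noise framing comes from signal detection theory, and a toy sketch shows why the threshold choice is so hard. All numbers below are invented for illustration (this is not any real sensor model): sensor evidence with and without a pedestrian is modeled as two overlapping normal distributions, and moving the decision threshold trades missed pedestrians against phantom-braking false alarms.

```python
from statistics import NormalDist

# Toy signal detection model of a pedestrian detector.
# Evidence under "noise" (no pedestrian) and "signal" (pedestrian
# present) is modeled as unit-variance normals; the separation
# (d' = 2.0) and the thresholds below are invented for illustration.
noise = NormalDist(mu=0.0, sigma=1.0)
signal = NormalDist(mu=2.0, sigma=1.0)

def rates(threshold):
    """Hit and false-alarm rates for a given decision threshold."""
    hit = 1 - signal.cdf(threshold)         # pedestrian correctly flagged
    false_alarm = 1 - noise.cdf(threshold)  # phantom braking event
    return hit, false_alarm

for t in (0.5, 1.0, 1.5):
    h, fa = rates(t)
    print(f"threshold={t:.1f}  hit rate={h:.2f}  false-alarm rate={fa:.2f}")
```

Lowering the threshold catches more pedestrians but also brakes for more shadows; no setting eliminates both error types at once, which is exactly why tuning it is an active engineering question.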

Continuing story and thoughts...

Public views about AI and the Future

The Gallup organization has just released a survey of 3,298 American adults about their thoughts on AI and the future.  The interactive website is filled with many great visualizations.  

The key point seems to be that, contrary to popular notions of the fear of AI, most Americans (77%) have a positive view of AI in the next decade.  Interestingly, this is despite most Americans' view that AI will have a negative impact on their own employment and the economy (73% believe AI will eliminate jobs).

The other noteworthy point is that optimism about AI, while high, is expected to decrease (the difference between current and future optimism).  But this varies by sub-group: the largest future-minus-current drop is among middle-aged folks whose livelihoods may be affected (green), while older folks seem unchanged (blue, orange).


Changing views of self-driving cars...

I just saw a funny juxtaposition of headlines regarding self-driving cars.  Of all autonomous systems, self-driving cars are probably the easiest for the lay public to understand.

The first headline, from a Reuters/Ipsos opinion poll:  Most Americans wary of self-driving cars.  

While 27 percent of respondents said they would feel comfortable riding in a self-driving car, poll data indicated that most people were far more trusting of humans than robots and artificial intelligence under a variety of scenarios.

The results are more interesting when viewed by age group.  It makes intuitive sense that millennials are the most comfortable and baby boomers the least.  Millennials are less interested in driving and, because of greater exposure to autonomous technology, may be more comfortable and trusting than other age groups.  That comfort is not necessarily well calibrated, however; their view of the technology could be distorted or unrealistic.


The next headline:  More Americans Willing To Ride In Self-Driving Cars.  The results of a survey from the American Automobile Association (AAA) echo the Reuters survey: millennials and males are more willing to buy a self-driving car.  The headline refers to a decrease (78% to 63%), year over year, in the number of people who said they were afraid to ride in a self-driving car.

The crux of these observations seem to be trust:

AAA’s survey also offered insights as to why some motorists are reluctant to purchase advanced vehicle technology. Most trust their driving skills more than the technology (73 percent) — despite the fact that research shows more than 90 percent of crashes involve human error. Men in particular, are confident in their driving abilities with 8 in 10 considering their driving skills better than average.
The Year of the Algorithm. AI Potpourri part 2:
 “We have to grade indecent images for different sentencing, and that has to be done by human beings right now, but machine learning takes that away from humans,” he said.

“You can imagine that doing that year-on-year is very disturbing.”

But as the next story shows, these AI tools are not advanced enough to replace human content moderators.

[WSJ] The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook

Humans, still, are the first line of defense. Facebook, YouTube and other companies are racing to develop algorithms and artificial-intelligence tools, but much of that technology is years away from replacing people, says Eric Gilbert, a computer scientist at the University of Michigan. 
Earlier this month, after a public outcry over disturbing and potentially exploitative YouTube content involving children, CEO Susan Wojcicki said the company would increase its number of human moderators to more than 10,000 in 2018, in an attempt to rein in unsavory content on the web’s biggest video platform.

But guidelines and screenshots obtained by BuzzFeed News, as well as interviews with 10 current and former “raters” — contract workers who train YouTube’s search algorithms — offer insight into the flaws in YouTube’s system.
But algorithms, unlike humans, are susceptible to a specific type of problem called an “adversarial example.” These are specially designed optical illusions that fool computers into doing things like mistake a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.
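The "specially designed" part can be made concrete with a gradient-sign-style perturbation, in the spirit of the fast gradient sign method, run against a toy logistic classifier. The weights, input, and step size below are all invented for illustration and do not come from any real system:

```python
import math

# Toy logistic classifier with hand-picked "trained" weights.
w = [1.5, -2.0, 0.5]
x = [0.4, -0.3, 0.8]  # an input the model confidently calls class 1

def predict(x):
    """P(class = 1) under a logistic model with fixed weights w."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

# The gradient of the score z with respect to x is just w, so nudging
# each feature a small step *against* sign(w) lowers the class-1
# score as fast as possible per unit of perturbation.
eps = 0.6
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(f"original:  P(class 1) = {predict(x):.2f}")
print(f"perturbed: P(class 1) = {predict(x_adv):.2f}")
```

A small, structured nudge flips the predicted label even though the perturbed input stays numerically close to the original; image-based adversarial examples apply the same idea across thousands of pixels, which is why the panda can become a gibbon.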
From the ridiculous to the chilling, algorithmic bias — social prejudices embedded in the AIs that play an increasingly large role in society — has been exposed for years. But it seems in 2017 we reached a tipping point in public awareness.

The New York City Council recently passed what may be the US’ first AI transparency bill, requiring government bodies to make public the algorithms behind their decision making. Researchers have launched new institutes to study AI prejudice (along with the ACLU), while Cathy O’Neil, author of Weapons of Math Destruction, launched an algorithmic auditing consultancy called ORCAA.
Macy's, CVS, Starbucks, and Sephora turn to AI

If you are scrambling to find last minute gifts, AI/machine learning is here to help!  All the major retailers are now turning to AI to learn what you want.  Big data about retail purchases are being fed into machine learning algorithms to learn things about you.  Here are some examples.  By the way, have you wondered, "what exactly is machine learning?"  Then see the end of this post for an easily digestible video.

[Forbes] Macy's Teams With IBM Watson For AI-Powered Mobile Shopping Assistant

Macy’s is set to launch an in-store shopping assistant powered by artificial intelligence thanks to a new tie-up with IBM Watson via developer partner and intelligent engagement platform, Satisfi.

Macy’s On Call, as it’s called, is a cognitive mobile web tool that will help shoppers get information as they navigate 10 of the retail company’s stores around the US during this pilot stage.

Customers are able to input questions in natural language regarding things like where specific products, departments, and brands are located, to what services and facilities can be found in a particular store. In return, they receive customised relevant responses. The initiative is based on the idea that consumers are increasingly likely to turn to their smartphones than they are a store associate for help when out at physical retail.
If you always have a caramel macchiato on Mondays, but Tuesdays call for the straight stuff, a double espresso, then Starbucks Corporation (SBUX - Get Report) is ready to know every nuance of your coffee habit. There will be no coffee secrets between you, if you’re a Rewards member, and Starbucks.

The chain’s regulars will find their every java wish ready to be fulfilled and, the food and drink items you haven’t yet thought about presented to you as what you’re most likely to want next.

So targeted is the technology behind this program that, if the weather is sunny, you’ll get a different suggestion than if the day is rainy.
Patients tend to be at their local CVS much more frequently than at the doctor. People are also increasingly using fitness trackers like FitBits, smartwatches, and even Bluetooth-enabled scales that are all collecting data patients can choose to share with a provider. All that data isn’t worth much though unless it is carefully interpreted — something Watson can do much more efficiently than a team of people.

A drop in activity levels, a sudden change in weight, or prescriptions that aren’t being filled are the kinds of things that might be flagged by the system. Certain changes could even indicate a developing sickness before someone feels ill — and certainly before someone decides to visit the doctor.

[AdWeek] Sephora Mastered In-Store Sales By Investing in Data and Cutting-Edge Technology

I love Sephora.  As the article aptly states "Sephora isn’t your mother’s makeup company; it’s your modern tech company". I have personally tried the Color IQ, which is their in-store program that scans faces to find out the right shade of foundation and other products for different skin tones. Sephora has an amazing Beauty Insider program that provides it a lot of rich data about their consumers and now the company is leveraging AI to allow customers to virtually try on make-up and spice up their online presence.

Sephora’s innovation lab in San Francisco is tooling with an artificial intelligence feature dubbed Virtual Artist within its mobile app that uses facial recognition to virtually try on makeup products.

[CGP Grey] How do machines learn?

The science behind machine/deep learning neural networks is quite interesting.  For example, the discussion in the video about us not knowing what exactly is being learned (the hidden layer) is interesting to me.  But you don't have time for that!  Here is an easily understood video:
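To make the "hidden layer" idea concrete, here is a tiny 2-2-1 network that computes XOR, a function no single-layer network can learn. The weights below are hand-picked so each hidden unit has a readable story; in a *trained* network, the learned weights are typically an uninterpretable blur of real numbers, which is exactly the opacity discussed above.

```python
# A hand-built two-hidden-unit network that computes XOR.
# All weights are chosen by hand for readability; a trained
# network would arrive at opaque real-valued weights instead.
def step(z):
    """Threshold activation: fires (1) when input is positive."""
    return 1 if z > 0 else 0

def xor_net(a, b):
    h1 = step(1 * a + 1 * b - 0.5)    # hidden unit 1: acts like OR
    h2 = step(1 * a + 1 * b - 1.5)    # hidden unit 2: acts like AND
    return step(1 * h1 - 2 * h2 - 0.5)  # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```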

What's coming up in 2018, and happy holidays!

Just a short note to let our dear readers know that posting volume will be a bit lighter as we travel for the holidays.  But here is what's coming up!

  • More interviews of notable experts (including an expert in self-driving vehicles, and an expert in human-autonomy teaming)
  • More Throwback Thursdays covering classic automation and autonomy literature
  • NEW: Movie Club, where Arathi and I "review" a particular movie's treatment of automation/autonomy/AI

Thanks for reading!  Tell your friends!!

Siri and Alexa Say #MeToo to Sexual Harassment

The number of prominent celebrities and politicians being taken down for sexual harassment really seems to represent a major change in how society views sexual harassment.  No longer whispered or swept under the rug, harassment is being called-out and harassers are being held accountable for their words and actions.  

So, if AI will soon be collaborators, partners, and teammates, shouldn't they also be given the same treatment?  This story in VentureBeat talks about a campaign by Randy Painter to consider how voice assistants behave when harassed:

We have a unique opportunity to develop AI in a way that creates a kinder world. If we as a society want to move past a place where sexual harassment is permitted, it’s time for Apple and Amazon to reprogram their bots to push back against sexual harassment.

I've never harassed Siri so I wasn't aware of the responses she gives when one attempts to harass her:

Siri responds to her harassers with coy remarks that sometimes even express gratitude. When they called Siri a “slut,” she responded with a simple “Now, now.” And when the same person told Siri, “You’re hot,” Siri responded with “I’m just well put together. Um… thanks. Is there something I can help you with?”

In our interview last week with Dr. Julie Carpenter, she addressed this somewhat:

Another ethical question rising from romantic human-AI interaction is, “Will a person who is accustomed to the imbalanced power dynamic of a human-robot relationship transfer their behaviors into their human-human relationships?” The implication there is that (1) the person treats the robot in a way we would find distasteful in human-human dynamics, and (2) that our social behaviors with robots will be something we apply as a model to human-human interactions.

This is fascinating because there is existing and ongoing research examining how humans respond and behave with AI/autonomy that exhibits different levels of politeness.  For example, autonomy that was rude, impatient, and intrusive was considered less trustworthy by human operators. If humans expect autonomy to have a certain etiquette, isn't it fair to expect at least basic decency from humans towards autonomy?

Citation: Parasuraman R., & Miller C. (2004). Trust and etiquette in high-criticality automated systems. Communications of the Association for Computing Machinery, 47(4), 51–55. 


Throwback Thursday: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’

Today's Throwback article is from Donald Norman.  If that name sounds familiar, it is the same Dr. Norman who authored the widely influential "The Design of Everyday Things."

In this 1990 paper published in the Philosophical Transactions of the Royal Society, Dr. Norman argued that much of the criticism of automation at the time (and today) is due not to the automation itself (or even over-automation) but to its poor design; namely, the lack of adequate feedback to the user.  

This is a bit different from the concept of the out-of-the-loop (OOTL) scenario that we've talked about before; there is a subtle difference in emphasis.  Yes, lack of feedback contributes to OOTL, but here, feedback is discussed more as an opaqueness of automation status and operations, not that it is carrying out a task that you previously performed.

He first starts off with a statement that should sound familiar if you've read our past Throwback posts:

The problem, I suggest, is that the automation is at an intermediate level of intelligence, powerful enough to take over control that used to be done by people, but not powerful enough to handle all abnormalities.
— p. 137

The obvious solution, then, is to make the automation even more intelligent (i.e., a higher level of automation):

To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate.
— p. 137

If a higher level of automation is what is meant by "more intelligent," then we already know that this is also not a viable solution (the research showing that was done after the publication of this paper).  However, this point is merely a setup to further the idea that problems with automation are caused not by the mere presence of automation, but by its lack of feedback.  Intelligence means giving just the right feedback at the right time for the task.

He provides aviation case studies implying that the use of automation led to out-of-the-loop performance issues (see previous post).  He next directs us through a thought experiment to help drive home his point:

Consider two thought experiments. In the first, imagine a captain of a plane who turns control over to the autopilot, as in the case studies of the loss of engine power and the fuel leak. In the second thought experiment, imagine that the captain turns control over to the first officer, who flies the plane ‘by hand’. In both of these situations, as far as the captain is concerned, the control has been automated: by an autopilot in one situation and by the first officer in the other.
— p. 141

The implication is that when control is handed over to any entity (automation or a co-pilot), feedback is critical.   Norman cites the widely influential work of Hutchins, who found that informal chatter, in addition to lots of other incidental verbal interaction, is crucial to what is essentially situation awareness in human-human teams (although Norman invokes the concept of mental models).  Humans do this, automation does not.  Back then, we did not know how to do it and we probably still do not know how to do it.  The temptation is to provide as much feedback as possible:

We do have a good example of how not to inform people of possible difficulties: overuse of alarms. One of the problems of modern automation is the unintelligent use of alarms, each individual instrument having a single threshold condition that it uses to sound a buzzer or flash a message to the operator, warning of problems.
— p. 143

This is still the current state of automation feedback.  If you have spent any time in a hospital, you know that alerts are omnipresent and overlapping (cf. Seagull & Sanderson, 2001).  Norman ends with some advice about the design of future automation:
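To make Norman's complaint concrete, here is a toy sketch (my illustration, not anything from his paper) of the single-threshold alarm design he criticizes: each instrument alarms independently, with a fixed threshold and no awareness of the task or of the other alarms.

```python
def single_threshold_alarms(readings, thresholds):
    """Fire an alarm for every instrument whose reading crosses its
    own fixed threshold -- the 'unintelligent' design Norman describes."""
    return [name for name, value in readings.items()
            if value > thresholds[name]]

# A single underlying event can trip several instruments at once,
# flooding the operator with overlapping, uncoordinated alerts:
thresholds = {"heart_rate": 120, "resp_rate": 25, "bp_systolic": 160}
readings = {"heart_rate": 135, "resp_rate": 31, "bp_systolic": 172}
alarms = single_threshold_alarms(readings, thresholds)
print(alarms)  # all three instruments alarm for one root cause
```

Nothing here prioritizes, aggregates, or adapts the alerts to the situation; that is precisely the gap Norman is pointing at.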

What is needed is continual feedback about the state of the system...This means designing systems that are informative, yet non-intrusive, so the interactions are done normally and continually, where the amount and form of feedback adapts to the interactive style of the participants and the nature of the problem.
— p. 143

Information visualization and presentation research (e.g., sonification, or the work of Edward Tufte) tackles part of the problem: these techniques attempt to provide constant, non-intrusive information.  Adaptive automation, or automation that scales its level based on physiological indicators, is another attempt to get closer to Norman's vision but, in my opinion, may be insufficient and even more disruptive, because it does not address feedback.
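As a rough sketch of what "scaling its level" means (my simplification; real adaptive automation systems are far more sophisticated), imagine a controller that nudges the level of automation up when an operator workload estimate is high and hands work back when it is low:

```python
def adapt_level(current_level, workload, low=0.3, high=0.7,
                min_level=0, max_level=10):
    """Nudge the level of automation based on a workload estimate in
    [0, 1] (e.g., derived from physiological indicators).  High
    workload -> automate more; low workload -> hand tasks back to
    keep the operator in the loop."""
    if workload > high:
        return min(current_level + 1, max_level)
    if workload < low:
        return max(current_level - 1, min_level)
    return current_level  # comfortable band: leave allocation alone
```

Note that this only changes who does the task; it says nothing about the quality or timing of the feedback the operator receives, which is Norman's actual concern.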

To conclude, Norman's human action cycle is completely consistent with, and probably heavily informs, his thoughts on automation.

Reference: Norman, D. A. (1990). The 'problem' with automation: Inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 327(1241), 585–593.

Throwback Thursday: The Out-of-the-Loop Performance Problem and Level of Control in Automation

Dust off your Hypercolor t-shirts and fanny packs filled with pogs, because we are going back to the 1990s for this week's Throwback post.

By definition, when users interact with automated systems, they are carrying out some of the task while the automation does the rest.  When everything is running smoothly, this split in task allocation between user/machine causes no outward problems; the user is, by definition, “out of the loop” in part of the task.

Problems only start to appear when the automation fails or is unavailable and the user is suddenly left to do the activities previously carried out by automation.  The user is now forced back “into the loop.”

I sort of experienced this when I got my new 2017 car.  Naturally, as an automation researcher I asked for “all the things!” when it came to driver assistance features (yeah, for “research purposes”):  active lane assist, adaptive cruise control with stop and go, autonomous emergency braking, auto-high-beams, auto-wipers, and more.

The combined driver assistance features probably equated to somewhere around Level 2 autonomy (see figure).  I had fun testing the capabilities and limitations of each feature (safely!).  They worked, for the most part, and I turned them on and forgot about them for the past 9 months.

However, while my car was being serviced recently, I drove a loaner car that had none of these features and was quite shocked to see how sloppy my driving had become.  Driving on the interstate seemed much more effortful than before.  In addition, I had become accustomed to playing with my phone, messing with my music player, or doing other things while on long stretches of straight highway.  I could no longer do this safely without driver assistance features.

This was when I realized how much of the lower-level task of driving (staying in the lane, keeping a constant distance to the car ahead, turning my high beams on and off) was no longer being done by me.  As an automation researcher, I was quite surprised.

This anecdote illustrates two phenomena in automation: complacency and the resulting skill degradation.  Complacency is how easily and willingly I gave up a good chunk of the driving task to automation (in my personal car).  I admit to this complacency and high trust, but they came only after several weeks of testing the limits of the automation to understand the conditions where it worked best (e.g., lane keeping did not work well when high-contrast shadows crossed the road).  It is doubtful that regular drivers do this.

Because of my complacency (and high trust), I suffered mild skill degradation: a loss of the ability to smoothly and effortlessly maintain my lane position and following distance.

You may have experienced a similar disorientation when using your phone for GPS-guided directions and having them taken away (e.g., you move into a low-signal area, or the phone crashes).  Suddenly, you have no idea where you are or what to do next.

So what, exactly, causes this performance degradation when automation is taken away?

This "out of the loop" (OOTL) performance problem (decreased performance due to being out of the loop and suddenly being brough back into the loop) is the topic of this week's paper by Endsley and Kiris (1995).  In the paper, Endsley and Kiris explored the out-of-the-loop performance problem for possible specific causes, and solutions.

It is the central thesis of this paper that a loss of situation awareness (SA) underlies a great deal of the out-of-the-loop performance problem (Endsley, 1987).
— p. 382

Endsley and Kiris claim that most of the problems associated with OOTL performance are due to a loss of situation awareness (SA), a concept that Endsley refined in an earlier paper.  It basically means your awareness of the current situation and your ability to use this information to predict what will happen in the near future.  Endsley defines situation awareness as:

the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future
— p. 392

Within this definition of SA, there are three levels:

  1. Level 1: perception of the elements (cues) in the environment:  can you see the warning light on the dashboard?
  2. Level 2: comprehension of the meaning of the perceived elements:  do you understand what the engine symbol means on the dashboard?
  3. Level 3: projection of the future: given what you know, can you predict whether braking is needed to prevent a collision?

In this paper, Endsley and Kiris argue that the presence of automation essentially interferes with all levels of situation awareness:

In many cases, certain critical cues may be eliminated with automation and replaced by other cues that do not result in the same level of performance.
— p. 384

Much of the existing research at the time had only examined physical/manual tasks, not cognitive/decision-making tasks.  The purpose of Endsley and Kiris' paper was:

In order to investigate the hypothesis that the level of automation can have a significant impact on the out-of-the-loop performance problem and to experimentally verify the role of situation awareness in this process, a study was conducted in which a cognitive task was automated via a simulated expert system.
— p. 385

Their key hypothesis was that as the level of automation increased (see Arathi's earlier Throwback post on levels of automation), situation awareness would decrease.  This would be evidenced specifically by increased time for users to make a decision and by reduced confidence in those decisions, since users would feel less skilled or qualified (because they were so far OOTL).

The bottom line: yes, the hypotheses were confirmed.  Increasingly higher levels of automation do seem to negatively impact situation awareness, evidenced by longer decision times (because of reduced SA) and slightly reduced confidence in one's decisions.  Returning to the examples above, my driver assistance features (a lower-level form of automation) did not really cause a loss of situation awareness, but they did cause skill degradation.  The GPS example, a much higher form of automation, did lead to a loss of situation awareness.

So what do we do with this information?  First, it clearly shows that high or full autonomy for higher-level, cognitive-type tasks is not a very desirable goal.  It might be, but only if the automation were proven to be 100% reliable, which is never assured.  Instead, there will be situations where the autonomy fails and the user must assume manual control (think of a self-driving car that fails).  In those circumstances, the driver will have dramatically reduced SA and thus poor performance.

The surprising design recommendation?

this study did find that full automation produced more problems than did partial automation
— p. 392

Let's hope designers of future autonomy are listening!

Reference: Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381–394.

Potpourri: Humorously-runaway AI

Runaway AI is a fear held by some researchers.  While it may be technologically too soon to have this fear, we are coming close.  Here are some recent examples of human-automation or human-AI partnerships running amok, with humorous results.

[Sunday Times] Jeremy Clarkson Says He Nearly Crashed While Testing An Autonomous Car (paywalled article); [CarScoops summary]

“I drove a car the other day which has a claim of autonomous capability and twice in the space of 50 miles on the M4 it made a mistake, a huge mistake, which could have resulted in death,” he said. “We have to be very careful legally, so I’m not going to say which one.”
In June, the U.S. Immigrant and Customs Enforcement (ICE) released a letter saying that the agency was searching for someone to design a machine-learning algorithm to automate information gathering about immigrants and determine whether it can be used to prosecute them or deny them entry to the country. The ultimate goal? To enforce President Trump’s executive orders, which have targeted Muslim-majority countries, and to determine whether a person will “contribute to the national interests”—whatever that means.
What I’ve heard is that this is a machine learning problem — that, more or less, for some reason the machine learning algorithm for autocorrect was learning something it never should have learned.
As far as debuts go, there have been more successful ones. During its first hour in service, an automated shuttle in Las Vegas got into an accident, perhaps fittingly the result of a flesh-and-blood human truck driver slowly driving into the unsuspecting robocar, according to a AAA PR representative on Twitter. Nobody was hurt and the truck driver was cited.
[Repost] Prominent Figures Warn of Dangerous Artificial Intelligence (it's probably a bad Human Factors idea, too)

This is an edited repost from Human Factors Blog from 2015

Recently, some very prominent scientists and other figures have warned of the consequences of autonomous weapons, or more generally artificial intelligence run amok.

The field of artificial intelligence is obviously a computational and engineering problem: designing a machine (e.g., a robot) or software that can emulate thinking to a high degree.  But eventually, any AI must interact with a human, either by taking control of a situation from a human (e.g., flying a plane) or by suggesting courses of action to a human.

I thought this recent news item about potentially dangerous AI might be a great segue to another discussion of human-automation interaction, specifically a detail that does not frequently get discussed in splashy news articles or by non-human-factors people: degree of automation.  This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).

First, to HF researchers, automation is a generic term that encompasses anything that carries out a task once done by a human: robotic assembly, medical diagnostic aids, digital camera scene modes, and even hypothetical autonomous weapons with AI.  These disparate examples simply differ in their degree of automation.

Let's back up for a bit: Automation can be characterized by two independent dimensions:

  • STAGE or TYPE: What is it doing and how is it doing it?
  • LEVEL: How much is it doing?

Stage/Type describes WHAT tasks are being automated and, sometimes, how.  Is the task perceptual, like enhancing vision at night or amplifying certain sounds?  Or is the automation carrying out a task that is more cognitive, like generating the three best ways to get to your destination in the least amount of time?

The second dimension, Level, refers to the balance of tasks shared between the automation and the human; is the automation doing a tiny bit of the task and then leaving the rest to the user?  Or is the automation acting completely on its own with no input from the operator (or ability to override)?

Figure 1. Degrees of automation (Adapted from Wickens et al., 2010)

See Figure 1.  If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y axes of a chart, it becomes clearer how various everyday examples of automation fit into the scheme.  As LEVEL and/or TYPE increases, we get a higher degree of automation (dotted line).
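The two dimensions can be sketched in code.  The four stages below come from the closely related Parasuraman, Sheridan, and Wickens (2000) model; the numeric combination rule is purely my illustration, not anything from Wickens et al. (2010):

```python
STAGES = ["information_acquisition", "information_analysis",
          "decision_selection", "action_implementation"]

def degree_of_automation(stage, level, max_level=10):
    """Combine stage (WHAT is automated) and level (how much
    authority the automation holds) into a rough degree in (0, 1]."""
    stage_weight = (STAGES.index(stage) + 1) / len(STAGES)
    level_weight = level / max_level
    return stage_weight * level_weight

# Early stage, modest authority: e.g., a camera's auto-white-balance
print(degree_of_automation("information_acquisition", 5))   # 0.125
# Late stage, full authority: e.g., a hypothetical autonomous weapon
print(degree_of_automation("action_implementation", 10))    # 1.0
```

Toy as it is, the sketch captures the chart's point: degree grows along both axes, and the scary examples sit in the top-right corner.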

Mainstream discussions of AI and its potential dangers seem to be focusing on a hypothetical ultra-high degree of automation.  A hypothetical weapon that will, on its own, determine threats and act.  There are actually very few examples of such a high level of automation in everyday life because cutting the human completely "out of the loop" can have severely negative human performance consequences.

Figure 2. Approximate degrees of automation of everyday examples of automation

Figure 2 shows some examples of automation and where they fit into the scheme.

Wickens et al. (2010) use the phrase "the higher they are, the farther they fall."  This means that when humans interact with greater degrees of automation, they do fine as long as it works correctly but encounter catastrophic consequences when the automation fails (and it always will at some point).  Why?  Users become complacent with high-DOA automation, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot recover from the failure easily.

You may have experienced a mild form of this if your car has a rear-backup camera.  Have you ever rented a car without one?  How do you feel? That feeling of being "out of the loop" tends to get magnified with higher degrees of automation.  More on this in an upcoming throwback post.

So, highly autonomous weapons (or any high degree of automation) are not only a philosophically bad/evil idea; they are bad for human performance!

For more discussion on the degree and level of automation, see Arathi's recent Throwback post.

What Sci-Fi Movies Can Tell Us about Future Autonomy

I gave a talk a few months ago to a department on campus.  It is based on work that Ewart de Visser, PhD, and I are doing on adaptive trust repair with autonomy; that is a complex way of describing the possibility of giving machines an active role in managing human-machine trust.

The talk is based on a paper currently under review.  It is meant to be fun, but it is also an attempt to seriously consider the shape of future autonomy based on fictional representations; sci-fi movies serve as data.  It is about 40 minutes long.

Autonomy Potpourri: Evil smart houses, trucker hats, & farming

Upcoming Netflix movie: Evil smart house terrorizes street-smart grifter

I'm sure this movie will give people positive and accurate portrayals of AI/autonomy and smart home technology, much like Sharknado did for weather phenomena and marine life...

Monroe plays a victim who was a street-smart grifter that has been kidnapped and held captive in order to be part of a fatal experiment. The only thing standing in the way of her freedom is Tau, an advanced artificial intelligence developed by her captor, played by Skrein. Tau is armed with a battalion of drones that automate a futuristic smart house.

Trucker hat that alerts of sleepiness

I bet the main issue will be a problem of false alarms, leading to disuse.

Being a trucker means driving huge distances on demanding deadlines. And one of the biggest dangers in trucking is the threat of drivers falling asleep at the wheel. To celebrate 60 years of truck production in Brazil, Ford decided to try to help the problem by creating a hat that tracks head movements and alerts drivers in danger of snoozing.
Driverless tractors, combine harvesters and drones have grown a field of crops in Shropshire in a move that could change the face of farming. From sowing the seeds to tending and harvesting the crop, the robot crew farmed a field of barley without humans ever setting foot on the land in a world first. The autonomous vehicles followed a pre-determined path set by GPS to perform each task, while the field was monitored by scientists using self-driving drones.
Throwback Thursday: The Ironies of Automation
If I have seen further, it is by standing on the shoulders of giants
— Isaac Newton, 1675

Don't worry, our Throwback Thursday doesn’t involve embarrassing pictures of me or Arathi from 5 years ago.  Instead, it is more cerebral.  The social science of automation and autonomy has a long and rich history, and despite being one of the earliest topics of study in engineering psychology, it is even more relevant today.

Instead of re-inventing the wheel, why don't we look at the past literature to see what is still relevant?

In an effort to honor that past but also inform the future, the inaugural "Throwback Thursday" post will highlight scientific literature from the past that is relevant to modern discussion of autonomy.

Both Arathi and I have taught graduate seminars in automation and autonomy so we have a rich treasure trove of literature from which to draw.  Don't worry: while some of the readings can be complex and academic, in deference to our potentially diverse readership, we will focus on key points and discuss their relevance today.

The Ironies of Automation

In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation.  The words of this paper ring especially true today, when the design strategy of some companies is to treat the human as an error term to be eliminated:

The designer’s view of the human operator may be that the operator is unreliable and inefficient, so should be eliminated from the system.
— Bainbridge, p. 775

But is this design strategy sustainable?  Bainbridge later wisely points out that:

The second irony is that the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.
— Bainbridge, p. 775

The paper then discusses how, under such an approach, many unintended problems arise.  The ultimate irony, however, is that implementing very high levels of automation (including eliminating the driver in a self-driving car) will ultimately lead to a higher workload burden for the "passenger."

A more serious irony is that the automatic control system has been put in because it can do the job better than the operator, but yet the operator is being asked to monitor that it is working effectively.
— Bainbridge, p. 775
Weekend Reading: Fear of AI and Autonomy

In our inaugural post, I alluded to the current discussion surrounding AI/Autonomy as being dominated by philosophers, politicians, and engineers.  They are, of course, working at the forefront of this technology and raise important points.

But focusing on their big-picture concerns may obscure a fuller view of the day-to-day role of this technology and the fact that humans are expected to interact with, collaborate with, and in some cases submit to these systems (social science issues; the reason this blog exists).

That said, one of the philosophers examining the future role of, and risks associated with, AI is Nick Bostrom, director of the Future of Humanity Institute.  This New Yorker profile from a few years ago (2015) is a great way to get up to speed on the basis of much of the fear of AI.

Bostrom’s sole responsibility at Oxford is to direct an organization called the Future of Humanity Institute, which he founded ten years ago, with financial support from James Martin, a futurist and tech millionaire. Bostrom runs the institute as a kind of philosophical radar station: a bunker sending out navigational pulses into the haze of possible futures. Not long ago, an F.H.I. fellow studied the possibility of a “dark fire scenario,” a cosmic event that, he hypothesized, could occur under certain high-energy conditions: everyday matter mutating into dark matter, in a runaway process that could erase most of the known universe. (He concluded that it was highly unlikely.) Discussions at F.H.I. range from conventional philosophic topics, like the nature of compromise, to the optimal structure of space empires—whether a single intergalactic machine intelligence, supported by a vast array of probes, presents a more ethical future than a cosmic imperium housing millions of digital minds.

Warning: settle in, because this is a typical New Yorker article (i.e., very, satisfyingly long).

The similar-sounding Future of Life Institute has similar goals but is focused on explaining the risks of AI while also dispelling myths.