Interview: Dr. Frank Durso

In our fourth post in a new series, we interview a leading social science researcher and a leader in aviation psychology, Dr. Frank Durso. Frank was also my academic advisor (a decade ago), and it was a pleasure to chat with him about his thoughts on the impact and future of automation in aviation.

About Dr. Frank Durso

Francis T. (Frank) Durso is Professor and Chair of the School of Psychology at the Georgia Institute of Technology where he directs the Cognitive Ergonomics Lab.  Frank received his Ph.D. from SUNY at Stony Brook and his B.S. from Carnegie-Mellon University.    While at the University of Oklahoma, he was a Regents Research recipient and founding director of their Human-Technology Interaction Center.

[Photo: Dr. Frank Durso]

Frank is Past-President of the Human Factors and Ergonomics Society (HFES), the Southwestern Psychological Association, the American Psychological Association’s (APA) Division of Engineering Psychology, and founding President of the Oklahoma Psychological Society.  He is a sitting member of the National Research Council’s Board of Human Systems Integration.  He has served as advisor and panelist for the Transportation Research Board, the National Science Foundation, the APA, the Army Research Lab, and the Government Accountability Office. 

Frank was associate editor of the Journal of Experimental Psychology: Applied, senior editor of Wiley’s Handbook of Applied Cognition, co-editor of the APA Handbook of Human Systems Integration, and founding editor of the HFES monograph series entitled User’s Guides to Methods in Human Factors and Ergonomics. He has served on several editorial boards, including Human Factors. He co-authored Stories of Modern Technology Failures and Cognitive Engineering Successes. He is a fellow of HFES, APA, the Association for Psychological Science, and the Psychonomic Society. He was awarded APA’s Franklin V. Taylor Award for outstanding achievements in applied experimental and engineering psychology.

His research has been funded by the Federal Aviation Administration, the National Science Foundation, and the Centers for Disease Control, as well as various industries. Most of Frank’s research has focused on cognition in dynamic environments, especially in transportation (primarily air traffic control) and healthcare. He is a co-developer of the Pathfinder scaling algorithm, the SPAM method of assessing situation awareness, and the Threat-Strategy Interview procedure. His current research interests focus on cognitive factors underlying situation understanding and strategy selection.


For part of your career, you have been involved in air traffic control and have seen the use of automation evolve from paper-based flight strips to NextGen automation. In your opinion, what is the biggest upcoming automation-related challenge in this domain?

As you know, people, including big thinkers like Paul Fitts in 1951, have given thought to how to divide up a task between a machine and a person. While we people haven’t changed much, our silicon helpers have. Quite a bit. They’ve progressed to the point that autonomy, and the issues that accompany it, are now both very real. (I’ll get back to autonomy in your other question.) Short of just letting the machine do it, or just doing it yourself, the puzzle of how to divvy up a task remains, although the answer to the puzzle changes.

When I first started doing research for the FAA in the early 90s, there was talk of automation soon to be available that would detect conflicts and suggest ways to resolve them, leaving the controller to choose among recommendations. A deployed version of this was URET, an aid that the controller could use if he or she wanted. In one mode, controllers were given a list-like representation of flight data, much like the paper strips; in another, a graphic representation of flight paths. Either mode depicted conflicts up to 20 minutes out.

I do worry that this new level of automation can take much of the agency away from the controller

When I toured facilities back then, I remember finding a controller who was using the aid when a level red conflict appeared. I waited for him to make changes to resolve the conflict. And waited. He never did anything to either plane in conflict, and yet the conflict was resolved. When I asked him about it, he told me, “Things will probably change before I need to worry about it.” He gave me two insights that stayed with me. One was that in dynamic environments, things change, and the more dynamic the environment, the more likely it is that what you (or your electronic aid) expect and plan for are mere possibilities, not certainties. This influenced much of my subsequent thinking about situation awareness, what it was, and how to measure it.

Next Generation Air Transportation System (NextGen): https://www.nasa.gov/topics/aeronautics/features/8q_nextgen.html


I also realized that day that I would never understand anything unless I understood the strategies that people used.  I didn’t do anything with that realization back then, thinking it would be like trying to nail jello to a wall.  I’m fascinated by strategy research today, but then I was afraid the jello and my career in aviation human factors would both be a mess lying at my feet.

Our big worries with automation that does the thinking for us were things like: Will controllers use the technology? Today we’d call that technology acceptance. Will the smart automation change the job from controlling air traffic to managing it? Of course, when people are put into a situation where they merely observe while the automation does the work, there’s the risk that the human will not truly be engaged and situation awareness will suffer. That’s a real concern, especially if you ever expect the human to take over the task again.

Now there are initiatives and technologies in the FAA that eliminate, or at least reduce, conflictions by optimizing the aircraft sequence and leaving to the controller the task of getting the aircraft to fall in line with that optimization. Imagine that the computer optimizes the spacing and timing of planes landing at an airport. The planes are not, of course, naturally in this optimized pattern, so the computer presents the controller with optimized “plane bubbles.” All the controller has to do is get each plane into its bubble, and conflicts will be reduced and landings optimized. This notion of having the computer do the heavy cognitive lifting of resolving conflicts and optimizing, and then presenting those “targets” to the controller, can be used in a variety of circumstances. Now the “controller” is not even a manager, but in some ways the controller is being kept in the game and should therefore show pretty good situation awareness.
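To make the “bubble” idea concrete, here is a minimal, purely hypothetical sketch of the pattern Frank describes: the computer computes an optimized, conflict-free set of landing slots, and the controller’s job is to steer each aircraft into its slot. The 90-second separation, the callsigns, and the data structures are invented for illustration; this is not the FAA’s actual algorithm.

```python
# Toy illustration of the "plane bubble" pattern: the computer does the
# optimization and hands the controller a target slot for each aircraft.
from dataclasses import dataclass

MIN_SEPARATION_S = 90  # assumed minimum time between successive landings, seconds


@dataclass
class Arrival:
    callsign: str
    eta_s: int  # estimated time of arrival, in seconds from now


def optimize_slots(arrivals: list[Arrival]) -> dict[str, int]:
    """Assign each aircraft an optimized landing slot (its "bubble")."""
    slots: dict[str, int] = {}
    next_free = 0
    for a in sorted(arrivals, key=lambda x: x.eta_s):
        slot = max(a.eta_s, next_free)  # never earlier than the aircraft can arrive
        slots[a.callsign] = slot
        next_free = slot + MIN_SEPARATION_S
    return slots


if __name__ == "__main__":
    traffic = [Arrival("AAA111", 300), Arrival("BBB222", 320), Arrival("CCC333", 600)]
    for callsign, slot in optimize_slots(traffic).items():
        print(f"{callsign}: guide into the bubble at t+{slot}s")
```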

Now I worry that situation awareness will be very local, tied to a specific, perhaps meaningless piece of the overall picture. This time, global SA may be the concern: controllers may have little or no understanding of the big picture of all those planes landing, even if they have good SA of getting a particular plane into the queue.

For some reason, I no longer worry about technology acceptance as I did in 1997. Twenty years later, I do worry that this new level of automation can take much of the agency away from the controller, and with it so much of what makes the job interesting and fun. Retention of controllers might suffer, and those who stay will be less satisfied with their work, which produces other job consequences.

As an end to this answer, I note that much has changed in the last quarter of a century, but we still seem to be following a rather static list of “machines do this and people do that.” Instead, I think the industry needs to adopt the adaptive allocation of tasks that human factors professionals have studied. The question is not really when should the computer sequence flights, but when should that responsibility be handed over to the human. Or when should the computer, detecting a tired controller perhaps, wrest responsibility for separation from him or her.
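As a toy illustration of adaptive allocation (as opposed to a fixed division of labor), the sketch below shifts responsibility for sequencing between controller and automation depending on the situation. The fatigue and traffic-load signals and the thresholds are made up for the example; how to actually measure them is, of course, the hard part.

```python
# Hypothetical sketch of adaptive allocation: responsibility for sequencing
# shifts with the situation instead of following a fixed "machine does X,
# human does Y" list. Signals and thresholds are invented for illustration.
def allocate_sequencing(controller_fatigue: float, traffic_load: float) -> str:
    """Return which agent should hold sequencing responsibility right now.

    Both inputs are assumed to be normalized 0-1 estimates from
    (hypothetical) monitoring systems.
    """
    if controller_fatigue > 0.7:
        return "automation"  # a tired controller sheds the task
    if traffic_load < 0.3:
        return "controller"  # light traffic: keep the human engaged
    return "shared"          # automation proposes a sequence, controller approves


print(allocate_sequencing(controller_fatigue=0.8, traffic_load=0.5))  # -> automation
```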


You are on the Board of Human-Systems Integration for the National Academies of Sciences, Engineering, and Medicine. What is the purpose of the Board, and what is your role?

how they interact within and with complex systems ...must be addressed if we are to solve today’s societal challenges

The National Academies of Sciences, Engineering, and Medicine do their operational work through seven programs governed by the rules of the National Research Council. One of those programs, the Division of Behavioral and Social Sciences and Education, contains the Board of Human Systems Integration, or BOHSI. Established by President Lincoln, the Academies are not a government agency. A consequence of that for the boards is that financing comes through sponsors.

The original Committee on Human Factors was founded in 1980 by the Army, Navy, and Air Force. Since then, BOHSI has been sponsored by a myriad of agencies, including NASA, NIOSH, the FAA, and Veterans Health. I’m proud to say that APA Division 21 and the Human Factors and Ergonomics Society, two organizations I’ve led in the past, are also sponsors.

BOHSI’s mandate is to provide an independent voice on the HSI issues that interest the nation. We provide theoretical and methodological perspectives on people-organization-technology-environment systems. The board itself currently comprises 16 members, including National Academy members, academics, business leaders, and industry professionals, invited from a usually (very) long list of nominations. A visit to the webpage will show the caliber of the members: http://sites.nationalacademies.org/DBASSE/BOHSI/Members/index.htm

The issues BOHSI is asked to address are myriad. Decision makers, leaders, and scholars from other disciplines are becoming increasingly aware that people, and how they interact within and with complex systems, are a critical factor that must be addressed if we are to solve today’s societal challenges. We’ve looked at remotely controlled aviation systems, at self-escape from mining, and at how to establish safety cultures in academic labs, to mention a few.

BOHSI addresses these problems in a variety of ways. The most extensive efforts result in reports like those currently on the webpage: Integrating Social and Behavioral Science within the Weather Enterprise; Personnel Selection in the Pattern Evidence Domain of Forensic Science; and CMV Driver Fatigue, Long-Term Health, and Highway Safety. These reports are generated by committees appointed by BOHSI. A member or two of the board often sits on these working committees, but the majority of each committee is made up of national experts on the specific topic, representing various scientific, policy, and operational perspectives. The hallmark of these reports is that they provide an independent assessment and recommendations for the sponsor and the nation.


As a social scientist studying autonomy, what do you see as the biggest unresolved issue?

As technology advances at an accelerating rate, genuine autonomy becomes a real and exciting possibility. The issues that accompany truly independent automated agents are exciting as well. I think there are a number of questions of interest, and there are lots of smart people looking into them. For example, there’s the critical question of trust. Why did Luke trust R2-D2? (Did R2 trust Luke?) And technology acceptance continues to be with us: Why will elderly folk allow a robot to assist with this task, but not that one?

The issues that accompany truly independent automated agents are exciting as well....Why did Luke trust R2-D2?  (Did R2 trust Luke?)

But I think the biggest issue with autonomy is getting a handle on when responsibility, control, or both switch from one member of the technology-person pair to the other. How can the autonomous car hand over control to the driver? Will the driver have the SA to receive it? How does this handshaking occur if each system does not have an understanding of the state of the other? We don’t really understand the answers to these questions for two humans, let alone for a human and an automaton.

There are indeed ways we can inform the human of the automation’s state, but we can also inform the automaton of the human’s state. Advances in machine learning allow the automaton to learn how the human prefers to interact with it. Advances in augmented cognition can allow us to feed physiological information about the operator to the automaton. If the car knew the driver was stressed (cortisol levels) or tired (eye closures), it might decide not to hand over control.
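As a rough, hypothetical sketch of that last point, the snippet below gates a vehicle-to-driver handover on an estimate of the driver’s state. The stress and eye-closure measures and the cutoffs are assumptions invented for illustration, not a real system.

```python
# Hypothetical handover gate: the car checks an estimate of the driver's
# state before offering control back. Measures and cutoffs are invented.
from dataclasses import dataclass


@dataclass
class DriverState:
    stress_index: float  # e.g., derived from cortisol or heart-rate measures, 0-1
    eye_closure: float   # e.g., a PERCLOS-style fraction of eyelid closure, 0-1


def can_hand_over(state: DriverState) -> bool:
    """Decide whether the vehicle should offer control back to the driver."""
    too_stressed = state.stress_index > 0.7
    too_drowsy = state.eye_closure > 0.3
    return not (too_stressed or too_drowsy)


if __name__ == "__main__":
    driver = DriverState(stress_index=0.4, eye_closure=0.5)
    print("offer handover" if can_hand_over(driver) else "automation retains control")
```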

I should mention here that this kind of separation of responsibilities between machine and human is quite different from the static lists I discussed in my first answer regarding the FAA technology. There, the computer had certain tasks and the controller had others; here, any particular task can belong to either agent, depending on the situation.

I think future work has to really investigate the system properties of the human and technology, and not (just) each alone.