Exploring the Social Science of Human-Autonomy Interaction

Human-Autonomy Sciences

We are psychological scientists / practitioners who are excited about the future of autonomy.  This blog will cover recent developments in human-autonomy sciences with a focus on the social science angle.

[Repost] Prominent Figures Warn of Dangerous Artificial Intelligence (it's probably a bad Human Factors idea, too)

This is an edited repost from the Human Factors Blog, originally published in 2015.

Recently, some very prominent scientists and other figures have warned of the consequences of autonomous weapons, or more generally artificial intelligence run amok.

The field of artificial intelligence is obviously a computational and engineering problem: designing a machine (e.g., a robot) or software that can emulate thinking to a high degree. But eventually, any AI must interact with a human, either by taking control of a situation from a human (e.g., flying a plane) or by suggesting courses of action to a human.

I thought this recent news item about potentially dangerous AI might be a good segue into another discussion of human-automation interaction, specifically a detail that does not frequently get discussed in splashy news articles or by non-human-factors people: degree of automation. This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).

First, to HF researchers, automation is a generic term that encompasses anything that carries out a task once done by a human, such as robotic assembly, medical diagnostic aids, digital camera scene modes, and even hypothetical autonomous weapons with AI. These disparate examples simply differ in their degree of automation.

Let's back up for a bit: Automation can be characterized by two independent dimensions:

  • STAGE or TYPE:  What is it doing and how is it doing it?
  • LEVEL: How much of the task is it doing?

Stage/Type describes WHAT tasks are being automated and sometimes how. Is the task perceptual, like enhancing vision at night or amplifying certain sounds? Or is the automation carrying out a task that is more cognitive, like generating the three best ways to get to your destination in the least amount of time?

The second dimension, Level, refers to the balance of tasks shared between the automation and the human; is the automation doing a tiny bit of the task and then leaving the rest to the user?  Or is the automation acting completely on its own with no input from the operator (or ability to override)?

Figure 1. Degrees of automation (Adapted from Wickens et al., 2010)

See Figure 1. If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y axes of a chart, it becomes clearer how various everyday examples of automation fit into the scheme. As LEVEL and/or STAGE/TYPE increases, we get a higher degree of automation (dotted line).
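To make the two dimensions concrete, here is a minimal sketch in Python (our own illustration, not code from Wickens et al., 2010); the example systems, the five-point level scale, and the simple scoring used to order them are assumptions for demonstration only.

```python
from dataclasses import dataclass
from enum import IntEnum


class Stage(IntEnum):
    """What kind of task is being automated (roughly perception -> action)."""
    INFORMATION_ACQUISITION = 1   # e.g., enhancing vision at night
    INFORMATION_ANALYSIS = 2      # e.g., summarizing or graphing sensed data
    DECISION_SELECTION = 3        # e.g., recommending a course of action
    ACTION_IMPLEMENTATION = 4     # e.g., carrying out the action itself


class Level(IntEnum):
    """How much of the task the automation performs at that stage."""
    NONE = 0       # fully manual
    ADVISORY = 1   # suggests; the human decides and acts
    CONSENT = 2    # acts only if the human approves
    VETO = 3       # acts unless the human vetoes in time
    FULL = 4       # acts with no human input or ability to override


@dataclass
class AutomatedSystem:
    name: str
    stage: Stage
    level: Level

    def degree_of_automation(self) -> int:
        """Crude scalar for ordering examples: higher stage and level -> higher degree.

        The real Wickens et al. scheme is a two-dimensional plot, not a single
        number; this score is only a rough way to sort examples for display.
        """
        return int(self.stage) * int(self.level)


examples = [
    AutomatedSystem("Night-vision goggles", Stage.INFORMATION_ACQUISITION, Level.FULL),
    AutomatedSystem("GPS route suggestion", Stage.DECISION_SELECTION, Level.ADVISORY),
    AutomatedSystem("Hypothetical autonomous weapon", Stage.ACTION_IMPLEMENTATION, Level.FULL),
]

for system in sorted(examples, key=AutomatedSystem.degree_of_automation):
    print(f"{system.name}: stage={system.stage.name}, level={system.level.name}, "
          f"degree~{system.degree_of_automation()}")
```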

Mainstream discussions of AI and its potential dangers seem to be focusing on a hypothetical ultra-high degree of automation.  A hypothetical weapon that will, on its own, determine threats and act.  There are actually very few examples of such a high level of automation in everyday life because cutting the human completely "out of the loop" can have severely negative human performance consequences.

Figure 2. Approximate degrees of automation for everyday examples

Figure 2 shows some everyday examples of automation and where they fit into the scheme.

Wickens et al. (2010) use the phrase, "the higher they are, the farther they fall." This means that when humans interact with greater degrees of automation, they do fine if it works correctly but encounter catastrophic consequences when the automation fails (and it always will at some point). Why? Users get complacent with high-DOA automation, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot recover from the failure so easily.

You may have experienced a mild form of this if your car has a rear backup camera. Have you ever rented a car without one? How did you feel? That feeling of being "out of the loop" tends to get magnified with higher degrees of automation. More on this in an upcoming throwback post.

So, highly autonomous weapons (or any high degree of automation) are not only a philosophically bad/evil idea, they are bad for human performance!

For more discussion on the degree and level of automation, see Arathi's recent Throwback post.

Throwback Thursday: A model for types and levels of automation

This is the second post in our "throwback" series. In this post, I will take you through an article written by some of the best in the human factors and ergonomics field: the late Raja Parasuraman, Tom Sheridan, and Chris Wickens. Though several authors have introduced the concept of automation being implemented at various levels, for me this article nailed it.

The key excerpts from this article are highlighted below along with my commentary. Companies chasing automation blindly should keep these points in mind when designing their systems.

Automation is not all or none, but can vary across a continuum of levels, from the lowest level of fully manual performance to the highest level of full automation.
— Parasuraman, Sheridan, & Wickens, p. 287

This means that between the extremes of a machine offering no assistance to a human and a machine doing everything for the human, there are other automation design options. For example, the machine can offer a suggestion, implement a suggestion only if the human approves, do everything autonomously and then inform the human, or do everything autonomously and inform the human only when asked. Let's consider the context of driving. In the example below, as we move from 1 to 4, the level of automation increases.

  1. I drive my car to work.
  2. I drive my car; KITT (from Knight Rider) tells me the fastest route to work, but I choose to override its suggestion.
  3. I drive my car; KITT tells me the fastest route to work and does not give me the option to override its suggestion.
  4. KITT plans the route and drives me to work.

Automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic.
— Parasuraman, Sheridan, & Wickens, p. 286

The way humans process information can be divided into four stages:

  1. information acquisition, which involves sensing data
  2. information analysis, which involves making inferences from data
  3. decision and action selection, which involves making a decision from among various choices
  4. action implementation, which involves carrying out the action

Here is an example of automation applied at each stage:

  1. information acquisition (sensing data)
    • Example: night-vision goggles that enhance external data
  2. information analysis (making inferences from data)
    • Example: the historical MPG graph in some cars
  3. decision and action selection (choosing among alternatives)
    • Example: Google Maps routing to a destination, where it presents three possible routes based on different criteria
  4. action implementation (carrying out the action)
    • Example: automatic stapling in a photocopier

The authors note that automation can be applied, at varying levels, to each of these stages of human information processing. A rough sketch of this idea appears below.
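Because the level of automation can differ at each stage, one way to picture the model is as a profile of levels across the four stages. Here is a minimal sketch in Python (our own illustration; the 0-10 scale and the numbers assigned to the KITT examples are assumptions, not values from Parasuraman, Sheridan, and Wickens) showing how driving examples 2 and 4 above occupy very different points on the continuum.

```python
from enum import IntEnum
from typing import Dict


class Stage(IntEnum):
    """The four stages of information processing from Parasuraman, Sheridan, & Wickens (2000)."""
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4


# A system is described by the level of automation at each stage:
# 0 = fully manual, 10 = fully automatic (the 0-10 scale is illustrative).
AutomationProfile = Dict[Stage, int]

# Example 2: KITT suggests the fastest route, but the human can override and still drives.
kitt_route_advisor: AutomationProfile = {
    Stage.INFORMATION_ACQUISITION: 8,   # senses traffic and road conditions
    Stage.INFORMATION_ANALYSIS: 8,      # computes candidate routes
    Stage.DECISION_SELECTION: 3,        # recommends a route; driver may override
    Stage.ACTION_IMPLEMENTATION: 0,     # the human does all the driving
}

# Example 4: KITT plans the route and drives the car itself.
kitt_full_self_driving: AutomationProfile = {
    Stage.INFORMATION_ACQUISITION: 10,
    Stage.INFORMATION_ANALYSIS: 10,
    Stage.DECISION_SELECTION: 10,
    Stage.ACTION_IMPLEMENTATION: 10,
}


def describe(name: str, profile: AutomationProfile) -> None:
    """Print the level of automation assigned to each stage."""
    print(name)
    for stage in Stage:
        print(f"  {stage.name:<25} level {profile[stage]:>2}/10")


describe("Example 2: KITT suggests the route, I drive", kitt_route_advisor)
describe("Example 4: KITT plans and drives me to work", kitt_full_self_driving)
```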

An important consideration in deciding upon the type and level of automation in any system design is the evaluation of the consequences for human operator performance in the resulting system.
— Parasuraman, Sheridan, & Wickens, p. 290

Choosing an automation design without any regard for the strengths and limitations of the human operator, or for the characteristics of the environment in which the operator works (e.g., high stress), is not an effective strategy. When choosing the degree of automation, it is important to consider the impacts it may have on the operator.

  • How would it affect the operator's workload?
  • How would it affect the operator's understanding of the environment (in research we call this situation awareness)?
  • How would it affect the combined operator-machine performance?
  • Would operators over-trust the machine and be unable to overcome automation failures?

It is worth noting that NHTSA's current description of vehicle autonomy (figure) is NOT human-centered and is instead focused on the capabilities and tasks of the machine.

From NHTSA.gov (https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety)

Citation: Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 30(3), 286-297.

Downloadable link here.

 

[Reprint] Human-Robot Interaction
This is a reprint of an article authored by Arathi Sethumadhavan, part of the "Research Digest" series originally published in Ergonomics in Design.

Personal service robots are predicted to be the next big thing in technology (e.g., Jones & Schmidlin, 2011). The term personal service robot refers to a type of robot that will assist people in myriad household activities, such as caring for the elderly, gardening, or even assisting children with their homework. Jones and Schmidlin (2011) examined the factors that need to be taken into consideration in the design of personal service robots.

For example, a personal service robot must be able to do the following:

  • Understand users’ intentions and infer the ability of users to accomplish tasks (e.g., does a senior citizen want to get medicine from the cupboard, and, if so, can the senior citizen get the medicine without any help?).
  • Determine the appropriate time to interrupt users (e.g., stopping a person on her way to work to inform her that a trivial task is complete may not be an appropriate time to intercede).
  • Approach users from the appropriate direction (e.g., approaching from front or rear vs. left or right). This depends on the user group (e.g., women vs. men) and the circumstance (e.g., user is sitting vs. standing).
  • Position itself at an appropriate distance from users. This distance is dependent on the users’ attitudes toward robots.
  • Capture users’ attention by identifying receptive users, positioning itself appropriately, and speaking to users.

The physical appearance of a robot is another important element that designers need to take into account. Appearance plays a significant role in the capabilities that users perceive a robot to possess. For example, Lee, Lau, and Hong (2011) found that users expected more emotion and communication (e.g., speech) capabilities from human-like robots compared with machine-like robots.

Further, the appearance of a robot influences the environment in which it is likely to be used. Specifically, human-like robots (which are expected to have more warmth) were preferred for social and service occupations that require interaction with humans, compared with task-oriented occupations.

Like personal service robots, professional robots are becoming increasingly popular. These robots assist people with professional tasks in nonindustrial environments. For example, professional robots are used in urban search-and-rescue missions, with operators remotely in control. Designing robots for use in such complex environments brings a unique set of challenges.

For example, Jones, Johnson, and Schmidlin (2011) found that one of the problems involved with teleoperating urban search-and-rescue robots is that the robot gets stuck because operators lack the ability to accurately judge whether they could drive a robot through an aperture. In that situation, operators may have to jeopardize their lives to retrieve the robot.

The failure to make accurate judgments arises because driveability decisions are based solely on whether the robot is smaller or larger than the aperture and not on the operator's ability to drive the robot through the aperture.

In summary, bear in mind the following points when designing your “R2D2”:

  • A personal service robot must be able to infer the user’s intentions and desires; must determine whether the user is able to complete the task without assistance; needs to decide when to interrupt the user; has to approach and position itself at a suitable distance from the user; and needs to be able to engage the user.
  • The appearance of robots should match users’ mental models. Humans expect human-like robots to have warmth capabilities (e.g., emotion, cognition) and prefer human-like robots in occupations requiring interactions with people. However, not all robots need to be human-like; machine-like robots are considered suitable for task-oriented, blue-collar occupations.
  • Teleoperating a robot successfully through an aperture depends not only on the robot's width but also on a safety margin associated with the operator's control of the robot. Therefore, robots used in urban search-and-rescue missions must be designed to account for the safety margin that operators fail to consider when making driveability judgments.

References

Jones, K. S., Johnson, B. R., & Schmidlin, E. A. (2011). Teleoperation through apertures: Passability versus driveability. Journal of Cognitive Engineering and Decision Making, 5, 10–28. http://edm.sagepub.com/content/5/1/10.full.pdf+html

Jones, K. S., & Schmidlin, E. A. (2011). Human-robot interaction: Toward usable personal service robots. In Reviews of Human Factors and Ergonomics (Vol. 7, pp. 100–148). Santa Monica, CA: Human Factors and Ergonomics Society. http://rev.sagepub.com/content/7/1/100.full.pdf+html

Lee, S., Lau, I. Y., & Hong, Y. (2011). Effects of appearance and functions on likability and perceived occupational suitability of robots. Journal of Cognitive Engineering and Decision Making, 5, 232–250. http://edm.sagepub.com/content/5/2/232.full.pdf+html

Original article link: http://journals.sagepub.com/doi/pdf/10.1177/1064804612449796

What Sci-Fi Movies Can Tell Us about Future Autonomy

I gave a talk a few months ago to a department on campus. It is based on work that Ewart de Visser, PhD, and I are doing on adaptive trust repair with autonomy. That is a complex way of saying we are exploring the possibility of giving machines an active role in managing human-machine trust.

The talk is based on a paper currently under review; it is meant to be fun but also a serious attempt to consider the shape of future autonomy based on fictional representations, with sci-fi movies serving as data. It is about 40 minutes long.

"Alexa, what should I wear today?"
Do you prefer “fashion victim” or “ensembly challenged”?
— Cher Horowitz, Clueless (1995)

I am a bit of a style junkie and draw inspiration from the uber-talented fashion designers and celebrities I follow on Instagram. What I really enjoy is designing my own outfits and accessories, drawing inspiration from the amazing pictures I see there, in a cost-effective manner.

Echo Look, the latest virtual assistant from Amazon that provides wardrobe recommendations, grabbed my attention, but for all the wrong reasons.

Amazon introduced its latest Alexa-powered device, a gadget with a built-in camera that is being marketed as a way to photograph, organize and get recommendations on outfits. Of course, Amazon will then try to sell you clothing, too.

It reminded me of the scene in Clueless (1995) where Cher, the main character, uses an early "app" to help her decide what to wear.

Echo Look can serve as your personal stylist and provide recommendations if you cannot decide between two outfits. The recommendation is based on "style trends" and "what flatters" the user.

But from where does Echo Look draw its style trends? Personally, I would work with a human stylist only after making sure he or she has a good grasp of current trends and understands my needs (comfort and practicality are important), my sensibilities, and most importantly my personality.

Fashion-wise, blindly following the current trends is not an effective strategy.  So, how can I trust a machine that does not know me? To gain trust, the machine should convey to me how it arrived at its decision. Or even better, present the raw data gathered and let me decide what fits me best.

In the automation literature, we refer to this as stages of automation (more on this topic later). What gets automated (deciding for the human versus simply gathering the data for the human) is an important design decision that affects how people perform and behave with automation. I think that high-level automation, simply deciding for the user, does not work in this context!

But Rich disagrees (in research we call this "individual differences"). In the process of writing this post, I found out that Rich is a big fan of Bonobos. Being, as he says, "style-impaired," he especially appreciated a new feature of the app that suggests pairings among the items he has purchased or with new items in the store.

He shared some screenshots of the Bonobos app, which I thought was pretty cool. After you select an item, it will instantly create a complete outfit based on occasion and temperature. Because it is a brand he trusts, and he is not knowledgeable about style, he rarely questions the decisions (Ed: maybe I question why they keep pushing Henleys; I like collars).

So what makes my opinion of Echo different from Rich's reaction to the decision aid in the Bonobos app?  

Experience

Experience comes from years of experimenting with different things (and in the process creating some disastrous looks), understanding trends, and most importantly understanding your body and skin and the textures and colors that suit you best. With experience comes efficiency (in research we call this "expert versus novice differences"). If I am an expert, why would I need to rely on a machine to tell me what to wear or what looks good on me?

However, I am not dismissing the usefulness of Echo for everyone. For millennials who do most of their shopping on Amazon, Echo could provide a lot of value by putting together their outfits based on their shopping choices (Stitch Fix is a similar, popular concept). 

Passion

Even the simplest clothes can look stylish with the right shoes and statement jewelry. I genuinely enjoy the art of styling an outfit; it stimulates my right brain. Why would I give up this activity? For those like Mark Zuckerberg, who think that dressing in the same outfit every day will save their energy for other important things in life (which he does!), Echo may do wonders.

Gender

Now, do I dislike Echo while Rich likes his Bonobos app because I am female and he is male? Statistically speaking, are men more fashion-impaired than women? Or, to be politically correct, do women have more fashion wisdom than men? I don't know. What I do know is that some of my favorite fashion designers (e.g., Manish Malhotra, Prabal Gurung) are men.

Automation Design

A major difference between Echo and the Bonobos app is their level of automation. The Bonobos app provides recommendations on pairing outfits that users have already purchased (an important point: users made the purchasing decision on their own), and users are empowered to use the data presented by the app to decide whether or not to follow the recommendations. The Bonobos app also presents alternate outfits if the first choice is unsatisfactory. This would be considered a form of "low decision automation": it alleviates a moderate load of decision making but leaves some for the user.

Echo, on the other hand, tells users "yay" or "nay" but gives them no information on why certain outfits are not flattering or how it arrived at that decision. The lack of transparency is a major drawback in how Echo is designed and a big reason why it won't work for me. It also represents a much higher form of decision automation, giving users no options beyond the binary yay or nay.

So, will I ever ask the question, "Alexa, what should I wear today?" I will if Alexa convinces me that she understands me and my personality (she should be my fashion partner who evolves as I evolve), collaborates with me, clearly conveys her thought processes, and is nice to me (in research, we call it automation etiquette)!

Style is a way to say who you are without having to speak.
— Rachel Zoe

Autonomy Potpourri: Evil smart houses, trucker hats, & farming

Upcoming Netflix movie: Evil smart house terrorizes street-smart grifter

I'm sure this movie will give people positive and accurate portrayals of AI/autonomy and smart home technology, like Sharknado did for weather phenomena/marine life...

Monroe plays a victim who was a street-smart grifter that has been kidnapped and held captive in order to be part of a fatal experiment. The only thing standing in the way of her freedom is Tau, an advanced artificial intelligence developed by her captor, played by Skrein. Tau is armed with a battalion of drones that automate a futuristic smart house.

Trucker hat that alerts of sleepiness

I bet the main issue will be a problem of false alarms, leading to disuse.

Being a trucker means driving huge distances on demanding deadlines. And one of the biggest dangers in trucking is the threat of drivers falling asleep at the wheel. To celebrate 60 years of truck production in Brazil, Ford decided to try to help the problem by creating a hat that tracks head movements and alerts drivers in danger of snoozing.

Driverless farming in Shropshire

Driverless tractors, combine harvesters and drones have grown a field of crops in Shropshire in a move that could change the face of farming. From sowing the seeds to tending and harvesting the crop, the robot crew farmed a field of barley without humans ever setting foot on the land in a world first. The autonomous vehicles followed a pre-determined path set by GPS to perform each task, while the field was monitored by scientists using self-driving drones.

Throwback Thursday: The Ironies of Automation
If I have seen further, it is by standing on the shoulders of giants
— Isaac Newton, 1675

Don't worry, our Throwback Thursday doesn't involve embarrassing pictures of me or Arathi from 5 years ago. Instead, it is more cerebral. The social science behind automation and autonomy has a long and rich history, and despite being one of the earliest topics of study in engineering psychology, it has even more relevance today.

Instead of re-inventing the wheel, why don't we look at the past literature to see what is still relevant?

In an effort to honor that past but also inform the future, the inaugural "Throwback Thursday" post will highlight scientific literature from the past that is relevant to modern discussion of autonomy.

Both Arathi and I have taught graduate seminars in automation and autonomy so we have a rich treasure trove of literature from which to draw.  Don't worry: while some of the readings can be complex and academic, in deference to our potentially diverse readership, we will focus on key points and discuss their relevance today.

The Ironies of Automation

In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation.  The words of this paper ring especially true today when the design strategy of some companies is to consider the human as an error term to be eliminated:

The designer’s view of the human operator may be that the operator is unreliable and inefficient, so should be eliminated from the system.
— Bainbridge, p. 775

But is this design strategy sustainable?  Bainbridge later wisely points out that:

The second irony is that the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate.
— Bainbridge, p. 775

The paper then discusses how, under such an approach, many unintended problems arise. The ultimate irony, however, is that the implementation of very high levels of automation (including eliminating the driver in a self-driving car) will ultimately lead to a higher workload burden for the "passenger."

A more serious irony is that the automatic control system has been put in because it can do the job better than the operator, but yet the operator is being asked to monitor that it is working effectively.
— Bainbridge, p. 775

[Reprint] Automation: Friend or Foe
This is a reprint of an article authored by Arathi Sethumadhavan, part of a series originally published in Ergonomics in Design in April 2011.

With advancements in technology, automated systems have become an integral part of our society. Modern humans interact with a variety of automated systems every day, ranging from the timer in the microwave oven to the global positioning system in the car to the elevator button.

Just as automation plays a pivotal part in improving the quality of living, it also plays an integral role in reducing operator errors in safety-critical domains. For example, using a simulated air traffic control task, Rovira and Parasuraman (2010) showed that the conflict detection performance of air traffic service providers was higher with reliable automation compared with manual control.

Although automated systems offer several benefits when reliable, the consequences associated with their failure are severe. For example, Rovira and Parasuraman (2010) showed that when the primary task of conflict detection was automated, even highly reliable (but imperfect) automation resulted in serious negative effects on operator performance. Such performance decrements when working with automated systems can be explained by a phenomenon called automation-induced complacency, which refers to lower-than-optimal monitoring of automation by operators (Parasuraman & Manzey, 2010).

High operator workload and high automation reliability contribute to complacency. Experts and novices as well as individuals and teams are prone to automation-induced complacency, and task training does not appear to completely eliminate its effects. However, performance decrements arising from automation-induced complacency can be addressed by applying good design solutions.

In this issue of the Research Digest, John D. Lee, professor of industrial and systems engineering at the University of Wisconsin–Madison, and Ericka Rovira, assistant [Eds. now associate] professor of engineering psychology at West Point, provide automation design guidelines for practitioners based on their research and expertise in the area of human-automation interaction.


What factors need to be taken into consideration when designing automated systems?

Ericka Rovira

  • Determine the level of operator involvement. This should be the first step, as discussed in Rovira, McGarry, and Parasuraman (2007). How engaged should the operator be? Is the operator expected to take over control in the event of an automation failure?
  • Determine the degree of automation appropriate for the domain. The appropriate degree of automation is closely tied to the level of operator involvement. The level of automation in a programmable stopwatch can be very different from the degree of automation in a military reconnaissance task. In the latter task, the failure of the automated aid can have disastrous consequences. As a rule of thumb, as the degree of automation increases, operator involvement declines, and as a result, there is less opportunity for the operator to recover in the face of an automation error.
  • Design automated aids in such a way that operators have adequate time to respond to an automation failure.
  • Make the automation algorithm transparent so that operators are able to build a mental picture of how the automation is functioning. Providing operators with information on the uncertainties involved in the algorithm can help them engage in better information sampling and consequently help them respond quickly to automation failures.

John Lee

Make automation trustable. Appropriate trust and reliance depends on how well the capabilities of the automation are conveyed to the operator. Specific design considerations (Lee & See, 2004) include the following:

  • Design the automation for appropriate trust and not simply greater trust.
  • Show the past performance of the automation.
  • Illustrate the automation algorithms by revealing intermediate results in a way that is comprehensible to the operators.
  • Simplify the algorithms and operation of the automation to make it more understandable.
  • Show the purpose of the automation, design basis, and range of applications in a way that relates to operators’ goals.
  • Evaluate any anthropomorphizing of the automation to ensure appropriate trust.
  • Show how the context affects the performance of the automation and support operators’ assessment of situations relative to the capabilities of the automation.
  • Go beyond the individual operator working with the automation. That is, take into consideration the cultural differences and organizational structure when designing automated systems, because this can influence trust and reliance on automation.

In conclusion, whether automation is an operator’s friend or foe depends largely on how well practitioners are able to consider automation design principles that are paramount for effective human-automation interaction – some of which are outlined here – when designing these systems.

Original article link: http://journals.sagepub.com/doi/pdf/10.1177/1064804611411409

Robot potpourri: Nannies, teachers, and companions

Today's collection of potpourri items inadvertently coalesced into how robots are beginning to weave themselves into our lives; literally from infancy to old age.

[Guardian] 'This is awful': robot can keep children occupied for hours without supervision

Robots do not have the sensitivity or understanding needed for childcare. 

The use of artificial intelligence to aid students' learning dates back to the 1980s, when major technology companies like Lego, Leaf, and Androbot introduced robots to simplify studying and related activities.

Since then, robotic technology has gone through various changes to become more advanced and sophisticated. Meanwhile, a new term, "educational robots," was coined for these classroom robots.

As I reflect back on my own education, I did best in subjects where I admired and respected my teachers.  If this quality is crucial to student-teacher bonding, it suggests that these robotic teachers need to be designed to elicit such emotions.

[CNBC] A wall-crawling robot will soon teach Harvard students how to code

Harvard computer science professor Radhika Nagpal is hoping to use a robotic toy she helped develop, Root, to teach coding languages like Python and JavaScript in her undergraduate courses at Harvard.

The Root prototype is already being used in Harvard research labs. And the Root will be widely available this spring.

[Guardian] In an age of robots, schools are teaching our children to be redundant

Interesting story on how being different from a robot is important for surviving in the future workplace, yet today's schools are designed to produce a 19th-century factory workforce.

In the future, if you want a job, you must be as unlike a machine as possible: creative, critical and socially skilled. So why are children being taught to behave like machines?

[Intuition robotics] A companion for the elderly

Older adults value their independence and want to be able to live in the homes where they have lived for years. Home robots can help them live independently. These robots can do a number of things, from reminding older adults to take medications to encouraging them to exercise. Check out ElliQ, a robot developed by a start-up in Israel for older adults.

ELLI•Q™ is an active aging companion that keeps older adults active and engaged. ELLI•Q seamlessly enables older adults to use a vast array of technologies, including video chats, online games and social media to connect with families and friends and overcome the complexity of the digital world.

ELLI•Q inspires participation in activities by proactively suggesting and instantly connecting older adults to digital content such as TED talks, music or audiobooks; recommending activities in the physical world like taking a walk after watching television for a prolonged period of time, keeping appointments and taking medications on time; and connecting with family through technology like chat-bots such as Facebook Messenger.

Weekend Reading: Fear of AI and Autonomy

In our inaugural post, I alluded to the current discussion surrounding AI/Autonomy as being dominated by philosophers, politicians, and engineers.  They are, of course, working at the forefront of this technology and raise important points.

But focusing on their big-picture concerns may obscure a fuller view of the day-to-day role of this technology and the fact that humans are expected to interact and collaborate with, and in some cases submit to, these systems (social science issues; why this blog exists).

That said, one of the philosophers examining the future role and risks of AI is Nick Bostrom, director of the Future of Humanity Institute. This New Yorker profile from a few years ago (2015) is a great way to get up to speed on the basis of much of the fear of AI.

Bostrom’s sole responsibility at Oxford is to direct an organization called the Future of Humanity Institute, which he founded ten years ago, with financial support from James Martin, a futurist and tech millionaire. Bostrom runs the institute as a kind of philosophical radar station: a bunker sending out navigational pulses into the haze of possible futures. Not long ago, an F.H.I. fellow studied the possibility of a “dark fire scenario,” a cosmic event that, he hypothesized, could occur under certain high-energy conditions: everyday matter mutating into dark matter, in a runaway process that could erase most of the known universe. (He concluded that it was highly unlikely.) Discussions at F.H.I. range from conventional philosophic topics, like the nature of compromise, to the optimal structure of space empires—whether a single intergalactic machine intelligence, supported by a vast array of probes, presents a more ethical future than a cosmic imperium housing millions of digital minds.

Warning: settle in, because this is a typical New Yorker article (i.e., very, satisfyingly long).

The similar-sounding Future of Life Institute has similar goals but is focused on explaining the risks of AI while also dispelling myths.

Autonomy Potpourri from Around the Web

I'm a bit of a news junkie; I start my day by reviewing the 200 or so blogs I follow on a regular basis.  When I come across an item related to autonomy, I'll collect it into what I call a potpourri post.  Some minor commentary follows.

Hello, World

Welcome, please come in!  We are still arranging our furniture but we are happy to have you here.  We (Arathi Sethumadhavan & Richard Pak) are both engineering psychologists.  Arathi works in industry (Senior Director at CORE HF), while Rich is in academia (Associate Professor in Psychology, Clemson University).  

We are both excited and passionate about the future.  Many people think it will be dominated by humans working alongside highly autonomous things (software agents, robots).  These autonomous systems will be present in our work, home, recreation, and even romantic lives.  

However, human behavior with higher levels of autonomy can be counter-intuitive, irrational, and maladaptive.  What will this future look like?  How will humans react?  How will this change us?

As we approach this uncertain future, it becomes even more critical to understand how humans behave and react with such systems.  

In this blog, we will cover news and developments focusing on the social science side of human-automation partnerships; it’s one area we think is not covered enough.  We also think the current conversation is being driven by engineers, philosophers, and politicians; we'd like to make some room at the table for social scientists.  If humans are to interact with future autonomy in a safe and productive way, we must better understand the social science aspects--the human.  

Occasionally, we'll take a deep dive into issues of autonomy with a social science spin.

We hope we have something interesting and unique to contribute and we also hope that you will participate in the discussion.

Richard Pak & Arathi Sethumadhavan