[Reprint] Human-Robot Interaction
This is a reprint of an article, authored by Arathi Sethumadhavan, that is part of the series “Research Digest,” originally published in Ergonomics in Design

Personal service robots are predicted to be the next big thing in technology (e.g., Jones & Schmidlin, 2011). The term personal service robot refers to a type of robot that will assist people in myriad household activities, such as caring for the elderly, gardening, or even assisting children with their homework. Jones and Schmidlin (2011) examined the factors that need to be taken into consideration in the design of personal service robots.

For example, a personal service robot must be able to do the following:

  • Understand users’ intentions and infer the ability of users to accomplish tasks (e.g., does a senior citizen want to get medicine from the cupboard, and, if so, can the senior citizen get the medicine without any help?).
  • Determine the appropriate time to interrupt users (e.g., stopping a person on her way to work to inform her that a trivial task is complete may not be an appropriate time to intercede).
  • Approach users from the appropriate direction (e.g., approaching from front or rear vs. left or right). The appropriate direction depends on the user group (e.g., women vs. men) and the circumstance (e.g., user is sitting vs. user is standing).
  • Position itself at an appropriate distance from users. This distance is dependent on the users’ attitudes toward robots.
  • Capture users’ attention by identifying receptive users, positioning itself appropriately, and speaking to users.

The physical appearance of a robot is another important element that designers need to take into account. Appearance plays a significant role in the capabilities that users perceive a robot to possess. For example, Lee, Lau, and Hong (2011) found that users expected more emotion and communication (e.g., speech) capabilities from human-like robots compared with machine-like robots.

Further, the appearance of a robot influences the environment in which it is likely to be used. Specifically, human-like robots (which are expected to have more warmth) were preferred for social and service occupations requiring interaction with humans, as opposed to task-oriented occupations.

Like personal service robots, professional robots are becoming increasingly popular. These robots assist people with professional tasks in nonindustrial environments. For example, professional robots are used in urban search-and-rescue missions, with operators remotely in control. Designing robots for use in such complex environments brings a unique set of challenges.

For example, Jones, Johnson, and Schmidlin (2011) found that one of the problems with teleoperating urban search-and-rescue robots is that the robot gets stuck because operators cannot accurately judge whether they can drive the robot through an aperture. In that situation, operators may have to jeopardize their lives to retrieve the robot.

This failure to make accurate judgments arises because driveability decisions are based solely on whether the robot is smaller or larger than the aperture, not on the operator's ability to drive the robot through it.
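To make the distinction concrete, here is a minimal sketch in Python (the widths and the 10-cm safety margin are hypothetical illustrations, not values from the study):

```python
# Passability: is the robot physically smaller than the aperture?
# Driveability: can the operator actually steer it through, which
# requires extra clearance (a safety margin) to absorb control error.

def is_passable(robot_width: float, aperture_width: float) -> bool:
    """The bare size comparison operators tend to rely on."""
    return robot_width < aperture_width

def is_driveable(robot_width: float, aperture_width: float,
                 safety_margin: float = 0.10) -> bool:
    """Adds a margin for imperfect control; 0.10 m is illustrative."""
    return robot_width + safety_margin < aperture_width

# A robot 0.48 m wide facing a 0.50 m aperture "fits" (passable) but
# is not reliably driveable once control error is taken into account.
print(is_passable(0.48, 0.50))   # True
print(is_driveable(0.48, 0.50))  # False
```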

In summary, bear in mind the following points when designing your “R2-D2”:

  • A personal service robot must be able to infer the user’s intentions and desires; must determine whether the user is able to complete the task without assistance; needs to decide when to interrupt the user; has to approach and position itself at a suitable distance from the user; and needs to be able to engage the user.
  • The appearance of robots should match users’ mental models. Humans expect human-like robots to have warmth capabilities (e.g., emotion, cognition) and prefer human-like robots in occupations requiring interactions with people. However, not all robots need to be human-like; machine-like robots are considered suitable for task-oriented, blue-collar occupations.
  • Teleoperating a robot successfully through an aperture depends not only on the robot’s width but also on a safety margin associated with the operator’s control of the robot. Therefore, robots used in urban search-and-rescue missions must be designed to account for the safety margin that operators fail to consider when making driveability judgments.

References

Jones, K. S., Johnson, B. R., & Schmidlin, E. A. (2011). Teleoperation through apertures: Passability versus driveability. Journal of Cognitive Engineering and Decision Making, 5, 10–28. http://edm.sagepub.com/content/5/1/10.full.pdf+html

Jones, K. S., & Schmidlin, E. A. (2011). Human-robot interaction: Toward usable personal service robots. In Reviews of Human Factors and Ergonomics (vol. 7, pp. 100–148). Santa Monica, CA: Human Factors and Ergonomics Society. http://rev.sagepub.com/content/7/1/100.full.pdf+html

Lee, S., Lau, I. Y., & Hong, Y. (2011). Effects of appearance and functions on likability and perceived occupational suitability of robots. Journal of Cognitive Engineering and Decision Making, 5, 232–250. http://edm.sagepub.com/content/5/2/232.full.pdf+html

Original article link: http://journals.sagepub.com/doi/pdf/10.1177/1064804612449796

"Alexa, what should I wear today?"
Do you prefer “fashion victim” or “ensembly challenged”?
— Cher Horowitz, Clueless (1995)

I am a bit of a style junkie and draw inspiration from the uber-talented fashion designers and celebrities I follow on Instagram. What I really enjoy is designing my own outfits and accessories, inspired by those amazing pictures, in a cost-effective manner.

Echo Look, the latest virtual assistant from Amazon, which provides wardrobe recommendations, grabbed my attention, but for all the wrong reasons.

Amazon introduced its latest Alexa-powered device, a gadget with a built-in camera that is being marketed as a way to photograph, organize and get recommendations on outfits. Of course, Amazon will then try to sell you clothing, too.

It reminded me of the scene in Clueless (1995) where Cher, the main character, uses an early "app" to help her decide what to wear.

Echo Look can serve as your personal stylist and provide a recommendation when you are torn between two outfits. The recommendation is based on "style trends" and "what flatters" the user.

But from where does Echo Look draw its style trends? Personally, I will work with a human stylist only after making sure he or she has a good grasp of current trends and understands my needs (comfort and practicality are important), my sensibilities, and, most importantly, my personality.

Fashion-wise, blindly following current trends is not an effective strategy. So, how can I trust a machine that does not know me? To gain my trust, the machine should convey how it arrived at its decision. Or, even better, present the raw data it gathered and let me decide what fits me best.

In the automation literature, we refer to this as stages of automation (more on this topic later). What gets automated (deciding for the human versus simply gathering data for the human) is an important design decision that affects how people perform and behave with automation. I think that high-level automation, where the machine simply decides, does not work in this context!

But Rich disagrees (in research we call this “individual differences”). In the process of writing this post, I found out that Rich is a big fan of Bonobos. Being, as he says, "style-impaired," he especially appreciates a new feature of the app that suggests pairings of the items he's purchased, with one another or with new items in the store.

He shared some screenshots of the Bonobos app, which I thought was pretty cool. After you select an item, the app instantly creates a complete outfit based on occasion and temperature. Because it is a brand he trusts, and he is not knowledgeable about style, he rarely questions the decisions (Ed: maybe I question why they keep pushing Henleys; I like collars).

So what makes my opinion of Echo different from Rich's reaction to the decision aid in the Bonobos app?  

Experience

Experience comes from years of experimenting with different things (and, in the process, creating some disastrous looks), understanding trends, and, most importantly, understanding your body and skin and the textures and colors that suit you best. With experience comes efficiency (in research we call this "expert versus novice differences"). If I am an expert, why would I need a machine to tell me what to wear or what looks good on me?

However, I am not dismissing the usefulness of Echo for everyone. For millennials who do most of their shopping on Amazon, Echo could provide a lot of value by putting together their outfits based on their shopping choices (Stitch Fix is a similar, popular concept). 

Passion

Even the simplest clothes can look stylish with the right shoes and statement jewelry. I genuinely enjoy the art of styling an outfit; it stimulates my right brain. Why would I give that up? For those who, like Mark Zuckerberg, think that dressing in the same outfit every day saves energy for the more important things in life (and he does!), Echo may do wonders.

Gender

Now, do I dislike Echo while Rich likes his Bonobos app because I am a woman and he is a man? Statistically speaking, are men more fashion-impaired than women? Or, to be politically correct, do women have more fashion wisdom than men? I don't know. What I do know is that some of my favorite fashion designers (e.g., Manish Malhotra, Prabal Gurung) are men.

Automation Design

A major difference between Echo and the Bonobos app is their level of automation. The Bonobos app provides recommendations for pairing items that users have already purchased (an important point: users made the purchasing decisions on their own), but users are empowered to use the data the app presents to decide whether to follow the recommendations. The app also presents alternate outfits if the first choice is unsatisfactory. This is a form of “low decision automation”: it alleviates a moderate share of the decision-making load but leaves some for the user.

Echo, on the other hand, tells users "yay" or "nay" on an outfit but gives them no information on how it arrived at that decision. This lack of transparency is a major drawback in how Echo is designed and a big reason why it won't work for me. Echo also represents a much higher level of decision automation: it gives users no alternate options, only the binary yay or nay.
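To make this design difference concrete, here is a minimal sketch in Python; the outfits, scores, and rationales are entirely hypothetical, and neither product's actual algorithm is public:

```python
from dataclasses import dataclass

@dataclass
class Outfit:
    name: str
    score: float  # hypothetical style score from some model
    reason: str   # human-readable rationale

CANDIDATES = [
    Outfit("navy blazer + chinos", 0.82, "matches occasion and mild weather"),
    Outfit("henley + jeans", 0.74, "casual; pairs with items you own"),
    Outfit("floral shirt + shorts", 0.41, "clashes with the stated occasion"),
]

def high_decision_automation(outfits):
    """Echo-style: a bare verdict, with no rationale and no alternatives."""
    best = max(outfits, key=lambda o: o.score)
    return f"Wear: {best.name}"

def low_decision_automation(outfits):
    """Bonobos-style: ranked options with reasons; the user decides."""
    ranked = sorted(outfits, key=lambda o: o.score, reverse=True)
    return [f"{o.name} (score {o.score:.2f}): {o.reason}" for o in ranked]

print(high_decision_automation(CANDIDATES))
for line in low_decision_automation(CANDIDATES):
    print(line)
```

The second design exposes the data behind the recommendation, which is exactly the transparency I am asking Echo for.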

So, will I ever ask, "Alexa, what should I wear today?" I will if Alexa convinces me that she understands me and my personality (she should be my fashion partner who evolves as I evolve), collaborates with me, clearly conveys her thought processes, and is nice to me (in research, we call this automation etiquette)!

Style is a way to say who you are without having to speak.
— Rachel Zoe
[Reprint] Automation: Friend or Foe
This is a reprint of an article, authored by Arathi Sethumadhavan, that is part of a series of articles originally published in Ergonomics in Design in April 2011.

With advancements in technology, automated systems have become an integral part of our society. Modern humans interact with a variety of automated systems every day, ranging from the timer in the microwave oven to the global positioning system in the car to the elevator button.

Just as automation plays a pivotal part in improving the quality of living, it also plays an integral role in reducing operator errors in safety-critical domains. For example, using a simulated air traffic control task, Rovira and Parasuraman (2010) showed that the conflict detection performance of air traffic service providers was higher with reliable automation compared with manual control.

Although automated systems offer several benefits when reliable, the consequences associated with their failure are severe. For example, Rovira and Parasuraman (2010) showed that when the primary task of conflict detection was automated, even highly reliable (but imperfect) automation resulted in serious negative effects on operator performance. Such performance decrements when working with automated systems can be explained by a phenomenon called automation-induced complacency, which refers to lower-than-optimal monitoring of automation by operators (Parasuraman & Manzey, 2010).

High operator workload and high automation reliability contribute to complacency. Experts and novices, as well as individuals and teams, are prone to automation-induced complacency, and task training does not appear to completely eliminate its effects. However, performance decrements arising from automation-induced complacency can be addressed by applying good design solutions.

In this issue of the Research Digest, John D. Lee, professor of industrial and systems engineering at the University of Wisconsin–Madison, and Ericka Rovira, assistant [Eds. now associate] professor of engineering psychology at West Point, provide automation design guidelines for practitioners based on their research and expertise in the area of human-automation interaction.


What factors need to be taken into consideration when designing automated systems?

Ericka Rovira

  • Determine the level of operator involvement. This should be the first step, as discussed in Rovira, McGarry, and Parasuraman (2007). How engaged should the operator be? Is the operator expected to take over control in the event of an automation failure?
  • Determine the degree of automation appropriate for the domain. The appropriate degree of automation is closely tied to the level of operator involvement. The degree of automation appropriate for a programmable stopwatch can be very different from the degree appropriate for a military reconnaissance task. In the latter task, the failure of the automated aid can have disastrous consequences. As a rule of thumb, as the degree of automation increases, operator involvement declines, and as a result, there is less opportunity for the operator to recover in the face of an automation error.
  • Design automated aids in such a way that operators have adequate time to respond to an automation failure.
  • Make the automation algorithm transparent so that operators are able to build a mental picture of how the automation is functioning. Providing operators with information on the uncertainties involved in the algorithm can help them engage in better information sampling and consequently help them respond quickly to automation failures.
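As a concrete illustration of that last point, consider a conflict-detection aid that reports its confidence along with its verdict, cueing the operator when to cross-check the raw data. This is a minimal sketch in Python; the alert format, the 0.7 threshold, and the data are hypothetical, not drawn from the cited studies:

```python
from dataclasses import dataclass

@dataclass
class ConflictAlert:
    aircraft_pair: tuple  # e.g., ("AAL12", "UAL34")
    predicted: bool       # does the aid predict a conflict?
    confidence: float     # 0..1, the algorithm's estimated certainty

def render_alert(alert: ConflictAlert) -> str:
    """An opaque aid would return only 'CONFLICT' or 'CLEAR'. A
    transparent aid also conveys its uncertainty, prompting the
    operator to sample the raw radar picture when confidence is low."""
    verdict = "CONFLICT" if alert.predicted else "CLEAR"
    label = f"{verdict} ({alert.confidence:.0%} confident)"
    if alert.confidence < 0.7:  # illustrative threshold
        label += " -- verify against raw data"
    return label

print(render_alert(ConflictAlert(("AAL12", "UAL34"), True, 0.95)))
print(render_alert(ConflictAlert(("DAL56", "SWA78"), True, 0.55)))
```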

John Lee

Make automation trustable. Appropriate trust and reliance depends on how well the capabilities of the automation are conveyed to the operator. Specific design considerations (Lee & See, 2004) include the following:

  • Design the automation for appropriate trust and not simply greater trust.
  • Show the past performance of the automation.
  • Illustrate the automation algorithms by revealing intermediate results in a way that is comprehensible to the operators.
  • Simplify the algorithms and operation of the automation to make it more understandable.
  • Show the purpose of the automation, design basis, and range of applications in a way that relates to operators’ goals.
  • Evaluate any anthropomorphizing of the automation to ensure appropriate trust.
  • Show how the context affects the performance of the automation and support operators’ assessment of situations relative to the capabilities of the automation.
  • Go beyond the individual operator working with the automation. That is, take into consideration the cultural differences and organizational structure when designing automated systems, because this can influence trust and reliance on automation.

In conclusion, whether automation is an operator’s friend or foe depends largely on how well practitioners consider the automation design principles that are paramount for effective human-automation interaction – some of which are outlined here – when designing these systems.

Original article link: http://journals.sagepub.com/doi/pdf/10.1177/1064804611411409

Robot potpourri: Nannies, teachers, and companions

Today's collection of potpourri items inadvertently coalesced around how robots are beginning to weave themselves into our lives, literally from infancy to old age.

[Guardian] 'This is awful': robot can keep children occupied for hours without supervision

Robots do not have the sensitivity or understanding needed for childcare. 
The use of artificial intelligence to aid students' learning dates back to the 1980s, when major technology companies like Lego, Leaf, and Androbot introduced robots to simplify studying and related activities.

Since then, robotic technology has gone through various changes to become more advanced and sophisticated. Meanwhile, a new term, educational robots, was coined for these "classroom robots."

As I reflect on my own education, I did best in subjects where I admired and respected my teachers. If this quality is crucial to student-teacher bonding, it suggests that robotic teachers need to be designed to elicit such emotions.

[CNBC] A wall-crawling robot will soon teach Harvard students how to code

Harvard computer science professor Radhika Nagpal is hoping to use a robotic toy she helped develop, Root, to teach coding languages like Python and JavaScript in her undergraduate courses at Harvard.

The Root prototype is already being used in Harvard research labs. And the Root will be widely available this spring.

[Guardian] In an age of robots, schools are teaching our children to be redundant

An interesting story on how being unlike a robot will be important for surviving in the future workplace, yet today's schools are designed to produce a 19th-century factory workforce.

In the future, if you want a job, you must be as unlike a machine as possible: creative, critical and socially skilled. So why are children being taught to behave like machines?

[Intuition Robotics] A companion for the elderly

Older adults value their independence and want to be able to stay in the homes where they have lived for years. Home robots can help them live independently. These robots can do a number of things, from reminding older adults to take medications to encouraging them to exercise. Check out ElliQ, a robot developed by an Israeli startup for older adults.

ELLI•Q™ is an active aging companion that keeps older adults active and engaged. ELLI•Q seamlessly enables older adults to use a vast array of technologies, including video chats, online games and social media to connect with families and friends and overcome the complexity of the digital world.

ELLI•Q inspires participation in activities by proactively suggesting and instantly connecting older adults to digital content such as TED talks, music or audiobooks; recommending activities in the physical world like taking a walk after watching television for a prolonged period of time, keeping appointments and taking medications on time; and connecting with family through technology like chat-bots such as Facebook Messenger.
Hello, World

Welcome, please come in!  We are still arranging our furniture but we are happy to have you here.  We (Arathi Sethumadhavan & Richard Pak) are both engineering psychologists.  Arathi works in industry (Senior User Research Manager at Microsoft), while Rich is in academia (Associate Professor in Psychology, Clemson University).  

We are both excited and passionate about the future.  Many people think it will be dominated by humans working alongside highly autonomous things (software agents, robots).  These autonomous systems will be present in our work, home, recreation, and even romantic lives.  

However, human behavior with higher levels of autonomy can be counter-intuitive, irrational, and maladaptive.  What will this future look like?  How will humans react?  How will this change us?

As we approach this uncertain future, it becomes even more critical to understand how humans behave and react with such systems.  

In this blog, we will cover news and developments focusing on the social science side of human-automation partnerships; it’s one area we think is not covered enough.  We also think the current conversation is being driven by engineers, philosophers, and politicians; we'd like to make some room at the table for social scientists.  If humans are to interact with future autonomy in a safe and productive way, we must better understand the social science aspects--the human.  

Occasionally, we'll take a deep dive into issues of autonomy with a social science spin.

We hope we have something interesting and unique to contribute and we also hope that you will participate in the discussion.

Richard Pak & Arathi Sethumadhavan
