Human-Autonomy Sciences

We are psychological scientists and practitioners who are excited about the future of autonomy. This blog covers recent developments in the human-autonomy sciences, with a focus on the social science angle.

Maggie Jackson: Technology, distraction, digital health, and the future

About Maggie Jackson

Photo credit: Karen Smul

Maggie Jackson is an award-winning author and journalist known for her writings on technology’s impact on humanity. Her acclaimed book Distracted: Reclaiming Our Focus in a World of Lost Attention was compared by Fast Company magazine to Silent Spring for its prescient warnings of a looming crisis in attention. The book, with a foreword by Bill McKibben, will be published in a new updated edition in September.

Jackson’s articles have appeared in The New York Times, The Wall Street Journal, the Los Angeles Times, and on National Public Radio, among many other outlets, and her work and comments have been featured in media worldwide. Her essays appear in numerous anthologies, including The State of the American Mind: Sixteen Leading Critics on the New Anti-Intellectualism (Templeton, 2015) and The Digital Divide (Penguin, 2010).

A former Boston Globe contributing columnist, Jackson is the recipient of Media Awards from the Work-Life Council of the Conference Board, the Massachusetts Psychological Association, and the Women’s Press Club of New York. She was a finalist for the Hillman Prize, one of journalism’s highest honors for social justice reporting, and has served as a Visiting Fellow at the Bard Graduate Center, an affiliate of the Institute for the Future in Palo Alto, and a University of Maryland Journalism Fellow in Child and Family Policy. A graduate of Yale University and the London School of Economics with highest honors, Jackson lives with her family in New York and Rhode Island.


How can technology facilitate a healthy work-life balance? 

I believe that the crucial question today is improving the balance between digital and non-digital worlds

Over the last 20 years, technology has changed human experience of time and space radically. Distance no longer matters much, nor duration, as devices allow us to fling our bodies and thoughts around the globe near-instantly. While on a business trip, a parent can skype a bedtime story with a child at home. The boss can reach a worker who’s hiking on a remote mountaintop. Technology has broken down cultural and physical boundaries and walls – making home, work, and relationships portable. That’s old news now, and yet we’re still coming to grips with the deep impact of such changes. 

For instance, it’s becoming more apparent that the anywhere-anytime culture isn’t simply a matter of carrying our work or home lives around with us and attending to them as we wish. It’s far from that simple. First, today’s devices are designed to be insistent, intrusive systems of delivery, so any single object of our focus – an email, a text, a news alert – is in competition with others at every minute. We now inhabit spaces of overlapping, often-conflicting commitments and so have trouble choosing the nature and pace of our focus.

The overall result, I believe, is a life of continual negotiation of roles and attentional priorities. Constant checking behavior (polls suggest Americans check their phones on average up to 150 times a day) is a visible symptom of the need to rewrite work-life balance dozens of times a day. The “fear of missing out” that partly drives always-on connectivity also is a symptom of the necessity of continually renegotiating the fabric of life on- and off-line. 

Because this trend toward boundary-less living is so tech-driven, I believe that the crucial question today is improving the balance between digital and non-digital worlds. After that, work-life balance will follow. 

We need to save time for uninterrupted social presence, the kind that nurtures deeper relationships. We urgently need space in our lives where we are not mechanically poked, prodded, and managed, i.e., when we are in touch with and able to manage our inner lives. (Even a silent phone in “off” mode undercuts both focus and cognitive ability, according to research by Adrian Ward at the University of Texas at Austin.)

One solution would be to think more deliberately about boundaries in all parts of our life, but especially in the digital sphere. Too often lines of division are seen as a confinement, a kind of archaic Industrial Age habit. But boundaries demarcate; think of a job description, a child’s bedtime, or the invention of the weekend, a ritual that boosts well-being even among the jobless. Boundaries are systems of prioritization, safety zones, structures for depth, and crucial tools for providing structure in a digital age. A family that turns off its cell phones at dinner is creating opportunities for the kind of in-depth bonding that rarely is forged online.

Technology can help facilitate creative boundary-making – think of the new Apple and Google product designs that prompt offline time. But our devices cannot do the work of inventing and managing the boundaries that are crucial for human flourishing. 


Can you tell us about your new book?

My new book will draw from my research into how technology is changing our ideas of what it means to “know” something and what it means to be smart

I have a couple of book projects on the front burner. My most recent book, Distracted: Reclaiming Our Focus in a World of Lost Attention, explores the fragmentation of focus and the science of attention in the digital age. One of the first books to warn of our current crisis of inattention, it’s been compared by Fast Company magazine to Rachel Carson’s Silent Spring, and will be published in a new updated edition in September. 

After I finished that book, I realized that attention, as crucial a human faculty as it is, is nevertheless a vehicle, a means to whatever goals we are pursuing. And I began to see that if we have a moment’s focus, the crucial next stepping stone to human flourishing is to be able to think well, especially in a digital age. Those musings have led me on a multi-year journey into the nature of deliberation and contemplation, and in particular to the realization that uncertainty is the overlooked gateway or keystone to good thinking in an age of snap judgment.


We think of uncertainty as something to avoid, particularly in an age that quite narrowly defines productivity and efficiency and good thinking as quick, automatic, machine-like, neat, packaged, and outcome-oriented. Of course humans need to pursue resolution, yet the uncertainty that we scorn is a key trigger to deep thought and itself a space of possibilities. Without giving uncertainty its due, humans don’t have choices. When we open ourselves to speculation or a new point of view, we create a space where deeper thinking can unfold. 

My new book will draw from my research into how technology is changing our ideas of what it means to “know” something and what it means to be smart. As well, I am drawing from new research on the upsides of uncertainty in numerous domains, including medicine, business, education, philosophy, and of course psychology/cognitive science. It’s even a topic of conversation and interest in the HCI world, as Rich Pak and others have told me.

I believe that today more and more people are retreating politically, psychologically, and culturally into narrow-mindedness, but I am heartened by the possibility that we can envision uncertainty as a new language for critical thinking.


What does the future of human relationships with technology look like: good, bad, or ugly?

The essential question is: will our technologies help us flourish? The potential – the wondrous abundance, the speed of delivery, the possibility for augmenting the human or inspiring new art forms – is certainly there. But I would argue that at the moment we aren’t for the most part using these tools wisely, mostly because we aren’t doing enough to understand technology’s costs, benefits, and implications.

I’ve been thinking a lot about one of technology’s main characteristics: instantaneity. When information is instant, answers begin to seem so, too. After a brief dose of online searching, people become significantly less willing to struggle with complex problems; their “need for cognition” drops even as they begin to overestimate their ability to know. (The findings echo the well-documented “automation effect,” in which humans stop trying to get better at their jobs when working closely with machines, such as automated cockpits.) In other experiments, people on average ranked themselves far better at locating information than at thinking through a problem themselves.

Overall, the instantaneity that is so commonplace today may shift our ideas about what human cognition can be. I see signs that people have less faith in their own mental capacities, as well as less desire to do the hard work of deliberation. Their faith increasingly lies with technology instead. These trends will affect a broad range of future activities: whether people can manage a driverless car gone awry, or even think it’s their role to do so; whether they still recognize the value of “inefficient” cognitive states of mind such as daydreaming; and whether they have the tenacity to push beyond the surface understanding of a problem on their own. Socially, similar risks are raised by instant access to relationships – whether to a friend on social media or to a companion robot that’s always beside a child or elder. Suddenly the awkwardness of depth need no longer trouble us as humans!

These are the kinds of questions that we urgently need to be asking across society in order to harness technology’s powers well. We need to ask better questions about the unintended consequences and the costs/benefits of instantaneity, or of gaining knowledge from essentially template-based formats. We need to be vigilant in understanding how humans may be changed when technology becomes their nursemaid, coach, teacher, companion.


Recently, an interview with the singer Taylor Goldsmith of the LA rock band Dawes caught my eye. The theme of the band’s latest album, Passwords, is hacking, surveillance and espionage. “I recognize what modern technology serves,” he told the New York Times. “I’m just saying, ‘let’s have more of a conversation about it.’” 

Well, there is a growing global conversation about technology’s effects on humanity, as well there should be. But we need to do far more to truly understand and so better shape our relations with technology. That should mean far more robust schooling of children in information literacy, the market-driven nature of the Net, and critical thinking skills in general. That should mean training developers to become more accountable to users, perhaps by trying to visualize more completely the unintended consequences of their creations. It certainly must mean becoming more measured in our own personal attitudes; we all too often still gravitate to exclusively dystopian or utopian viewpoints on technology.

Will we have good, bad, or ugly future relations to technology? At best, we’ll have all of the above. But at the moment, I believe that we are allowing technology in its present forms to do far more to diminish human capabilities than to augment them. By better understanding technology, we can avert this frightening scenario.

33 Questions Psychology Must Answer...

The American Psychological Association recently asked 33 psychologists to identify critical questions yet to be answered in their specific areas of psychology. I had the honor of answering for Engineering Psychology (the human factors division):

Leaps in technological evolution will turn simple tools into autonomous teammates that have the ability to communicate with us in ways that are even more personal and accessible. A diverse range of new users will collaborate with these entities in new settings. The goal of engineering psychology has always been to enhance the safety, performance, and satisfaction of human-machine interaction. We must adapt to the idea that these machines are quickly changing and becoming less tool-like and more human-like. How will this new human/machine paradigm affect human safety, satisfaction, and performance?
— http://www.apamonitor-digital.org/apamonitor/20180708/MobilePagedReplica.action?pm=1&folio=47#pg50

Check out the other interesting questions from other areas; AI is mentioned a few times too!

Subscription options...

You can subscribe to updates via this blog's RSS feed.  The feed address is:

https://www.humanautonomy.com/blog?format=rss

You can enter it into your favorite feed reader or other news aggregation program.  This blog is also a channel in Apple News.
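For the programmatically inclined, here is a minimal sketch of reading the feed in Python with the third-party feedparser library (an assumption on my part; any feed library would work):

```python
# Minimal sketch: fetch the blog's RSS feed and list recent post titles.
# Assumes the third-party feedparser package (pip install feedparser).
import feedparser

FEED_URL = "https://www.humanautonomy.com/blog?format=rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:  # the ten most recent posts
    print(f"- {entry.title} ({entry.get('published', 'no date')})")
```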

Technology (AI), humans, and the future...

My friend, journalist Maggie Jackson, recently sent me an interesting article in The New York Times Magazine about one of the new complexities in the relationship between humans and AI:

In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

The source of these issues is that AI decision making is hidden, and in many ways non-deterministic – we don't know what it will come up with or how it got there! We discuss this a bit in our recently published paper.

Maggie Jackson will be leading a discussion at the Google I/O developer conference on building healthy technologies. In that session, many existing and new issues regarding human-AI and human-technology interaction will be discussed.

In this Keynote Session, journalist Maggie Jackson, a specialist in how technology impacts humanity, talks to Adam Alter, Professor of Psychology at NYU, about why enabling a healthy tech life balance is important, and what can be done when building apps and services to make healthier products.
JUST PUBLISHED: From “automation” to “autonomy”: The importance of trust repair in human-machine interaction

My colleagues Ewart de Visser and Tyler Shaw and I recently published a theoretical paper discussing how the field of human factors might need to adapt to study human-autonomy issues:

Abstract

Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock (Hancock, 2017) issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centered approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user’s needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems.

Practitioner summary

This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will ensure future technology maintains and repairs relationships with their human partners.

Link: https://www.tandfonline.com/doi/abs/10.1080/00140139.2018.1457725

Thoughts on the first fatal self-driving car accident

You have no doubt heard about the unfortunate fatal accident involving a self-driving car killing a pedestrian (NYT).  

This horrible event might be the "stock market correction" of the self-driving car world that was sorely needed to re-calibrate the public's unrealistic expectations about the capability of these systems.

In the latest news, the Tempe police have released video footage that shows the front and in-vehicle camera view just before impact.  

My first impression of the video was that it seemed like something the car should have detected and avoided. In such a visually challenging condition as illustrated in the video, a human driver would have great difficulty seeing the pedestrian in the shadowed area. Humans have inferior vision and slower reaction times compared to computers (cf. Fitts' list, 1951).

One interesting narrative thread that has come out of the coverage, and is evident in the Twitter comments for the video, is the idea that the "Fatal Uber crash [was] likely 'unavoidable' for any kind of driver."  People seem to be understanding of the difficulty of the situation, and thus their trust in these autonomous systems is likely to be only somewhat negatively affected.  But should it be more affected?  Autonomous vehicles, with their megaflops of computing power and advanced sensors, were never expected to be "any kind of driver"--they were supposed to be much better.

But the car, outfitted with radar-based sensors, should have "seen" the pedestrian.  I'm certainly not blaming the engineers.  Determining the threshold for signal (pedestrian) versus noise is probably an active area of development and one that they were testing.
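To make that signal-versus-noise tradeoff concrete, here is a toy signal detection sketch of my own (purely illustrative, not Uber's actual pipeline): when the sensor evidence distributions for "no pedestrian" and "pedestrian" overlap, any fixed decision threshold trades false alarms against misses.

```python
# Toy signal detection sketch (my illustration, not Uber's pipeline):
# sensor evidence for "noise" vs. "pedestrian" modeled as overlapping
# Gaussians. Raising the threshold cuts false alarms but raises misses.
from statistics import NormalDist

noise = NormalDist(mu=0.0, sigma=1.0)       # evidence when no pedestrian present
pedestrian = NormalDist(mu=2.0, sigma=1.0)  # evidence when a pedestrian is present

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_alarm_rate = 1 - noise.cdf(threshold)  # noise mistaken for a pedestrian
    miss_rate = pedestrian.cdf(threshold)        # pedestrian dismissed as noise
    print(f"threshold={threshold:.1f}: "
          f"false alarms={false_alarm_rate:.1%}, misses={miss_rate:.1%}")
```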

Continuing story and thoughts...

Kitchen robots potpourri


The World's First Home Robotic Chef Can Cook Over 100 Meals

This year, Moley, the first robotic kitchen, will be launched by a London-based company that has access to chefs and their recipes worldwide. It is expected to cook and clean up after itself, though it apparently does not completely eliminate human supervision.

The machine works by having you first specify the number of portions, type of cuisine, dietary restrictions, calorie count, desired ingredients, cooking method, chef, and so on from the recipe library. Then, with a single tap, you choose your recipe, place the individual pre-packaged containers of measured, washed, and cut ingredients (which you can order through Moley) on designated spots, and press “start” for the cooking process to begin.

Since the Moley kitchen could essentially cook any downloadable recipe on the internet, the food-robotics-AI startup expects to include a “share and sell” feature for your own recipes, where consumers and professional chefs could access and sell their ideas via the “digital style library of recipes” database.

However, there are safety and quality concerns about having a robot chef. One concern raised in the article: what if the machine chops aimlessly and the owner is left without a meal? Further, cooking involves the chef's personal touch and an engagement of all five senses, which cannot be realized by a robot.

Our Robot Overlords Are Now Delivering Pizza, And Cooking It On The Go

To solve the problem of cold pizzas, Zume Pizza, where robots and AI run the show, was started in Mountain View, California.

A customer places an order on the app. A team of mostly robots assembles the 14-inch pies, each of which is loaded par-baked — or partially baked.


There is only one human worker in the delivery truck – to drive, slice, and deliver to your doorstep. The human does not have to think about when to turn the ovens on and off or what route to take, because these are all decided by AI. A few minutes prior to arriving at the scheduled delivery destination, the AI starts the oven to finish cooking the order.

Augmented reality kitchens keep novice chefs on track

Japan is not far behind with regard to the use of robots in cooking. Scientists at Kyoto Sangyo University have developed a kitchen with ceiling-mounted cameras and projectors that overlay cooking instructions on the ingredients. This lets cooks concentrate on their task (e.g., slicing) without having to look up at a recipe book or a screen.

Suppose you want to fillet a fish. Lay it down on a chopping board and the cameras will detect its outline and orientation so the projectors can overlay a virtual knife on the fish with a line indicating where to cut. Speech bubbles even appear to sprout from the fish’s mouth, guiding you through each step.

The kitchen also comes equipped with a small robot assistant named Phyno that sits on the countertop. When its cameras detect that the chef has stopped touching the ingredients, Phyno asks whether that particular step in the recipe is complete. Users can answer “yes” to move on to the next step or “no” to have the robot repeat the instructions.

Flippy, the hamburger cooking robot, gets its first restaurant gig

Caliburger, a fast food chain based in California, is using Flippy to flip hamburgers. Flippy is an industrial robotic arm with a classic spinning spatula.

The upgrade from a clasping claw to a classic spinning spatula took a lot of programming, but it was necessary. After all, you need the easiest-to-clean surface when dealing with raw meat — you really don’t want that stuff getting caught up in a device’s various nooks and crannies.

The developers of Flippy are working on a number of new features for the robot, including advanced computer imaging and AI that will help it adapt over time to things like a changing seasonal menu.


Robots Cooked and Served My Dinner


In the Chinese city of Kunshan, a small team of robot cooks and waiters serves dumplings and fried rice at Tian Waike Restaurant.

“A robot can work for seven to eight years and more than ten hours a day,” said Song Yugang, the owner of the company that designed the robots. “Waiters and waitresses work for eight hours every day, nine at most. You need to provide accommodations and meals. But our robots consume three yuan [50 cents, or 30 pence] worth of electricity a day at most.”
AI potpourri: Passenger pickup and suicide prevention

GM just revealed a fully autonomous electric car — and it doesn't have a steering wheel

GM has announced its fourth generation of self-driving vehicles. Note that there is not a single mention of what the passenger is supposed to do in the event that the self-driving algorithm fails!

No driver. No pedals. No steering wheel. Just seats and screens and doors that can close themselves. That’s what riders will see when they get into one of General Motors’ Cruise self-driving electric vehicles, scheduled to hit the road in 2019.


The prominent social scientist Dr. Peter Hancock aptly stated the following:

Today’s new car, a partial robot itself built by robots in an automated factory, may for a time be content to sit in a parking spot and wait for its user’s call. But if people aren’t careful, its fully autonomous cousin may one day drive the joy of driving, or even an entire joy of living, out of human experience.


Would You Send Your Kids To School On A Self-Driving School Bus?

A Seattle-based design firm is working on a six-passenger vehicle that picks up and drops off every child at their front door, verifying each child's identity with facial recognition.

The vehicle’s AI changes its route based on traffic or other roadblocks, even rejiggering the order in which it drops kids off if, for instance, their parent is running late. And during the rest of the day, each Hannah vehicle can be used to deliver packages, food, or donations, earning school districts extra cash.

But questions remain. Will parents ever trust an autonomous vehicle enough to allow their children to ride in one with no human supervision? And will autonomous technology ever be advanced enough to supervise children, much less cheap enough for school districts to afford? Hannah is a kind of thought experiment: If autonomy is coming to every street, what does getting to school look like?

The researchers at the design firm are also investigating other issues, such as how AI will address bullying on buses, as well as bringing in extra money to the school by using the bus for food delivery through a service like Uber Eats.

Canada will track suicide risk through social media with AI

The Canadian government is partnering with an AI firm to predict rises in regional suicide risk. Facebook has also recently launched initiatives to prevent suicides by analyzing posts that suggest suicidal thoughts.

The AI will analyze posts from 160,000 social media accounts and will look for suicide trends.

The AI company aims to be able to predict which areas of Canada might see an increase in suicidal behavior, which according to the contract document includes “ideation (i.e., thoughts), behaviors (i.e., suicide attempts, self-harm, suicide) and communications (i.e., suicidal threats, plans).” With that knowledge, the Canadian government could make sure more mental health resources are in the right places when needed.
Public views about AI and the Future

The Gallup organization has just released a survey of 3,298 American adults about their thoughts on AI and the future. The interactive website is filled with many great visualizations.

The key point seems to be that, contrary to popular notions of a fear of AI, most Americans (77%) have a positive view of AI in the next decade. Interestingly, this is despite most Americans believing that AI will have a negative impact on their own employment and the economy (73% believe AI will eliminate jobs).

The other noteworthy point is that optimism about AI, while high, is expected to decrease (the difference between current optimism and future optimism). But this varies by sub-group: the largest difference between future and current optimism is among middle-aged folks whose livelihoods may be affected (green), while older folks seem unchanged (blue, orange):

Image source: https://www.northeastern.edu/gallup/

Changing views of self-driving cars...

I just saw a funny juxtaposition of headlines regarding self-driving cars. Of all autonomous systems, self-driving cars are probably the easiest for the lay public to understand.

The first headline, from a Reuters/Ipsos opinion poll:  Most Americans wary of self-driving cars.  

While 27 percent of respondents said they would feel comfortable riding in a self-driving car, poll data indicated that most people were far more trusting of humans than robots and artificial intelligence under a variety of scenarios.

The results are more interesting when viewed by age group. It makes intuitive sense that millennials are the most comfortable and baby boomers the least. Millennials are less interested in driving and, because of greater exposure to autonomous technology, may be more comfortable and trusting than other age groups. It should be noted that this is not necessarily a well-calibrated view, however; their view of the technology could be distorted or unrealistic.

Image source: http://fingfx.thomsonreuters.com/gfx/rngs/AUTO-SELFDRIVING-SURVEY/010060NM16V/AUTO-SELFDRIVING-SURVEY.jpg

The next headline: More Americans Willing To Ride In Self-Driving Cars. The results of a survey from the American Automobile Association (AAA) echo the Reuters survey: millennials and males are more willing to buy a self-driving car. The headline refers to a decrease (78% to 63%), year over year, in the number of people who said they were afraid to ride in a self-driving car.

The crux of these observations seems to be trust:

AAA’s survey also offered insights as to why some motorists are reluctant to purchase advanced vehicle technology. Most trust their driving skills more than the technology (73 percent) — despite the fact that research shows more than 90 percent of crashes involve human error. Men in particular are confident in their driving abilities, with 8 in 10 considering their driving skills better than average.
AI potpourri: Reading, investing, diagnosis, and retail

A.I. Has Arrived in Investing. Humans Are Still Dominating

AI is taking a bigger role in investing. Large fund management companies like Fidelity and Vanguard say they use AI for a range of purposes.

An exchange-traded fund introduced in October uses A.I. algorithms to choose long-term stock holdings.

It is too early to say whether the E.T.F., A.I. Powered Equity, will be a trendsetter or merely a curiosity. Artificial intelligence continues to become more sophisticated and complex, but so do the markets. That leaves technology and investment authorities debating the role of A.I. in managing portfolios. Some say it will only ever be a tool, valuable but subordinate to its flesh-and-blood masters, while others envision it taking control and making decisions for many funds.

AI has an edge over the natural kind because of the inherent emotional and psychological weaknesses that encumber human reasoning.

While some people see huge potential in AI as an investment advisor, others think that it cannot be relied on for heavy cognitive decision-making. The following is a quote from a portfolio manager:

“I’m a fan of automating everything possible, but having a human being push the last button is still a good thing. Hopefully, we all get better and better and smarter and smarter, but there’s something comforting about having an informed human being with sound judgment at the end of the process.”

AI models beat humans at reading comprehension, but they’ve still got a ways to go

AI models designed by Alibaba and Microsoft have surpassed humans on a reading comprehension benchmark, suggesting that AI has the potential to understand and process the meaning of words with the same fluidity as humans. But there is still a long way to go. Specifically, adding meaningless text to the passages, which a human would easily ignore, tended to confuse the AI.

“Technically it’s an accomplishment, but it’s not like we have to begin worshiping our robot overlords,” said Ernest Davis, a New York University professor of computer science and longtime AI researcher.

“When you read a passage, it doesn’t come out of the clear blue sky: It draws on a lot of what you know about the world,” Davis said. “We really need to deal much more deeply with the problem of extracting the meaning of a text in a rich sense. That problem is still not solved.”


5 ways the future of retail is already here

The retail industry is also starting to rely on AI to shape the way people shop. 

  1. Digital-price displays at grocery stores (e.g., Kroger) now allow retailers to make changes to their prices in one go. 
  2. Digital mirrors are used by retailers such as Sephora and Neiman Marcus to allow shoppers to get feedback on makeup and other items.
  3. Robotic shopping carts can now import your shopping list, guide you to each item in the store, help you check out, follow you to your car for unloading groceries, and find their way back to a docking station.
  4. Technology is being used by companies like Stitch Fix and American Eagle to recommend outfits to their customers. 
  5. Robots are being used in stores to keep shelves well stocked to help shoppers find what they are looking for.  

Microsoft and Adaptive Biotechnologies announce partnership using AI to decode immune system; diagnose, treat disease

AI and the cloud have the power to transform healthcare – improving outcomes, providing better access and lowering costs. The Microsoft Healthcare NExT initiative was launched last year to maximize the ability of artificial intelligence and cloud computing to accelerate innovation in the healthcare industry, advance science through technology, and turn the lifesaving potential of the next discoveries into reality.

Each T-cell has a corresponding surface protein called a T-cell receptor (TCR), which has a genetic code that targets a specific signal of disease, or antigen. Mapping TCRs to antigens is a massive challenge, requiring very deep AI technology and machine learning capabilities coupled with emerging research and techniques in computational biology applied to genomics and immunosequencing.

The result would provide a true breakthrough: sequencing the immune system can reveal what diseases the body currently is fighting or has ever fought.
Dr. Nancy Cooke: Human-Autonomy Teaming, Synthetic Teammates, and the Future

In our fifth post in a new series, we interview a thought leader in human-systems engineering, Dr. Nancy Cooke.

About Dr. Nancy Cooke


Nancy J. Cooke is a professor of Human Systems Engineering at Arizona State University and is Science Director of the Cognitive Engineering Research Institute in Mesa, AZ. She also directs ASU’s Center for Human, Artificial Intelligence, and Robot Teaming and the Advanced Distributed Learning Partnership Lab.

She received her PhD in Cognitive Psychology from New Mexico State University in 1987. Dr. Cooke is currently Past President of the Human Factors and Ergonomics Society, chaired the National Academies Board on Human-Systems Integration from 2012-2016, and served on the US Air Force Scientific Advisory Board from 2008-2012. She is a member of the National Academies of Sciences, Engineering, and Medicine Committees on High-Performance Bolting Technology for Offshore Oil and Natural Gas Operations and the Decadal Survey of Social and Behavioral Sciences and Applications to National Security.

In 2014 Dr. Cooke received the Human Factors and Ergonomics Society’s Arnold M. Small President’s Distinguished Service Award. She is a fellow of the Human Factors and Ergonomics Society, the American Psychological Association, the Association for Psychological Science, and The International Ergonomics Association.  Dr. Cooke was designated a National Associate of the National Research Council of the National Academies of Sciences, Engineering, and Medicine in 2016.

Dr. Cooke’s research interests include the study of individual and team cognition and its application to cyber and intelligence analysis, remotely-piloted aircraft systems, human-robot teaming, healthcare systems, and emergency response systems. Dr. Cooke specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition.


Tell us about your current ongoing projects, especially the synthetic teammate and human-autonomous vehicle teaming projects.

I am excited about both projects, as well as another one that is upcoming. I am involved in the synthetic teammate project, a large ongoing project started about 15 years ago, with the Air Force Research Lab (AFRL; Chris Myers, Jerry Ball, and others), former postdocs Jamie Gorman (Georgia Tech) and Nathan McNeese (Clemson), and current postdoc Mustafa Demir. Sandia Research Corporation (Steve Shope and Paul Jorgenson) is also involved. It is exciting to be working with so many bright, energetic, and dedicated people. In this project AFRL is developing a synthetic agent capable of serving as a full-fledged teammate that works with two human teammates to control a Remotely Piloted Aircraft System and take reconnaissance photos of ground targets. The team (including the synthetic pilot) interacts via text chat.

The USAF (United States Air Force) would like to eventually use synthetic agents as teammates for large-scale team training exercises. Ultimately, an individual should be able to have a team training experience over the internet without having to involve other humans to serve as white forces for someone else’s training. In addition, our laboratory is interested in learning about human-autonomy teaming, and in particular, the importance of coordination. In other studies we have found an interesting curvilinear relation between coordination stability and performance, wherein the best performance is associated with mid-level coordination stability (not too rigid or unpredictable). This project is funded by the Office of Naval Research.
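As a toy illustration of that inverted-U (this quadratic form is my own sketch, not the lab's fitted model), performance peaks at mid-level coordination stability and falls off toward either extreme:

```python
# Toy sketch of a curvilinear (inverted-U) stability-performance relation.
# The quadratic form and numbers are illustrative, not the lab's model.
def performance(stability: float, peak: float = 0.5) -> float:
    """stability: 0 = totally unpredictable, 1 = totally rigid."""
    return 1.0 - 4.0 * (stability - peak) ** 2

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"coordination stability={s:.2f} -> performance={performance(s):.2f}")
```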

Image source: https://mojang.com/2016/12/shouldnt-you-be-on-minecraftnet-right-now/

We are also conducting another project with Subbarao Kambhampati (“Rao”) at ASU. In this project, our team informs the robot planning algorithms of Rao’s team through the use of a human dyad working in a Minecraft setting. One person is inside a Minecraft structure representing a collapsed building, and the other has a limited view of the Minecraft environment but does have a map that is now inaccurate because of the collapse. The two humans work together to identify and mark on the map the location of victims. We are paying careful attention not only to the variables that affect the dyads’ interactions, but also to features of communication that are tied to higher levels of performance. This project is also funded by the Office of Naval Research.

Finally, I am very excited to be directing a new center at ASU called the Center for Human, Artificial Intelligence, and Robot Teaming or CHART.  I am working with Spring Berman, a swarm roboticist, to develop a testbed in which to conduct studies of driverless cars interacting on the road with human-driven cars.  Dr. Berman has a large floor mat that depicts a roadway with small robots that obey traffic signals and can avoid colliding with each other.  We are adding to that robots that are remotely controlled by humans as they look at imagery from the robot’s camera.  In this testbed we are excited to test all kinds of scenarios involving human-autonomous vehicle interactions.


You have co-authored the book "Stories of Modern Technology Failures and Cognitive Engineering Successes" with Dr. Frank Durso. What are some of the key points on human-autonomy interactions that you would like to share with our readers?

Too often automation is developed without consideration for the user.  It is often thought that automation/autonomy will not require human intervention, but that is far from the truth.  Humans are required to interact with autonomy at some level.  

A lack of good Human Systems Integration from the beginning can cause unexpected consequences and brittleness in the system.  The recent mistaken incoming missile message sent to Hawaii’s general public provides a great example of the potential effects of introducing a new interface with minimal understanding of the human task or preparation of the general public.


Can you paint us a scenario of humans and synthetic teammates working together in 50 years?

I am currently reading Four Futures by Peter Frase, which paints four different scenarios of humans and AI in the future. Two of the scenarios are dark, with robots in control, and two are more optimistic. I tend toward the optimistic scenarios, but realize that such an outcome would be the result of thoughtful application of AI, coupled with checks to keep nefarious actors at bay. Robots and AI have already taken on, and will continue to take on, jobs that are “dull, dirty, or dangerous” for humans. Humans need to retrain for other jobs (many that do not exist now), and teams of humans, AI, and robots need to be more thoughtfully composed based on the capabilities of each. I believe that this is the path toward a more positive outcome.

Interview: Dr. Frank Durso

In our fourth post in a new series, we interview a leading social science researcher and leader in aviation psychology, Dr. Frank Durso. Frank was also my academic advisor (a decade ago) and it was a pleasure to chat with him about his thoughts about the impact and future of automation in aviation.

About Dr. Frank Durso

Francis T. (Frank) Durso is Professor and Chair of the School of Psychology at the Georgia Institute of Technology where he directs the Cognitive Ergonomics Lab.  Frank received his Ph.D. from SUNY at Stony Brook and his B.S. from Carnegie-Mellon University.    While at the University of Oklahoma, he was a Regents Research recipient and founding director of their Human-Technology Interaction Center.


Frank is Past-President of the Human Factors and Ergonomics Society (HFES), the Southwestern Psychological Association, and the American Psychological Association’s (APA) Division of Engineering Psychology, and founding President of the Oklahoma Psychological Society. He is a sitting member of the National Research Council’s Board on Human-Systems Integration. He has served as advisor and panelist for the Transportation Research Board, the National Science Foundation, the APA, the Army Research Lab, and the Government Accountability Office.

Frank was associate editor of the Journal of Experimental Psychology: Applied, senior editor of Wiley’s Handbook of Applied Cognition, co-editor of the APA Handbook of Human Systems Integration, and founding editor of the HFES monograph series entitled User’s Guides to Methods in Human Factors and Ergonomics. He has served on several editorial boards, including Human Factors. He co-authored Stories of Modern Technology Failures and Cognitive Engineering Successes. He is a fellow of the HFES, APA, the Association for Psychological Science, and the Psychonomic Society. He was awarded the Franklin V. Taylor Award for outstanding achievements in applied experimental and engineering psychology from APA.

His research has been funded by the Federal Aviation Administration, the National Science Foundation, and the Centers for Disease Control and Prevention, as well as various industries. Most of Frank’s research has focused on cognition in dynamic environments, especially in transportation (primarily air traffic control) and healthcare. He is a co-developer of the Pathfinder scaling algorithm, the SPAM method of assessing situation awareness, and the Threat-Strategy Interview procedure. His current research interests focus on cognitive factors underlying situation understanding and strategy selection.


For part of your career, you have been involved in air traffic control and have seen the use of automation evolve from paper-based flight strips to NextGen automation. In your opinion, what is the biggest upcoming automation-related challenge in this domain?

As you know, people, including big thinkers like Paul Fitts in 1951, have given thought to how to divide up a task between a machine and a person. While we people haven’t changed much, our silicon helpers have. Quite a bit. They’ve progressed to the point that autonomy, and the issues that accompany it, are now both very real. (I’ll get back to autonomy in your other question.) Short of just letting the machine do it, or just doing it yourself, the puzzle of how to divvy up a task remains, although the answer to the puzzle changes.

When I first started doing research for the FAA in the early 90s, there was talk of automation soon to be available that would detect conflicts and suggest ways to resolve them, leaving the controller to choose among recommendations. A deployed version of this was URET, an aid that the controller could use if he or she wanted. In one mode, controllers were given a list-like representation of flight data, much like the paper strips provided, or a graphic representation of flight paths. Either mode depicted conflicts up to 20 minutes out.

I do worry that this new level of automation can take much of the agency away from the controller

When I toured facilities back then, I remember finding a controller who was using the aid when a level-red conflict appeared. I waited for him to make changes to resolve the conflict. And waited. He never did anything to either plane in conflict, and yet the conflict was resolved. When I asked him about it, he told me, “Things will probably change before I need to worry about it.” He gave me two insights that stayed with me. One was that in dynamic environments, things change, and the more dynamic the environment, the more likely it is that what you (or your electronic aid) expect and plan for are mere possibilities, not certainties. This influenced much of my subsequent thinking about situation awareness, what it was, and how to measure it.

Next Generation Air Transport System (NextGen): https://www.nasa.gov/topics/aeronautics/features/8q_nextgen.html

I also realized that day that I would never understand anything unless I understood the strategies that people used.  I didn’t do anything with that realization back then, thinking it would be like trying to nail jello to a wall.  I’m fascinated by strategy research today, but then I was afraid the jello and my career in aviation human factors would both be a mess lying at my feet.

Our big worries with automation that does the thinking for us were things like: will controllers use the technology? Today we’d call that technology acceptance. Will the smart automation change the job from that of controlling air traffic to managing it? Of course, when people are put into a situation where they merely observe while the automation does the work, there’s the risk that the human will not truly be engaged, and situation awareness would suffer. That’s a real concern, especially if you ever expect the human to again take over the task.

Now there are initiatives and technologies in the FAA that eliminate or at least reduce conflicts by optimizing the aircraft sequence and leave to the controller the task of getting the aircraft to fall in line with that optimization. Imagine that the computer optimizes the spacing and timing of planes landing at an airport. The planes are not, of course, naturally in this optimized pattern, so the computer presents “plane bubbles” to the controller. Those plane bubbles are optimized. All the controller has to do is get the plane into that bubble, and conflicts will be reduced and landings optimized. This notion of having the computer do the heavy cognitive lifting of conflict resolution and optimization and then presenting those “targets” to the controller can be used in a variety of circumstances. Now the “controller” is not even a manager, but in some ways the controller is being kept in the game and should therefore show pretty good situation awareness.

Now I worry that situation awareness may become very local—tied to a specific, perhaps meaningless piece of the overall picture. This time, global SA levels may be a concern; controllers may have little or no understanding of the big picture of all those planes landing, even if they have good SA of getting a particular plane into the queue.

For some reason, I no longer worry about technology acceptance as I did in 1997. Twenty years later, I do worry that this new level of automation can take much of the agency away from the controller—so much of what makes the job interesting and fun. Retention of controllers might suffer, and those who stay will be less satisfied with their work, which produces other job consequences.

As an end to this answer, I note that much has changed in the last quarter of a century, but we still seem to be following a rather static list of “machines do this and people do that.” Instead, I think the industry needs to adopt the adaptive allocation of tasks that human factors professionals have studied. The question is not really when the computer should sequence flights, but when that responsibility should be handed over to the human. Or when the computer, perhaps detecting a tired controller, should wrest responsibility for separation from him or her.


You are on the Board on Human-Systems Integration for the National Academies of Sciences, Engineering, and Medicine. What is the purpose of the Board and what is your role?

how they interact within and with complex systems ...must be addressed if we are to solve today’s societal challenges

The National Academies of Sciences, Engineering, and Medicine do their operational work through seven programs governed by the rules of the National Research Council. One of the programs, the Division of Behavioral and Social Sciences and Education, contains the Board on Human-Systems Integration, or BOHSI. Established by President Lincoln, the Academies is not a government agency. A consequence of that for the boards is that financing is through sponsors.

The original Committee on Human Factors was founded in 1980 by the Army, Navy, and Air Force. Since then, BOHSI has been sponsored by a myriad of agencies including NASA, NIOSH, the FAA, and Veterans Health. I’m proud to say APA Division 21 and the Human Factors and Ergonomics Society, two organizations I’ve led in the past, are also sponsors.

BOHSI’s mandate is to provide an independent voice on the HSI issues that interest the nation. We provide theoretical and methodological perspectives on people-organization-technology-environment systems. The board itself currently comprises 16 members, including National Academy members, academics, business leaders, and industry professionals. They were invited from a usually (very) long list of nominations. A visit to the webpage will show the caliber of the members: http://sites.nationalacademies.org/DBASSE/BOHSI/Members/index.htm

The issues BOHSI is asked to address are myriad. Decision makers, leaders, and scholars from other disciplines are becoming increasingly aware of the fact that people, and how they interact within and with complex systems, are a critical feature that must be addressed if we are to solve today’s societal challenges. We’ve looked at remotely controlled aviation systems, at self-escape from mining, and at how to establish safety cultures in academic labs, to mention a few.

BOHSI addresses these problems in a variety of ways. The most extensive efforts result in reports like those currently on the webpage: Integrating Social and Behavioral Science within the Weather Enterprise; Personnel Selection in the Pattern Evidence Domain of Forensic Science; and CMV Driver Fatigue, Long-Term Health, and Highway Safety. These reports are generated by committees appointed by BOHSI. A member or two of the board often sits on these working committees, but the majority of the committee is made up of national experts on the specific topic representing various scientific, policy, and operational perspectives. The hallmark of these reports is that they provide an independent assessment and recommendation for the sponsor and the nation.


As a social scientist studying autonomy, what do you see as the biggest unresolved issue?

As technology advances at an accelerating rate, real autonomy becomes a real and exciting possibility. The issues that accompany truly independent automated agents are exciting as well. I think there are a number of questions of interest, and there are lots of smart people looking into them. For example, there’s the critical question of trust. Why did Luke trust R2-D2? (Did R2 trust Luke?) And technology acceptance continues to be with us: why will elderly folk allow a robot to assist with this task, but not that one?

The issues that accompany truly independent automated agents are exciting as well....Why did Luke trust R2-D2?  (Did R2 trust Luke?)

But I think the biggest issue with autonomy is getting a handle on when responsibility, control, or both switch from one member of the technology-person pair to the other. How can the autonomous car hand over control to the driver? Will the driver have the SA to receive it? How does this handshaking occur if each system does not have an understanding of the state of the other? We don’t really understand the answers to these questions between two humans, let alone between a human and an automaton.

There are indeed ways we can inform the human of the automation’s state, but we can also inform the automaton of the human’s state. Advances in machine learning allow the automaton to learn how the human prefers to interact with it. Advances in augmented cognition can allow us to feed physiological information about the operator to the automaton. If the car knew the driver was stressed (cortisol levels) or tired (eye closures), it might decide not to hand over control.

I should mention here that this kind of separation of responsibilities between machine and human is quite different from the static lists I discussed in my first answer regarding the FAA technology. There, the computer had certain tasks and the controller had others; here, any particular task may belong to either agent, depending on the situation.
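To make the contrast with a static task list concrete, here is a minimal sketch of what such situation-dependent allocation might look like; every name and threshold here is hypothetical, purely for illustration:

```python
# Hypothetical sketch of adaptive task allocation (illustrative only).
# Unlike a static "machine does X, human does Y" list, responsibility is
# assigned at runtime from estimates of each agent's current state.
from dataclasses import dataclass

@dataclass
class DriverState:
    stress: float               # e.g., inferred from cortisol proxies, 0..1
    fatigue: float              # e.g., inferred from eye closures, 0..1
    situation_awareness: float  # estimated readiness to take over, 0..1

def allocate_control(driver: DriverState, automation_confidence: float) -> str:
    """Decide who should hold driving control right now (toy thresholds)."""
    driver_ready = (driver.stress < 0.7 and driver.fatigue < 0.6
                    and driver.situation_awareness > 0.5)
    if not driver_ready:
        return "automation"  # stressed/tired driver: don't hand over control
    if automation_confidence < 0.4:
        return "human"       # automation is struggling and the driver is ready
    return "automation"      # both capable: let the automation continue

# The car declines to hand over to a stressed driver:
print(allocate_control(DriverState(stress=0.8, fatigue=0.2,
                                   situation_awareness=0.9), 0.3))
```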

I think future work has to really investigate the system properties of the human and technology, and not (just) each alone.

David Bruemmer: Thoughts on the Future of AI & Robotics

In our third post in a new series, we interview a leader in robotics technology, Mr. David Bruemmer. We talk to David about the future of autonomous robotic technologies and the ethics associated with automation use. 

About David Bruemmer


Mr. Bruemmer is Founder and CEO of Adaptive Motion Group which provides Smart Mobility solutions based on accurate positioning and autonomous vehicles. Previously, Mr. Bruemmer co-founded 5D Robotics, supplying innovative solutions for a variety of automotive, industrial and military applications.

Mr. Bruemmer has led large scale robotics programs for the Army and Navy, the Department of Energy, and the Defense Advanced Research Projects Agency. He has patented robotic technologies for landmine detection, urban search and rescue, decontamination of radioactive environments, air and ground teaming, facility security and a variety of autonomous mapping solutions.

Mr. Bruemmer has authored over 60 publications and has been awarded 20 patents in robotics and positioning. He recently won the South by Southwest Pitch competition and is a recipient of the R&D 100 Award and the Stoel Reeves Innovation award. Mr. Bruemmer led robotics research at the Idaho National Lab for a diverse, multi-million dollar R&D portfolio. Between 1999 and 2000, Mr. Bruemmer served as a consultant to the Defense Advanced Research Projects Agency (DARPA), where he worked to coordinate development of autonomous robotics technologies across several offices and programs.


You have been working on developing autonomous robotics technologies for a long time now, during your tenure with Idaho National Lab and now as the CEO of Adaptive Motion Group. How has the field evolved and how excited are you about the future?

I think there is a large amount of marketing and spin especially in the autonomous driving arena

There seem to be waves of optimism about AI followed by disappointment as the somewhat inflated goals meet the realities of trying to deploy robotics and AI. Machine learning has come a long way, but the growth has been linear, and I really do not feel that deep learning is necessarily a “fundamentally new” machine learning tool.

I think there is a large amount of marketing and spin especially in the autonomous driving arena. I have been sad to see that in the past several years, some of the new cadre of self-driving companies seem to have overlooked many of the hard lessons we learned in the military and energy sectors regarding the perils of “full autonomy” and the need for what I call “context sensitive shared control”.

Reliability continues to be the hard nut to crack, and I believe that for a significant shift in the level of reliability of overall automation we need to focus more energy on positioning. Positioning is sometimes considered a “solved problem,” as various programs and projects have offered lidar mapping, RTK GPS, and camera-based localization. These work in various constrained circumstances but often fail outside of the bounds where they were intended to operate.

I think that even after the past twenty years of progress we need a more flexible, resilient means of ensuring accurate positioning. I would also like to point out that machine learning and AI are not a cure-all. If they were, we wouldn’t have the increasing death toll on our roads or the worsening congestion. When I look at AI I see a great deal of potential, but most of it is still unrealized. This is either cause for enthusiasm or pessimism, depending on your perspective.


There are quite a few unfortunate events associated with automation use. For example, there is the story of a family who got lost in Death Valley due to overreliance on their GPS. Do you think of human-machine interaction issues during design?

Yes I do. I think that many are overlooking the real issue: that our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but also because it hurts system performance. Some applications and environments allow for a very high degree of autonomy. However, there are many other tasks and environments where we need to give up on this notion of fully removing the human from the driver’s seat or the processing loop, and instead focus on the rich opportunity for context-sensitive shared control, where the human and machine work as teammates, balancing the task allocation as needed.

our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but that it also hurts system performance

Part of the problem is that those who make the products want you to believe their system is perfect. It’s an ego thing on the part of the developers and a marketing problem to boot. For the teamwork between human and robot to be effective, both human and machine need an accurate understanding of each other’s limitations.

Quite frankly, the goal of many AI companies is to make you overlook these limits. So supposedly GPS works all the time, and we provide the user no sense of “confidence,” or what in psychology we call a “feeling of knowing.” This breeds an unfortunate slew of problems, from getting horribly lost in urban jungles to getting lost in real ones.

If we were more honest about the limitations, and we put more energy into communicating the need for help and more data, then things could work a whole lot better. But we almost never design the system to acknowledge its own limitations.


There is some ethical debate about robots or AI making some human occupations obsolete (e.g., long-haul trucking, medical diagnosis). How does ethics factor into your decisions when developing new technologies?

the role of government is to protect the right of every human to be prioritized over machines and over profits

The great thing about my emphasis on shared control is that I never need to base my business model or my technology on the idea of removing the human or eliminating human labor.

Having said that, I do of course believe that better AI and robotics mean increased safety and efficiency, which in turn can lead to reduced human labor. I think this is a good thing as long as it is coupled with a society that cares for the individual.

Corporations should not be permitted to act with impunity and I believe the role of government is to protect the right of every human to be prioritized over machines and over profits. This mindset has less to do with robotics and more to do with politics so I will leave off there. I do always try to emphasize that robotics should not ever be about the robots, but rather about the people they work with.

The Year of the Algorithm. AI Potpourri part 2:
“We have to grade indecent images for different sentencing, and that has to be done by human beings right now, but machine learning takes that away from humans,” he said.

“You can imagine that doing that for year-on-year is very disturbing.”

But as the next story shows, these AI tools are not advanced enough to replace human content moderators.

[WSJ] The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook

Humans, still, are the first line of defense. Facebook, YouTube and other companies are racing to develop algorithms and artificial-intelligence tools, but much of that technology is years away from replacing people, says Eric Gilbert, a computer scientist at the University of Michigan. 
Earlier this month, after a public outcry over disturbing and potentially exploitative YouTube content involving children, CEO Susan Wojcicki said the company would increase its number of human moderators to more than 10,000 in 2018, in an attempt to rein in unsavory content on the web’s biggest video platform.

But guidelines and screenshots obtained by BuzzFeed News, as well as interviews with 10 current and former “raters” — contract workers who train YouTube’s search algorithms — offer insight into the flaws in YouTube’s system.
But algorithms, unlike humans, are susceptible to a specific type of problem called an “adversarial example.” These are specially designed optical illusions that fool computers into doing things like mistake a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.
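
The best-known recipe for building such an illusion is the “fast gradient sign method”: compute how the model’s error changes with each input pixel, then nudge every pixel a tiny, nearly invisible step in the direction that increases the error. Below is a minimal NumPy sketch of the idea; the “classifier” is a toy logistic model with random weights standing in for a real image network, so everything here is illustrative rather than a reproduction of any published attack.

```python
# A minimal sketch of an adversarial example, using plain NumPy.
# The classifier is a toy logistic regression, not a real image model;
# the weights and the "image" are random stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=784)   # a stand-in "image", pixel values in [0, 1]
w = rng.normal(size=784)          # toy classifier weights (e.g., a 28x28 image)
b = 2.0 - w @ x                   # calibrate so the model starts out confident

def predict(x):
    """Probability the model assigns to class 1 (say, 'panda')."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the loss w.r.t. the input pixels.
# For logistic loss with true label y = 1, dL/dx = (p - 1) * w.
p = predict(x)
grad = (p - 1.0) * w

# Fast Gradient Sign Method: push every pixel slightly in the direction
# that increases the loss. Each pixel changes by at most epsilon.
epsilon = 0.05
x_adv = np.clip(x + epsilon * np.sign(grad), 0, 1)

print(f"original prediction:    {predict(x):.3f}")   # confidently 'panda'
print(f"adversarial prediction: {predict(x_adv):.3f}")  # no longer 'panda'
```
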
From the ridiculous to the chilling, algorithmic bias — social prejudices embedded in the AIs that play an increasingly large role in society — has been exposed for years. But it seems in 2017 we reached a tipping point in public awareness.

The New York City Council recently passed what may be the US’ first AI transparency bill, requiring government bodies to make public the algorithms behind their decision making. Researchers have launched new institutes to study AI prejudice (along with the ACLU), while Cathy O’Neil, author of Weapons of Math Destruction, launched an algorithmic auditing consultancy called ORCAA.
The Year of the Algorithm. AI potpourri, part I: Astronomer, Factory Worker, Musician, and more

2017 seems to have been a watershed year for the use and application of AI and algorithms. This is part 1 of a two-part post highlighting the use (and possible regulation) of AI.

[NYTimes] An 8th Planet Is Found Orbiting a Distant Star, With A.I.’s Help

NASA announced the discovery of a new exoplanet orbiting Kepler 90, a distant star some 2,500 light-years away.

The new exoplanet was detected with the help of artificial intelligence researchers at Google using a machine learning technique known as a neural network.

The technology, which is loosely inspired by the human brain, is designed to recognize patterns and classify images.
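
To make the pattern-recognition idea concrete, here is a minimal sketch of the simplest version of what such a system looks for: periodic dips in a star’s brightness as a planet transits in front of it. The Google researchers’ neural network learned its detectors from data; the hand-built filter and synthetic light curve below are illustrative assumptions only.

```python
# A minimal sketch of transit detection in a light curve.
# A real system learns its filters; this one is hand-built for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic light curve: flat brightness plus noise, with a periodic dip
# (the signature of a planet passing in front of its star).
flux = 1.0 + rng.normal(0, 0.001, size=2000)
for start in range(300, 2000, 500):          # a transit every 500 samples
    flux[start:start + 20] -= 0.01           # a 1% brightness dip

# A hand-built "dip detector": negative in the middle, zero-mean overall,
# so it responds to U-shaped dips but not to the flat baseline.
kernel = np.concatenate([np.ones(10), -2 * np.ones(10), np.ones(10)]) / 30

response = np.convolve(flux - flux.mean(), kernel, mode="same")
candidates = np.where(response > 3 * response.std())[0]
print("possible transit locations:", candidates[:10], "...")
```
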
In many factories, workers look over parts coming off an assembly line for defects.

Andrew Ng, co-founder of Alphabet Inc’s Google Brain project, launches a new venture with iPhone assembler Foxconn to bring AI and so-called machine learning onto the factory floor.

He said he understands that his firm’s technology is likely to displace factory workers, but that the firm is already working on how to train workers for higher-skilled, higher-paying factory work involving computers.
Bing is working on a system to help users get to the information they are looking for even if they aren’t exactly sure how to find it. For example, let’s say you are trying to turn on Bluetooth on a new device. The new system could prompt users to provide more information, such as the type of gadget or operating system they are using.
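
The mechanic is easy to picture: when a query lacks a detail the answer depends on, ask for that detail instead of guessing. Here is a toy sketch of that clarify-then-answer loop; the three-entry help “knowledge base” and the keyword matching are invented stand-ins and reflect nothing about Bing’s actual implementation.

```python
# A minimal sketch of asking a clarifying question when a query is ambiguous.
# The tiny knowledge base below is invented purely for illustration.
HELP_KB = {
    ("bluetooth", "windows"): "Settings > Devices > Bluetooth, then toggle on.",
    ("bluetooth", "android"): "Swipe down and tap the Bluetooth tile.",
    ("bluetooth", "iphone"):  "Settings > Bluetooth, then toggle on.",
}

def answer(query, operating_system=None):
    if "bluetooth" not in query.lower():
        return "Sorry, this demo only knows about Bluetooth."
    if operating_system is None:
        # The query is missing a detail the answer depends on:
        # ask for it rather than guessing.
        return "Which device is this for -- Windows, Android, or iPhone?"
    return HELP_KB.get(("bluetooth", operating_system.lower()),
                       "This demo doesn't know that operating system.")

print(answer("how do I turn on bluetooth"))
print(answer("how do I turn on bluetooth", operating_system="Android"))
```
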

Another new, AI-driven advance in Bing is aimed at getting people multiple viewpoints on a search query that might be more subjective.

Microsoft also announced plans to release a tool that highlights action items in email and gives you options for responding quickly on the go.
Researchers at MIT want to take subjective judgment out of pain assessment by using a facial recognition algorithm that can detect your pain levels by studying your face.

Trained on thousands of videos of people wincing in pain, the algorithm creates a baseline for each patient based on common pain indicators – generally, movements around the nose and mouth are telltale signs.

So far, the algorithm is 85% successful at weeding out the fakers, meaning that people trying to fake pain to get prescription painkillers will soon be out of business.
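
The per-patient baseline is the interesting design choice: instead of one universal “pain face,” the system learns what your neutral face looks like and scores deviations from it. Here is a minimal sketch under that assumption; the two “facial features” are invented stand-ins for the facial-movement measurements a real system would extract from video.

```python
# A minimal sketch of per-patient baseline scoring for pain detection.
# Features here (nose- and mouth-region movement) are made-up numbers.
import numpy as np

rng = np.random.default_rng(2)

def calibrate(neutral_frames):
    """Learn what this patient's face does when they are NOT in pain."""
    return neutral_frames.mean(axis=0), neutral_frames.std(axis=0) + 1e-9

def pain_score(frame, baseline):
    """Deviation of the current expression from the patient's own baseline,
    in standard deviations, averaged over facial features."""
    mean, std = baseline
    return np.abs((frame - mean) / std).mean()

# Two facial features per frame: [nose-region movement, mouth-region movement]
neutral = rng.normal([0.2, 0.3], 0.05, size=(100, 2))   # calibration video
baseline = calibrate(neutral)

relaxed_frame = np.array([0.22, 0.31])
wincing_frame = np.array([0.55, 0.70])

print(f"relaxed: {pain_score(relaxed_frame, baseline):.1f} sigma")
print(f"wincing: {pain_score(wincing_frame, baseline):.1f} sigma")
```
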
In the city (London) that spawned David Bowie, Pink Floyd, and the Spice Girls, two college professors are working on an artificial intelligence capable of making its own music. And it’s already played its first show.

The race is on to see whether A.I. can add something meaningful to this cultural activity.

The pair invited a number of musicians to come together for a show called “Partnerships,” a reference to the relationship between human and machine. The show featured a mix of compositions, all performed by humans, with varying levels of input from the A.I. Some compositions took the computer’s work as a starting point, some used the project as inspiration, while others directly played the generated work as it stood.
Artificial intelligence could one day scan the music videos we watch to come up with predictive music discovery options based on the emotions of the performer.

Consumers of the future will rely on computer software to serve them music discovery options. YouTube Red and the YouTube Music app do a good job of serving up new and different options for music discovery, but it’s dragged down by its inability to actually identify what’s playing on the screen. Sure, Google knows which videos you gave a thumbs up to, watched 50 times on repeat, shared on social media, and commented on, but it doesn’t have the visual cues to tell it why.
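
A minimal sketch of how those two signals might be blended: take the mood labels a video-analysis model would supply (hand-assigned here, purely as an assumption), find the on-screen mood a user keeps returning to, and recommend unwatched videos with that mood.

```python
# A minimal sketch of mood-aware music discovery. The mood labels below
# stand in for what a video-analysis model would produce; they and the
# watch counts are invented for illustration.
video_moods = {
    "video_a": "melancholy", "video_b": "euphoric",
    "video_c": "melancholy", "video_d": "angry",
    "video_e": "melancholy", "video_f": "euphoric",
}
watch_counts = {"video_a": 50, "video_b": 2, "video_c": 41, "video_d": 1}

def preferred_mood(counts, moods):
    """Which on-screen mood does this user keep coming back to?"""
    totals = {}
    for video, n in counts.items():
        totals[moods[video]] = totals.get(moods[video], 0) + n
    return max(totals, key=totals.get)

def recommend(moods, counts):
    """Suggest unwatched videos whose mood matches the user's favorite."""
    mood = preferred_mood(counts, moods)
    return [v for v, m in moods.items() if m == mood and counts.get(v, 0) == 0]

print(recommend(video_moods, watch_counts))   # -> ['video_e']
```
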
Macy's, CVS, Starbucks, and Sephora turn to AI

If you are scrambling to find last-minute gifts, AI/machine learning is here to help!  All the major retailers are now turning to AI to learn what you want.  Big data about retail purchases is being fed into machine learning algorithms to learn things about you.  Here are some examples.  By the way, have you wondered, "what exactly is machine learning?"  Then see the end of this post for an easily digestible video.

[Forbes] Macy's Teams With IBM Watson For AI-Powered Mobile Shopping Assistant

Macy’s is set to launch an in-store shopping assistant powered by artificial intelligence thanks to a new tie-up with IBM Watson via developer partner and intelligent engagement platform, Satisfi.

Macy’s On Call, as it’s called, is a cognitive mobile web tool that will help shoppers get information as they navigate 10 of the retail company’s stores around the US during this pilot stage.

Customers are able to input questions in natural language regarding everything from where specific products, departments, and brands are located to what services and facilities can be found in a particular store. In return, they receive customised, relevant responses. The initiative is based on the idea that consumers are increasingly more likely to turn to their smartphones than to a store associate for help when out at physical retail.
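
Stripped of the Watson machinery, the core loop is matching a free-text question to a known intent and answering from a per-store directory. A toy sketch of that loop; the directory and the naive keyword matching are invented stand-ins for Watson's natural-language understanding.

```python
# A minimal sketch of an in-store Q&A assistant. The store directory and
# keyword matching are invented for illustration; the real product is
# built on IBM Watson's natural-language processing.
STORE_DIRECTORY = {
    "shoes":     "Level 2, near the east escalator.",
    "handbags":  "Level 1, by the main entrance.",
    "restrooms": "Level 3, behind customer service.",
}

def on_call(question):
    q = question.lower()
    for topic, location in STORE_DIRECTORY.items():
        if topic in q:                       # naive intent matching
            return f"{topic.title()}: {location}"
    return "Let me connect you with a store associate."  # graceful fallback

print(on_call("Where can I find shoes?"))
print(on_call("Is there a restaurant here?"))
```
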
If you always have a caramel macchiato on Mondays, but Tuesdays call for the straight stuff, a double espresso, then Starbucks Corporation (SBUX) is ready to know every nuance of your coffee habit. There will be no coffee secrets between you, if you’re a Rewards member, and Starbucks.

The chain’s regulars will find their every java wish ready to be fulfilled, and the food and drink items you haven’t yet thought about presented to you as what you’re most likely to want next.

So targeted is the technology behind this program that, if the weather is sunny, you’ll get a different suggestion than if the day is rainy.
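
A minimal sketch of that kind of context-aware suggestion: score each drink by how often this customer ordered it in similar contexts, weighting day-of-week and weather. The menu, purchase history, and weights below are invented for illustration; Starbucks' actual system is proprietary.

```python
# A minimal sketch of context-aware drink recommendation.
# The purchase history and scoring weights are illustrative assumptions.
from collections import Counter

purchase_history = [
    ("Mon", "sunny", "caramel macchiato"),
    ("Tue", "rainy", "double espresso"),
    ("Mon", "rainy", "caramel macchiato"),
    ("Tue", "sunny", "double espresso"),
    ("Wed", "sunny", "cold brew"),
]

def recommend(day, weather):
    """Rank drinks by how often they were bought in similar contexts."""
    scores = Counter()
    for d, w, drink in purchase_history:
        scores[drink] += (d == day) * 2 + (w == weather)  # day matters more
    return scores.most_common(1)[0][0]

print(recommend("Mon", "sunny"))   # -> caramel macchiato
print(recommend("Tue", "rainy"))   # -> double espresso
```
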
Patients tend to be at their local CVS much more frequently than at the doctor. People are also increasingly using fitness trackers like FitBits, smartwatches, and even Bluetooth-enabled scales that are all collecting data patients can choose to share with a provider. All that data isn’t worth much though unless it is carefully interpreted — something Watson can do much more efficiently than a team of people.

A drop in activity levels, a sudden change in weight, or prescriptions that aren’t being filled are the kinds of things that might be flagged by the system. Certain changes could even indicate a developing sickness before someone feels ill — and certainly before someone decides to visit the doctor.
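
A minimal sketch of the “drop in activity levels” flag: compare a patient's recent step counts against their own historical norm and raise an alert on a large deviation. The threshold and data are illustrative assumptions, not anything CVS or IBM has published.

```python
# A minimal sketch of flagging an abnormal drop in a patient's activity.
# Window size, threshold, and step counts are invented for illustration.
import statistics

def flag_activity_drop(daily_steps, window=7, z_threshold=2.0):
    """Return True if the last `window` days are abnormally low
    relative to this patient's own history."""
    history, recent = daily_steps[:-window], daily_steps[-window:]
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    recent_mean = statistics.mean(recent)
    return (mu - recent_mean) / sigma > z_threshold

steps = [8000, 8500, 7900, 8200, 8100, 8400, 8300] * 8   # stable weeks
steps += [3000, 2800, 3100, 2900, 3000, 2700, 3200]      # a sudden drop
print(flag_activity_drop(steps))   # True -> worth a closer look
```
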

[AdWeek] Sephora Mastered In-Store Sales By Investing in Data and Cutting-Edge Technology

I love Sephora.  As the article aptly states, "Sephora isn’t your mother’s makeup company; it’s your modern tech company." I have personally tried Color IQ, their in-store program that scans faces to find the right shade of foundation and other products for different skin tones. Sephora has an amazing Beauty Insider program that provides it with a lot of rich data about its consumers, and now the company is leveraging AI to allow customers to virtually try on make-up and spice up its online presence.

Sephora’s innovation lab in San Francisco is toying with an artificial intelligence feature dubbed Virtual Artist within its mobile app that uses facial recognition to virtually try on makeup products.

[CGP Grey] How do machines learn?

The science behind machine/deep learning neural networks is quite interesting.  For example, the discussion in the video about us not knowing what exactly is being learned (the hidden layer) is interesting to me.  But you don't have time for that!  Here is an easily understood video:
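
And if you would rather poke at the idea in code than watch the video, here is a minimal NumPy network with one hidden layer, trained on XOR. It learns the task, but try reading off what the hidden layer "knows" from its weight matrix; even at this tiny scale the learned representation is opaque, which is exactly the point.

```python
# A minimal neural network with one hidden layer, trained on XOR.
# Illustrative only: four examples, plain gradient descent, no libraries.
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR truth table

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)          # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)          # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):                                 # gradient descent
    h = sigmoid(X @ W1 + b1)                           # hidden activations
    out = sigmoid(h @ W2 + b2)                         # network output
    d_out = (out - y) * out * (1 - out)                # backpropagate error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]
print(W1.round(2))            # the hidden layer's "knowledge": hard to read
```
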

What's coming up in 2018, and happy holidays!

Just a short note to let our dear readers know that posting volume will be a bit lighter as we travel for the holidays.  But here is what's coming up!

  • More interviews of notable experts (including an expert in self-driving vehicles, and an expert in human-autonomy teaming)
  • More Throwback Thursdays covering classic automation and autonomy literature
  • NEW: Movie Club, where Arathi and I "review" a particular movie's treatment of automation/autonomy/AI

Thanks for reading!  Tell your friends!!

Robot potpourri: Concierge, security guard, and VIP greeter
Connie will work side-by-side with Hilton’s Team Members to assist with visitor requests, personalize the guest experience and empower travelers with more information to help them plan their trips.

The more guests interact with Connie, the more it learns, adapts and improves its recommendations. The hotel will also have access to a log of the questions asked and Connie’s answers, which can enable improvements to guests’ experiences before, during and after their stays.

Connie is powered by Watson, a cognitive computing technology platform that represents a new era in computing where systems understand the world in the way that humans do - through senses, learning and experience.
After backlash, animal shelter fires security robot, “effective immediately”

The San Francisco-based Society for the Prevention of Cruelty to Animals (SPCA) has been asked to halt the use of its security robot, which it had started using after experiencing a lot of car break-ins, theft, and vandalism. The SPCA reported a decline in those crimes after adopting the robot.  However, some tagged it the "anti-homeless" robot, saying its aim was to dislodge homeless campers, and found its appearance creepy.

Mitra: The ‘Made in India’ robot that stole the show at GES Hyderabad

Last year's Global Entrepreneurship Summit was inaugurated by Prime Minister Narendra Modi and Ivanka Trump pressing a button on Mitra, a robot developed by a startup based in Bangalore, India.

Variations of the robot are envisioned for customer assistance, where smart conversations are projected to increase sales, as well as for roles like party photographer, DJ, and live tweeter.

Mitra features facial recognition technology, allowing the robot to quickly identify a person and deliver customised services.

The humanoid also understands multiple languages. At the moment, Mitra supports Kannada and English but is soon going to add support for Hindi as well.
Can Robots Address Unethical Issues in Fashion?

The fashion industry is rife with ethical issues, from the high end (haute couture, models' impossible body standards) to the low end (fast fashion, manufacturing).  Can robots solve these issues?

[NY Times] Fashion Finds a More Perfect Model: The Robot

This article mainly discusses how fashion is embracing the look of robots.  But could robots soon replace fashion models?

Fashion has been especially quick to seize on the notion that robots are slicker, more perfect versions of ourselves. In the last few months alone, androids have filtered into the glossies and stalked the runways of designers as audacious as Thom Browne and Rick Owens, and of inventive newcomers like David Koma, who riffed on fembot imagery in his fall 2015 collection for Mugler, sending out models in frocks that were patterned with soldering dots and faux computer circuitry.

In a Steven Klein photo shoot in the current Vogue, drones hover overhead, seeming to spy on a party of human models cavorting in a field. For the March issue of W magazine, he portrayed the designer Jason Wu wrapped in the arms of a tin man.

[Reuters] Meet Lulu Hashimoto, the 'living doll' fashion model

Not far behind is Japan, where a doll that moves like a human is co-existing with humans, is active in the fashion scene, and is being idolized.

Meet Lulu Hashimoto, a “living doll” and the latest trend in Tokyo’s fashion modeling scene.

Lulu’s ability to blur the line between reality and fiction has mesmerized fans on social media, where the Lulu Twitter and Instagram accounts have drawn tens of thousands of followers.

While popular among fans of Japanese subculture, Lulu is now turning heads at the annual Miss iD beauty pageant where she is among the 134 semi-finalists chosen from around 4,000 entrants.
While automation does take away human jobs, the current frenzy over cheap clothing has created a whole host of unethical labor issues—like the ones that recently caused a factory fire in India killing 13 people—and robots could potentially avert that.

Robots in apparel manufacturing may be good, or they may be bad. They may give us cheap clothes and U.S. jobs (at the managerial and administrative levels), or they may detrimentally impact the economies of developing nations.