David Bruemmer: Thoughts on the Future of AI & Robotics
Mr. Bruemmer is Founder and CEO of Adaptive Motion Group which provides Smart Mobility solutions based on accurate positioning and autonomous vehicles. Previously, Mr. Bruemmer co-founded 5D Robotics, supplying innovative solutions for a variety of automotive, industrial and military applications.
Mr. Bruemmer has led large scale robotics programs for the Army and Navy, the Department of Energy, and the Defense Advanced Research Projects Agency. He has patented robotic technologies for landmine detection, urban search and rescue, decontamination of radioactive environments, air and ground teaming, facility security and a variety of autonomous mapping solutions.
Mr. Bruemmer has authored over 60 publications and has been awarded 20 patents in robotics and positioning. He recently won the South by Southwest Pitch competition and is a recipient of the R&D 100 Award and the Stoel Reeves Innovation award. Mr. Bruemmer led robotics research at the Idaho National Lab for a diverse, multi-million dollar R&D portfolio. Between 1999 and 2000, Mr. Bruemmer served as a consultant to the Defense Advanced Research Projects Agency (DARPA), where he worked to coordinate development of autonomous robotics technologies across several offices and programs.
You have been working on developing autonomous robotics technologies for a long time now, during your tenure with Idaho National Lab and now as the CEO of Adaptive Motion Group. How has the field evolved and how excited are you about the future?
There seem to be waves of optimism about AI followed by disappointment as the somewhat inflated goals meet the realities of trying to deploy robotics and AI. Machine learning has come a long way, but the growth has been linear, and I really do not feel that deep learning is necessarily a “fundamentally new” machine learning tool.
I think there is a large amount of marketing and spin, especially in the autonomous driving arena. I have been sad to see that, in the past several years, some of the new cadre of self-driving companies seem to have overlooked many of the hard lessons we learned in the military and energy sectors regarding the perils of “full autonomy” and the need for what I call “context sensitive shared control”.
Reliability continues to be the hard nut to crack, and I believe that for a significant shift in the reliability of overall automation we need to focus more energy on positioning. Positioning is sometimes considered a “solved problem,” as various programs and projects have offered lidar mapping, RTK GPS and camera-based localization. These work in various constrained circumstances but often fail outside the bounds for which they were intended.
I think that even after the past twenty years of progress we need a more flexible, resilient means of ensuring accurate positioning. I would also like to point out that machine learning and AI are not a cure-all. If they were, we wouldn’t have the increasing death toll on our roads or the worsening congestion. When I look at AI I see a great deal of potential, but most of it is still unrealized. This is either cause for enthusiasm or pessimism, depending on your perspective.
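The resilience Bruemmer describes can be sketched in code: rather than trusting any single positioning method everywhere, a system can weigh each source's self-reported confidence and explicitly report "no fix" when every source is outside its operating envelope. This is an illustrative sketch only; the class names, sources, and thresholds are assumptions, not any real positioning API.

```python
# Hypothetical sketch: pick among positioning sources by self-reported
# confidence, and fall back to "no fix" rather than trusting a single
# method outside its operating envelope. Names/thresholds are illustrative.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Fix:
    x: float           # position estimate (metres, 1-D for brevity)
    confidence: float  # 0.0 (no trust) .. 1.0 (full trust)
    source: str        # e.g. "rtk_gps", "lidar_map", "camera"

def best_fix(fixes: List[Fix], min_confidence: float = 0.5) -> Optional[Fix]:
    """Return the most confident fix above threshold, else None.

    Returning None is the key design point: the system admits it does
    not know, instead of silently reporting a bad position.
    """
    usable = [f for f in fixes if f.confidence >= min_confidence]
    if not usable:
        return None  # explicitly signal "I need help / more data"
    return max(usable, key=lambda f: f.confidence)

# Clear sky: RTK GPS is trustworthy, camera is degraded by lighting.
fixes = [Fix(10.2, 0.9, "rtk_gps"), Fix(10.5, 0.4, "camera")]
print(best_fix(fixes).source)  # rtk_gps

# Urban canyon: every source degrades below the threshold, so the
# system reports "no fix" rather than a confident lie.
degraded = [Fix(9.0, 0.2, "rtk_gps"), Fix(11.0, 0.3, "camera")]
print(best_fix(degraded))  # None
```

The point of the sketch is the `None` branch: a positioning stack that can say "I don't know" is what makes the shared-control handoff discussed below possible.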
There are quite a few unfortunate events associated with automation use. For example, there is the story of a family who got lost in Death Valley due to overreliance on their GPS. Do you think of human-machine interaction issues during design?
Yes, I do. I think that many are overlooking the real issue: that our increasingly naïve dependence on AI is harmful not only from a cultural and societal perspective, but also because it hurts system performance. Some applications and environments allow for a very high degree of autonomy. However, there are many other tasks and environments where we need to give up on this notion of fully removing the human from the driver's seat or the processing loop, and instead focus on the rich opportunity for context sensitive shared control, where the human and machine work as teammates, balancing the task allocation as needed.
Part of the problem is that those who make the products want you to believe their system is perfect. It’s an ego thing on the part of the developers and a marketing problem to boot. For the teamwork between human and robot to be effective, both human and machine need an accurate understanding of each other's limitations.
Quite frankly, the goal of many AI companies is to make you overlook these limits. So supposedly GPS works all the time, and we provide the user no sense of “confidence,” or what in psychology is called a “feeling of knowing.” This breeds an unfortunate slew of problems, from getting horribly lost in urban jungles to getting lost in real ones.
If we were more honest about the limitations, and if we put more energy into communicating the need for help and for more data, then things could work a whole lot better. But we almost never design the system to acknowledge its own limitations.
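One way to picture a system that acknowledges its own limitations is a control blend in which the machine's authority scales with its confidence, and a low-confidence machine both yields to the human and explicitly asks for help. This is a minimal sketch of the idea; the function name, the linear blend, and the thresholds are illustrative assumptions, not Bruemmer's actual implementation.

```python
# Illustrative sketch of context-sensitive shared control: weight the
# machine's command by its own confidence, yielding authority to the
# human and raising a "needs help" flag when confidence is low.
# All names and thresholds here are assumptions for illustration.

def blend_command(human_cmd: float, machine_cmd: float,
                  machine_confidence: float) -> tuple:
    """Return (blended command, needs_help flag).

    A confident machine dominates the blend; an uncertain machine
    defers to the human and admits it needs help.
    """
    w = max(0.0, min(1.0, machine_confidence))  # clamp to [0, 1]
    blended = w * machine_cmd + (1.0 - w) * human_cmd
    needs_help = w < 0.3  # arbitrary threshold for requesting help
    return blended, needs_help

# Confident machine: the blended command follows the machine.
cmd, help_needed = blend_command(human_cmd=0.0, machine_cmd=1.0,
                                 machine_confidence=0.9)

# Uncertain machine: the command follows the human, and help is requested.
cmd2, help2 = blend_command(0.0, 1.0, machine_confidence=0.1)
```

The design choice worth noting is that authority is never all-or-nothing: task allocation slides continuously between teammates instead of flipping a binary "autonomy on/off" switch.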
There is some ethical debate about robots or AI making some human occupations obsolete (e.g., long-haul trucking, medical diagnosis). How does ethics factor into your decisions when developing new technologies?
The great thing about my emphasis on shared control is that I never need to base my business model or my technology on the idea of removing the human or eliminating human labor.
Having said that, I do of course believe that better AI and robotics mean increased safety and efficiency, which in turn can lead to reduced human labor. I think this is a good thing as long as it is coupled with a society that cares for the individual.
Corporations should not be permitted to act with impunity and I believe the role of government is to protect the right of every human to be prioritized over machines and over profits. This mindset has less to do with robotics and more to do with politics so I will leave off there. I do always try to emphasize that robotics should not ever be about the robots, but rather about the people they work with.