Potpourri: Humorously-runaway AI

Runaway AI is a fear that some researchers hold. While it may be technologically too soon for that fear, we are getting close. Here are some recent examples of human-automation or human-AI partnerships running amok, with humorous results.

[Sunday Times] Jeremy Clarkson Says He Nearly Crashed While Testing An Autonomous Car (paywalled article); [CarScoops summary]

“I drove a car the other day which has a claim of autonomous capability and twice in the space of 50 miles on the M4 it made a mistake, a huge mistake, which could have resulted in death,” he said. “We have to be very careful legally, so I’m not going to say which one.”
In June, U.S. Immigration and Customs Enforcement (ICE) released a letter saying that the agency was searching for someone to design a machine-learning algorithm to automate information gathering about immigrants and determine whether it could be used to prosecute them or deny them entry to the country. The ultimate goal? To enforce President Trump’s executive orders, which have targeted Muslim-majority countries, and to determine whether a person will “contribute to the national interests”—whatever that means.
What I’ve heard is that this is a machine learning problem — that, more or less, for some reason the machine learning algorithm for autocorrect was learning something it never should have learned.
As far as debuts go, there have been more successful ones. During its first hour in service, an automated shuttle in Las Vegas got into an accident—perhaps fittingly, the result of a flesh-and-blood human truck driver slowly driving into the unsuspecting robocar, according to an AAA PR representative on Twitter. Nobody was hurt, and the truck driver was cited.