Thoughts on the first fatal self-driving car accident
You have no doubt heard about the unfortunate fatal accident involving a self-driving car killing a pedestrian (NYT).
This horrible event might be the "stock market correction" of the self-driving car world that was sorely needed to re-calibrate the public's unrealistic expectations about the capability of these systems.
In the latest news, the Tempe police have released video footage that shows the front and in-vehicle camera view just before impact.
My first impression of the video was that this seemed like something the car should have detected and avoided. In visual conditions as challenging as those shown in the video, a human driver would have had great difficulty seeing the pedestrian in the shadowed area. But humans have inferior vision, reaction time, and speed compared to computers (cf. Fitts' list, 1951).
One interesting narrative thread that has come out of the coverage, evident in the Twitter comments on the video, is the idea that the "Fatal Uber crash [was] likely 'unavoidable' for any kind of driver." People seem understanding of the difficulty of the situation, so their trust in these autonomous systems is likely to be only somewhat negatively affected. But should it be more affected? Autonomous vehicles, with their megaflops of computing power and advanced sensors, were never expected to be "any kind of driver"--they were supposed to be much better.
But the car, outfitted with radar-based sensors, should have "seen" the pedestrian. I'm certainly not blaming the engineers: determining the threshold that separates signal (a pedestrian) from noise is probably an active area of development, and one they were still testing.
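As a toy illustration of that signal-versus-noise trade-off (the scores and labels below are entirely made up and have nothing to do with Uber's actual perception stack), a single confidence threshold pits missed pedestrians against false alarms: lower it and the car brakes for phantoms; raise it and it can drive through a real person.

```python
# Illustrative sketch only: how a detection confidence threshold trades
# missed obstacles (false negatives) against false alarms (false positives).

def evaluate_threshold(scores, labels, threshold):
    """Count misses (real obstacle scored below threshold) and
    false alarms (noise scored at or above threshold)."""
    misses = sum(1 for s, real in zip(scores, labels) if real and s < threshold)
    false_alarms = sum(1 for s, real in zip(scores, labels) if not real and s >= threshold)
    return misses, false_alarms

# Hypothetical classifier confidence scores; label 1 = real obstacle, 0 = noise.
scores = [0.95, 0.80, 0.40, 0.30, 0.70, 0.20, 0.55, 0.10]
labels = [1,    1,    1,    0,    0,    0,    1,    0]

for t in (0.25, 0.50, 0.75):
    misses, alarms = evaluate_threshold(scores, labels, t)
    print(f"threshold={t:.2f}: missed obstacles={misses}, false alarms={alarms}")
```

On this made-up data, the strict 0.75 threshold produces zero false alarms but misses two real obstacles, while the lax 0.25 threshold misses nothing but fires two false alarms; there is no setting that is perfect on both counts.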
Continuing story and thoughts...