There have been a lot of articles talking about how the driver could have avoided the accident. I'm instead going to take a look at how state-of-the-art computer vision methods could have avoided it.
With autonomous car systems, we are going to need backups of backups to prevent these kinds of accidents. Uber has not yet explained why the LiDAR or other depth sensors failed to prevent this one. Velodyne has released a statement saying it was not a failure of their hardware (which, from my experience with their technology, is most likely true).
But even setting aside specialized sensors, the video footage alone makes it clear that something went wrong here.
Using just the video footage supplied, I ran some frames through a state-of-the-art neural network called Mask R-CNN, trained on the COCO dataset. Note that COCO isn't an autonomous-driving dataset, but it does contain people, cars, and bicycles, so it is relevant here. Below is the algorithm's output on some frames preceding the accident (more than a second before).
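To make the experiment concrete, here is a minimal sketch of the kind of pipeline I used: run a COCO-pretrained Mask R-CNN over a frame and keep only the detections that matter in this scene (person, bicycle, car). This assumes a recent torchvision install; the frame filename, helper names, and 0.5 score threshold are my illustrative choices, not details of the original run.

```python
# COCO category ids for the classes relevant to this accident.
RELEVANT = {1: "person", 2: "bicycle", 3: "car"}

def filter_relevant(output, score_threshold=0.5):
    """Keep (label, score, box) triples for person/bicycle/car detections
    above the score threshold. `output` is one detection dict with
    "labels", "scores", and "boxes" entries, as torchvision returns."""
    hits = []
    for label, score, box in zip(output["labels"], output["scores"], output["boxes"]):
        if float(score) >= score_threshold and int(label) in RELEVANT:
            hits.append((RELEVANT[int(label)], float(score),
                         [float(c) for c in box]))
    return hits

def detect_frame(path, score_threshold=0.5):
    """Run torchvision's Mask R-CNN (ResNet-50 FPN, COCO weights) on one frame."""
    import torch, torchvision  # heavy dependencies, imported lazily
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        (output,) = model([to_tensor(image)])  # one result dict per input image
    return filter_relevant(output, score_threshold)

if __name__ == "__main__":
    # "frame_0120.png" is a hypothetical frame extracted from the dashcam video.
    for label, score, box in detect_frame("frame_0120.png"):
        print(f"{label}: {score:.2f} at {box}")
```

Even at a modest 0.5 confidence threshold, a person pushing a bicycle across an otherwise empty road is well within what a COCO-trained detector handles.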
These images raise some hard questions for Uber. If the Velodyne LiDAR should have caught this, and the computer vision system should have caught this, then why did the accident happen?
This suggests a most likely avoidable bug or failure in Uber's software stack, and that failure caused someone's death.
This is not a post arguing that autonomous driving is bad or that we shouldn't pursue it; we most certainly should. It is simply the first case of an easily avoidable death. We as researchers and programmers need to be careful about testing these methods and building in many layers of backup to handle easy edge cases. Companies pushing this technology should be even more rigorous, testing extensively against edge cases and shipping only software they are confident in.