Monday, April 2, 2018

A Way Forward for Safe Self-Driving Car Road Testing

The self-driving car industry's reputation has suffered a setback, but the resulting debate about whether autonomy is safe for road use is focused on the wrong problem.

Dashcam image from the mishap, released by the Tempe, AZ police


The recent tragic fatality in Arizona, in which a pedestrian was hit and killed by a self-driving car test vehicle, has sent a shock wave through the industry. At least some companies have suspended their road testing while they wait to see how things develop. Uber just had its Arizona road testing permission revoked, and apparently will not be testing in California any time soon. The longer-term effects on public enthusiasm for the technology remain unclear.

Proponents of self-driving car testing point out that human drivers are far from perfect, and compare the one life lost to the many lives lost every day in conventional vehicle crashes. Opponents say that potentially avoidable fatalities due to immature technology are unacceptable. The debate about whether self-driving car technology can be trusted with human lives matters in the long term, but it isn't about what actually went wrong here.


What went wrong is that the vehicle that killed Elaine Herzberg was a test vehicle -- not a fully formed autonomous vehicle. The vehicle involved wasn't supposed to have perfect autonomy technology at all. Rather, it had unproven systems still under development, and a safety driver who was supposed to ensure that failures in the technology did no harm. Whether the autonomy failed to detect a mid-block pedestrian crossing at night isn't the real issue. The real issue is why the safety driver didn't avoid the mishap despite any potential technology flaw.


The expedient approach of blaming the safety driver (or the victim) won't make test vehicles safer. We've known for decades that putting a single human in charge of supervising a self-driving car, with no way to ensure attentiveness, is asking for trouble. And we know that pedestrians don't always obey traffic rules. So to be really sure these vehicles are safe on public roads, we need to dig deeper.


Fortunately, there is a way forward that doesn't require waiting for the next death to reveal problems in some other company's prototype technology, and doesn't require developers to prove that their autonomy is perfect before testing. That way simply requires treating these vehicles as the test vehicles they are, not as fully formed self-driving cars. Operators should not be required to prove their autonomy technology is perfect. But they should be required to explain why their on-road testing approach is adequately safe, beyond simply saying that a safety driver is sitting in the vehicle.

A safety explanation for a self-driving test platform doesn't have to expose anything about proprietary autonomy technology, and doesn't have to be incredibly complicated. It might be as simple as a three-part safety strategy: (1) proactively ensure the safety driver always pays attention; (2) ensure that the safety driver has time to react if a problem occurs; and (3) ensure that when the safety driver does react, the vehicle will obey the safety driver's commands. Note that whether the autonomy fails doesn't enter into it, other than ensuring that the autonomy gets out of the way when the safety driver needs to take over.
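To make the shape of that strategy concrete, here is a minimal sketch of the kind of supervisory logic it implies. This is purely illustrative: every name and threshold is hypothetical, and it does not describe any developer's actual system. The key property is part (3): driver input always wins, no matter what the autonomy thinks it is doing.

```python
# Illustrative sketch only -- not any company's actual architecture.
# It encodes the three-part test-platform strategy described above:
# (1) monitor safety driver attention, (2) keep the timeout short enough
# to preserve reaction time, (3) let driver commands always override the
# autonomy. All names and threshold values are hypothetical.

from dataclasses import dataclass

ATTENTION_TIMEOUT_S = 2.0      # hypothetical gaze-off-road limit
DRIVER_INPUT_THRESHOLD = 0.05  # hypothetical "driver acted" threshold

@dataclass
class Command:
    steering: float  # normalized, -1.0 .. 1.0
    braking: float   # normalized,  0.0 .. 1.0

def arbitrate(autonomy_cmd: Command,
              driver_cmd: Command,
              gaze_off_road_s: float) -> Command:
    """One tick of a test-platform supervisor's control arbitration."""
    # Part (3): any detectable safety driver input wins immediately.
    # The autonomy must get out of the way, never fight the driver.
    if (abs(driver_cmd.steering) > DRIVER_INPUT_THRESHOLD
            or driver_cmd.braking > DRIVER_INPUT_THRESHOLD):
        return driver_cmd

    # Part (1): if the driver has stopped supervising, testing is no
    # longer safe; command a controlled stop. Part (2) lives in the
    # timeout value, which must be short enough that the driver can
    # still react once re-alerted.
    if gaze_off_road_s > ATTENTION_TIMEOUT_S:
        return Command(steering=0.0, braking=1.0)

    # Otherwise the system under test drives, with the driver watching.
    return autonomy_cmd
```

Notice that nothing in this loop depends on how good the autonomy is; the safety of the test rests entirely on knowing that the driver is attentive and able to take over.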

There is no doubt that making a credible safety explanation for a self-driving car test platform requires some effort. The fact that safety drivers can lose focus after only a few minutes must be addressed somehow. That could be as simple as using two safety drivers, as many companies do, if it can be shown that paired drivers actually achieve continuous attentiveness for a full test session. Possibly some additional monitoring technology is needed to track driver eye gaze, driver fatigue, or the like. It must also be clear that the safety driver has a way to know if the autonomy is making a mistake, such as failing to detect unexpected pedestrians. For example, a heads-up display that outlines pedestrians as seen through the windshield could make it clear to the eyes-on-the-road safety driver when one has been missed. And the safety explanation has to deal with realities such as pedestrians who are not on crosswalks. Doing all this might be a challenge, but the test system doesn't have to be perfect. Arguably, the test vehicle with its safety driver(s) just has to be as good as a single human driver in an ordinary vehicle -- which the self-driving car proponents remind us is far from perfect.
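As one example of what the attentiveness piece might look like, here is a minimal sketch of a gaze-based watchdog, assuming some upstream camera-based gaze estimator already exists. The thresholds are invented for illustration and would need validation against human factors data; the log it keeps is the kind of evidence a safety explanation could cite to show attentiveness really was maintained for a full session.

```python
# Illustrative sketch only. Assumes an upstream gaze estimator reports
# whether the safety driver is looking at the road; thresholds are
# hypothetical and would require human factors validation.

import time
from typing import Optional

ALERT_AFTER_S = 1.5     # hypothetical: chime when gaze leaves the road
ESCALATE_AFTER_S = 3.0  # hypothetical: alarm, or end the test session

class AttentionWatchdog:
    """Escalating watchdog over the safety driver's gaze, keeping a log
    that can serve as evidence that attentiveness was actually maintained
    for the duration of a test session."""

    def __init__(self):
        self.off_road_since = None
        self.log = []  # (timestamp, event) pairs for post-session review

    def update(self, gaze_on_road: bool,
               now: Optional[float] = None) -> str:
        """Call periodically; returns 'ok', 'alert', or 'escalate'."""
        now = time.monotonic() if now is None else now
        if gaze_on_road:
            self.off_road_since = None
            return "ok"
        if self.off_road_since is None:
            self.off_road_since = now
        elapsed = now - self.off_road_since
        if elapsed >= ESCALATE_AFTER_S:
            self.log.append((now, "escalate"))
            return "escalate"  # loud alarm; co-driver or stop required
        if elapsed >= ALERT_AFTER_S:
            self.log.append((now, "alert"))
            return "alert"     # chime to recapture attention
        return "ok"
```

The same structure extends naturally to paired drivers: each driver's gaze feeds a separate watchdog, and the session is acceptable only if at least one of them reports "ok" at all times.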

I live in Pittsburgh, where I have seen plenty of self-driving car test vehicles, including while I was out walking as a pedestrian. Until now I've just had to take it on faith that the test vehicles were safe. Self-driving cars have tremendous long-term promise. But for now, I'd rather have some assurance, preferably from independent safety reviewers, that these companies are actually getting their test platform safety right.

Philip Koopman, Carnegie Mellon University.


Author info: Prof. Koopman has been helping government, commercial, and academic self-driving developers improve safety for 20 years.
Contact:  koopman@cmu.edu

This essay was originally published in EE Times:
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333143
