Posts

The NIEON Driver Benchmark and the Two-Sided Coin for AV Safety

Waymo just published a study showing that they can outperform an unimpaired (NIEON) driver on a set of real-world crashes. That's promising, but it's only half the story. To be a safe driver, an Autonomous Vehicle (AV) must not only be good -- it must also not be bad. Those are two different things. Let me explain... Learning to drive includes learning how to drive well. It also includes learning to avoid making avoidable mistakes. They're not the same thing. A Sept. 29, 2022 blog posting by Waymo explains their next step in showing that their automated driver can perform better than people at avoiding crashes. The approach is to recreate crashes that happened in the real world and show that their automated driver would have avoided them. For this newest work they compare their driver to not just any human, but a Non-Impaired, with Eyes always ON the conflict (NIEON) driver. They correctly point out that no human is likely to be this good (you gotta blink, right?) ...

Book: How Safe is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety

How Safe Is Safe Enough for Autonomous Vehicles? The Book. The most pressing question regarding autonomous vehicles is: will they be safe enough? The usual metric of "at least as safe as a human driver" is more complex than it might seem. Which human driver, under what conditions? And are fewer total fatalities OK even if it means more pedestrians die? Who gets to decide what safe enough really means when billions of dollars are on the line? And how will anyone really know the outcome will be as safe as it needs to be when the technology initially deploys without a safety driver? This book is written by an internationally known expert with more than 25 years of experience in self-driving car safety. It covers terminology, autonomous vehicle (AV) safety challenges, risk acceptance frameworks, what people mean by "safe," setting an acceptable safety goal, measuring safety, safety cases, safety performance indicators, deciding when to deploy, and ethical AV deployment.

The Autonomous Vehicle Deployment Governance Problem

The #1 ethical issue in autonomous vehicles is not the infamous Trolley Problem. It is the question of who gets to decide when it is OK to deploy a vehicle without a safety driver on public roads. Consider a thought experiment which, if you follow AV industry news, you might recognize as not entirely hypothetical. You, the reader, are in charge of a company that needs to do a public road demonstration with no driver in an AV. You know that safety is not where you would like it to be. In fact, you have no safety case at all. You might not even have any real safety engineers on staff. But you have a smart, super-capable team. You have done a lot of test driving and it is going pretty well. You intuitively figure it is more likely than not that you can pull off a one-time demo without a crash, and less likely still that a crash will kill someone. You figure you have something like 5 chances out of 6 of pulling off the demo with nobody getting hurt ...

FMVSS Exemption Considerations for Fully Autonomous Vehicles

Summary: The human driver's role in FMVSS should not simply be removed from an exemption request because there is no driver. Rather, the request should state how the driver is being replaced in that safety role as a matter of system design. These comments were filed regarding the General Motors Petition for Temporary Exemption from FMVSS -- Federal Motor Vehicle Safety Standards ( https://www.regulations.gov/document/NHTSA-2022-0067-0002 ). However, they are likely to apply to any autonomous vehicle that does not have conventional driver controls or operates in a mode which does not have an officially designated driver. It is good to see companies working to advance the potential benefits of autonomous vehicle technology. However, it is also important for NHTSA to ensure public safety when such technology is deployed on public roads. There will almost certainly need to be an auxiliary controller to command motion of vehicles without normally accessible driver controls. ...

Continuous Learning Approach to Safety Engineering

Continuous Learning Approach to Safety Engineering. Rolf Johansson & Philip Koopman / CARS @ EDCC 2022. Abstract: A phase change moment is upon us as the automotive industry moves from conventional to highly automated vehicle operation, with questions about how to assure safety. Those struggles underscore larger issues with current functional safety standards in terms of a need to strengthen the traceability between required practices and safety outcomes. There are significant open questions regarding both the efficiency and effectiveness of standards-based safety approaches, including whether some engineering practices might be dropped, or whether others must be added to achieve acceptable safety outcomes. We believe that rather than an incremental approach, it is time to rethink how safety standards work. We propose that real-world field feedback for an initially safe deployment should support a DevOps-style continuous learning approach to lifecycle safety. ...

The TuSimple crash raises safety culture questions

A WSJ scoop confirms the TuSimple April 6th crash video and FMCSA investigation. See: https://www.wsj.com/articles/self-driving-truck-accident-draws-attention-to-safety-at-tusimple-11659346202?st This article validates what we saw in this video showing the crash: https://lnkd.in/ebGF4Quv TuSimple blames human error, but it sounds more like a #moralcrumplezone situation, with serious safety culture concerns if the story reflects what is actually going on there. A left turn command was pending from a disengagement minutes earlier. When re-engaged, the system started executing a sharp left turn while at 65 mph on a multi-lane highway. That resulted in crossing another traffic lane and a shoulder, and then hitting a barrier. The safety driver reacted as quickly as one could realistically hope -- but was not able to avoid the crash. A slightly different position of an adjacent vehicle would have meant crashing into another road user (the white pickup truck in an adjacent lane) ...

Blame should not be a factor in AV incident reports

A proposal: Any mention of blame in an autonomous vehicle incident report should immediately discredit the reporting entity's claims of caring about safety and tar them with the brush of "safety theater." Reading mishap reports from California and NHTSA data, it is obvious that many AV companies are trying as hard as they can to blame anyone and anything other than themselves for crashes. That's not how you get safety -- that's how you do damage control. Sure, in the near term people might buy the damage control. And for many who see only a passing mention, it sticks permanently (think of the "pedestrian jumped out of the shadows" narrative for the Uber ATG fatality -- the opposite of what actually happened). But that only gets short-term publicity benefit at the expense of degrading long-term safety. If companies spin and distort safety narratives for each crash, they do not deserve trust for safety. ...