There is a case to be made that at-scale AV deployments should be at least ten times safer than human drivers, and perhaps even safer than that. The rationale for this large margin is to leave room for the effects of uncertainty by incorporating a safety factor of some sort.[1]
Consider all the variables and uncertainty
discussed in this chapter. We have seen significant variability in fatality and
injury rates for baseline human drivers depending on geographic area, road
type, vehicle type, road user types, driver experience, and even passenger age.
All those statistics can change year by year as well.
Additionally, even if one were to create a precise
model for acceptable risk for a particular AV’s operational profile within its
ODD, there are additional factors that might require an increase:
· Human biases both to want an AV safer than their own driving and to over-estimate their own driving ability, as discussed in a previous section. In short, drivers want an AV driving their vehicle to be better than they think they are, rather than better than they actually are.
· Risk of brand tarnish from AV crashes, which are treated as more newsworthy than human-driven vehicle crashes of comparable severity. Like it or not, AV crashes are going to be covered by news outlets as a consequence of the same media exposure that created interest in and funding for AV developers. Even if AVs are exactly as safe as human drivers in every respect, each highly publicized crash will call AV safety into question and degrade public trust in the technology.
· Risk of liability exposure to the degree that AV crashes are treated as being caused by product defects rather than human driver error. For better or worse (mostly for worse), a great many traffic fatalities are attributed to "driver error" rather than to equipment failure or unsafe infrastructure design. Insurance tends to cover the costs. Even when the judicial system is invoked for drunk driving or the like, the consequences tend to be limited to the participants of a single mishap, and the limits of personal insurance coverage cap the practical size of monetary awards in many cases. However, the stakes might be much higher for an AV if it is determined that the AV is systematically prone to crashes in certain conditions or is overall less safe than a human driver. A product defect legal action could affect an entire fleet of AVs and expose a deep-pockets operator or manufacturer to a large payout. Being seen to be dramatically safer than human drivers could both mitigate this risk and provide a better argument that the AV developer behaved responsibly.
· The risk of not knowing how safe the vehicle is. The reality is that it will be challenging to predict how safe an AV is when it is deployed. What if the safety expectation is too optimistic? Human-driven vehicle fatalities in particular are so rare that it is not practicable to accumulate enough road experience to validate fatality rates before deployment. Simulation and other measures can be used to estimate safety but will not provide certainty. The next chapter discusses this in more detail.
Taken together, there is an argument to be made
that AVs should be safer than human drivers by about a factor of 10 (being a
nice round order of magnitude number) to leave engineering margin for the above
considerations. A similar argument could be made for this margin to be an even
higher factor of 100, especially due to the likelihood of a high degree of
uncertainty regarding safety prediction accuracy while the technology is still
maturing.
The factor of 100 is not to say that the AV must be
guaranteed to be 100 times safer. Rather, it means that the AV design team
should do their best to build an AV that is expected to be 100 times safer plus
or minus some significant uncertainty. The cumulative effect of uncertainties in
safety prediction, inevitable fluctuations in operational exposure to risky
driving conditions, and so on might easily cost a factor of 10 in safety.[2] That
will in turn reduce achieved safety to “only” a factor of 10 better than a
baseline human driver. That second factor of 10[3] is
intended to help deal with the human aspect of expectations being not just a
little better than the safety of human drivers, but a lot better, the risk of
getting unlucky with an early first crash, and so on.
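The arithmetic behind this margin argument can be made concrete with a small sketch. The specific penalty values below are hypothetical examples chosen to illustrate the point, not figures from the book: a nominal design target of 100x, eroded by multiplicative uncertainty penalties (for example, safety-prediction error and fluctuations in operational exposure) that together cost a factor of 10, leaving an achieved margin of "only" 10x.

```python
# Illustrative sketch of how a nominal safety target erodes under
# multiplicative uncertainty penalties. All factor values here are
# hypothetical examples, not measured data.

def achieved_safety_factor(design_target, uncertainty_factors):
    """Divide the nominal design target by each multiplicative
    uncertainty penalty to estimate the achieved safety factor
    relative to a baseline human driver."""
    achieved = design_target
    for factor in uncertainty_factors:
        achieved /= factor
    return achieved

# Hypothetical penalties: 5x for safety-prediction error and
# 2x for fluctuations in exposure to risky driving conditions,
# together costing a factor of 10.
margin = achieved_safety_factor(100.0, [5.0, 2.0])
print(margin)  # 10.0 -- "only" 10x safer than the human baseline
```

Because the penalties combine multiplicatively, a design team that targets only 10x from the start could plausibly end up at parity with human drivers once the same uncertainties are applied, which is the core of the argument for the larger margin.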
Waiting to deploy until vehicles are thought to be 100 times safer than humans is not a message investors and design teams are likely to want to hear. It is, however, a conservative way to think about safety that leaves room for the messiness of real-world engineering when deploying AVs. Any AV deployed will have a safety factor over (or under) Positive Risk Balance (PRB). The question is whether the design team will manage their PRB safety factor proactively. Or not.
[1] Safety factors and derating are ubiquitous in non-software engineering. It is common to see safety factors of 2 for well-understood areas of engineering, but values can vary. A long-term challenge for software safety is understanding how to make software twice as "strong" for some useful meaning of the word "strong."
Over-simplifying, with mechanical structures, doubling the amount of steel
should make it support twice the load. But with software, adding twice the
number of lines of code just doubles the number of defects, potentially making
the system less reliable instead of more reliable unless special techniques are
applied very carefully. And even then, predicted improvement can be
controversial.
See: https://en.wikipedia.org/wiki/Factor_of_safety
https://en.wikipedia.org/wiki/Derating
and https://en.wikipedia.org/wiki/N-version_programming
[2] For better or worse – but given the optimism ingrained in most
engineers, probably not for better.
[3] Some good news here – by
the time you have a safety factor of 10 or more, nuances such as driver age and
geofence zip codes start being small compared to the safety factor. If someone
says they have a safety factor of 10, it is OK not to sweat the small stuff.
This is an adapted excerpt (Section 5.3.3) from my book: How Safe is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety