Tuesday, March 21, 2023

A Liability-Based Regulatory Framework for Vehicle Automation Technology

State liability laws might be the way out of the automated vehicle regulatory dilemma. From phantom braking to reckless public road testing to using human drivers as moral crumple zones, vehicle automation regulation is a hot mess. States are busy creating absurd laws that assign safety responsibility to a non-legal-person computer, while the best the feds can do under the circumstances is play recall whack-a-mole with unsafe features that are deployed faster than they can be investigated.

What has become clear is that attempting to regulate the technology directly is not working. In the long term it will have to be done, but we will likely need to see fundamental changes at US DOT before we see viable regulatory approaches to automated vehicles. (As a start, they need to abandon the use of SAE Levels for regulatory purposes.) That process will take years and, if history is any guide, one or more horrific tragedies before things settle out. Meanwhile, as companies aggressively exploit the "Level 2 loophole," it is the wild west on public roads. Companies take safety with varying degrees of seriousness, but there is a dramatic lack of transparency and accountability across the industry that will only get worse with time.

As a short- to mid-term approach, we should revisit how liability laws work at the state level to buy time for the technology to mature while avoiding needless harm to constituents. Three fundamental changes make the current tort system unworkable in practice for automated vehicle technology:

#1: Machine learning-based technology is inherently unsuited to traditional software safety analysis. The current legal system, which puts the burden of showing technology is defective on victims, is simply not viable when even the engineers who designed a system can't necessarily explain why the computer driver did what it did.

#2: Asymmetric access to information makes it easy for car companies to know what happened in a crash (or even whether automated driving was activated), but very difficult for victims to access, much less interpret, that information.

#3: The litigation cost of pursuing a claim against software with non-deterministic defects that require source code analysis is huge, depriving all but the largest cases of an effective ability to prove a product defect claim, even when one is justified.

In response to these realities, a (rebuttable) presumption of liability should attach to manufacturers, and the burden of proof should shift to them, in situations for which it is unreasonable to expect a civilian human driver to be able to ensure safety. The attached summary sketches an approach, with more detail to come.

Read the one-pager policy summary here: https://archive.org/details/2023-03-av-liability-one-pager-published-v-1-00