Posts

Autonomous Vehicle State Policy Issues (Talk Video)

The commercial deployment of robotaxis in San Francisco has made it apparent that many issues remain to be resolved regarding the regulation and governance of autonomous vehicle technology at the state and local levels. This talk is directed at state and local stakeholders who are considering how to set policies and regulations governing this technology.

Topics:
- Getting past Automated Vehicle (AV) safety rhetoric
- AV safety in a nutshell
- Safe as a human driver on average
- Avoiding risk transfer to vulnerable populations
- Avoiding negligent computer driving
- Conforming to industry consensus safety standards
- Addressing other ethical & equity concerns

Policy points:
- Societal benefits
- Public road testing
- Municipal preemption
- SAE Level 2/2+/3 issues
- Federal vs. state regulation
- Other policy issues
- Revisiting common myths

Slides: https://users.ece.cmu.edu/~koopman/lectures/L136-AV_State_Policy_Issues.pdf
YouTube video: https://youtu.be/6YTSjmxu-mI
YouTube playlist of individual slide videos:

AV Safety Claims and More on My Congressional Testimony

I recently had the privilege of testifying before the US House E&C committee on self-driving car safety. You can see the materials here:
- The nearly 3-hour hearing video
- My written testimony
- Basic hearing information (other written testimony in the AGENDA bar)
- Document repository (might have other documents later)

A venue like this does not offer the best forum for nuance. In particular, one can make a precise statement for a reason and have that statement misunderstood (because there is limited time to explain) or misconstrued. The result can be talking past each other for reasons ranging from simple misunderstanding, to ideological differences, to the other side needing to show they are right regardless of the counter-arguments. I do not attempt to cover all topics here; just the ones that feel like they could use some more discussion. (See my written testimony for the full list of topics.) The venue also requires expressing opinions about the best path forward, which can legitimate

AV Safety and the False Dilemma Fallacy

The current AV company messaging strategy is a classic case of a false dilemma fallacy. They frame the situation as a choice between continued human drivers killing people (without statistical context) vs. immature robotaxis that don't drink and drive (but make other mistakes). (Wikipedia: False dilemma)

The recent Cruise ad in particular is a plainly ridiculous doubling-down on the industry's long-discredited propaganda playbook.
Cruise NY Times ad: https://twitter.com/kvogt/status/1679517290847694848
Analysis of AV industry playbook: https://www.eetimes.com/autonomous-vehicle-myths-the-dirty-dozen/

A more reasonable message would be: cities need robotaxis for <reasons>, and robotaxi companies will use <defined, balanced metrics, stated in advance rather than cherry-picked later> to show they are no worse than human drivers during development, with monthly report card disclosures. Improved safety comes later -- we all hope. Here is where things really stand: It is

A Liability Approach for Automated Vehicles in Three Parts

I'm delighted that months of collaboration with co-author and law professor William Widen have resulted in a trio of papers that together provide a framework for resolving the vast majority of automated vehicle legal questions. Product liability will still be a thing, but that should be reserved for its more usual role, and not be the sole means of recourse for everyday Computer Driver road mishaps that will displace the everyday Human Driver road mishaps. A tort law approach based on negligence is a far better fit and will require far less disruption to existing legal and regulatory systems while providing a fair basis for compensation for anyone harmed by this novel technology. The three parts are in three separate SSRN papers intended to be used as a set, although each paper is self-contained. Below are very simplified summaries to give an overview.
Slides for a talk summarizing these ideas are here: https://users.ece.cmu.edu/~koopman/lectures/L134_SharedResponsibilitySafety.pd

Kia power door pinch recall -- what should be done?

Sometimes how to handle a safety issue is not so clear. The Kia power sliding door recall is an interesting example of the types of issues that can come up that are not clear-cut.
- Nine confirmed injuries due to a minivan power door sliding closed on people, ranging from bruising to a fractured thumb to a broken arm.
- Investigation reveals that the system works exactly as designed, noting that auto-reverse to prevent injury is a "supplemental" rather than mission-critical safety feature (i.e., best effort is permissible, and their idea of best effort includes a broken arm).
- Investigation results claim all the competitor sliding doors are just as dangerous mechanically (comparable closing force/reversal properties).
- The remedy slows down the final inches of closing and sounds a couple of warning beeps when moving the door.
That does not sound like a particularly robust fix. It sounds like "we need to do something, how about this..." (And why weren't these in place bef

A Liability-Based Regulatory Framework for Vehicle Automation Technology

State liability laws might be the way out of the automated vehicle regulatory dilemma. From phantom braking, to reckless public road testing, to permitting the use of human drivers as moral crumple zones, vehicle automation regulation is a hot mess. States are busy creating absurd laws that assign safety responsibility to a non-legal-person computer, while the best the feds can do under the circumstances is play recall whack-a-mole with unsafe features that are deployed faster than they can investigate. What has become clear is that attempting to regulate the technology directly is not working out. In the long term it will have to be done, but we will likely need to see fundamental changes at US DOT before we see viable regulatory approaches to automated vehicles. (As a start, they need to abandon the use of SAE Levels for regulatory purposes.) That process will take years and, if history is any guide, one or more horrific tragedies before things settle out. Meanwhile, as companies aggressive

Insurance Does Not (and will not) Make AVs Acceptably Safe

I frequently hear arguments that insurance will make autonomous vehicles (AVs) safe. For example: "the insurance company issued a policy, so the AV must be safe," and "economic pressure from insurance premiums will ensure safety." While it is true that insurance premium pressure (and insurance companies) will mitigate egregiously dangerous AVs, they have nowhere near enough power to enforce safety acceptable to many stakeholders in an industry of risk-takers chasing a trillion-dollar market.

Insurance policies do not make you safe
Getting an insurance policy does not mean you are objectively “safe.” You can insure plenty of things that might be considered risky by everyday standards: skydiving injury insurance, commercial rocket launch payload insurance, marine piracy insurance, [1] and life insurance for front-line military personnel are all routinely issued. An insurance company issuing a policy does not mean any particular activity in general or AV in particular is