
Friday, December 2, 2022

Blaming the autonomous vehicle computer as a regulatory strategy

The AV industry has been successfully pursuing state regulations to blame the computer for any crashes by saying that the Automated Driving System (the computer) is considered to be the driver of any AV operating on public roads. That way there is no person at fault for any harm to road users. Yes, really, that is what is going on.[1]

[Image: Person pointing a finger at a computer]

The general AV industry tactic when lobbying for such rules is to argue that when fully automated driving is engaged, the “driver” is the driving computer (the ADS). Any remote safety supervisor is just there to lend a hand. In some states a remote human support team member need not have an appropriate driver’s license, because the ADS is said to be the driver. Superficially this seems to make sense. After all, if you are a passenger who has paid for a retail robotaxi ride and the AV breaks a traffic law due to some flaw in the design, you as the passenger should not be the one to receive a ticket or go to jail.

But the tricky bit is that ADS computers are not afforded the legal status of being a “person” – nor should they be.[2] Corporations are held to be fictitious people in some legal circumstances, but a piece of equipment itself is not even a fictitious person.[3]

If a software defect or improper machine learning training procedures result in AV behavior that would count as criminally reckless driving if a human were driving, what happens for an AV? Perhaps nothing. If the ADS is the “driver” then there is nobody to put on trial or throw into jail. If you take away the driver’s license for the ADS, does it get its license back with the next software update?[4] Where are the repercussions for an ADS being a bad actor? Where are the consequences?

Blaming the ADS computer for a bad outcome removes much of the deterrence that negative consequences normally provide, because the ADS does not fear being harmed, destroyed, locked up in jail, fined, or having its driver’s license revoked. It does not feel anything at all.

A related tactic is to blame the “operator” or “owner” for any crash. In the early days of AV technology these roles tended to be filled by either the technology developer or a support contractor, but that will change over time. Contractors perform testing operations for AV developers. Individual vehicle owners are operators for some AV technology road tests. Other AV operators might work through a transportation network service. Someone might buy an AV in the manner of a rental condo and let it run as a robotaxi while they sleep.

Imagine an arrangement in which an investor buys a share in a group of robotaxis as might be done for a timeshare condo. A coordinator lines up independent contractors to manage investment money, negotiate vehicle purchases, arrange maintenance contracts, and participate in a ride-hailing network. Each AV is the sole asset of a series LLC to act as a liability firewall between vehicles. The initial investor later sells their partial ownership shares to an investment bank. The investment bank puts those shares into a basket of AV ownership shares. Various municipal retirement funds buy shares of the basket. At this point, who owns the AV has gotten pretty complicated, and there is no substantive accountability link between the AV “owner” and its operation beyond the value of the shares.

Then a change to the underlying vehicle (which was not originally sold as an AV platform, but rather was adapted by an upfitter contractor) impairs the functionality of the aftermarket add-on ADS, manufactured by a company that is no longer in business. If there is a crash, who is the “operator”? Who is the “owner”? Who should pay compensation for any harm done by the AV? If the resultant ADS behavior qualifies as criminally negligent reckless driving, who should go to jail? If the answer is that nobody goes to jail and that only the state minimum insurance of, say, $25K pays out, what is the incentive to ensure that such an arrangement is acceptably safe, so long as the insurance is affordable compared to the profits being made?

While the usual reply to concerns about accountability is that insurance will take care of things, recall from our earlier discussions of insurance that risk management can be an insufficient incentive to ensure acceptable safety, especially when coverage only meets a low state minimum insurance requirement[5] originally set for human drivers who have skin in the game for any crashes.


[1] For a compilation of US state laws and legislative hearing materials see: https://safeautonomy.blogspot.com/2022/02/kansas-av-regulation-bill-hearings.html

[2] Despite occasional hype to the contrary, machine learning-based systems are nowhere near achieving sentience, let alone being reasonably qualified to be a “person.”

[3] I am not a lawyer (IANAL/TINLA), so this is a lay understanding of the rules that apply and nothing in this should be considered as legal advice.

[4] In several states an ADS is automatically granted a driver’s license even though it is not a person. It might not even be possible to take that license away.

[5] IIHS/HLDI keeps a list of autonomous vehicle laws including required insurance minimums. The $1M to $5M requirements fall short of the $12M statistical value of a human life, and are typically per incident (so multiple victims split that maximum). In other states the normal state insurance requirement can apply, which can be something like a maximum of $50,000 per incident and might permit self-insurance by the AV company, as is the case in Kansas: https://insurance.kansas.gov/auto-insurance/ This maximum insurance payout requirement is less than the cost of a typical AV. In practice it might be the case that victims are limited to recovering the insurance payout plus the scrap value of whatever is left of the AV after a crash, with everyone else being judgment-proof.

Friday, November 11, 2022

The AV Blame Game

Assigning blame does not make roads safer. Rather, blaming is most commonly used to evade responsibility for mitigating a safety problem.

[Image: Two robots arguing after a car crash]

The blame game is played by AV companies when they find some reason – any reason will do – for an AV crash that is not the fault of the AV itself. Candidates for blame include the safety driver, drivers of other vehicles, jaywalking pedestrians, and possibly unexpected conditions. A cousin of the blame game is claiming that the AV acted in a lawful manner even if doing so was clearly inappropriate for the situation. At a deeper level, the blame game is an extension of the tactic of blaming human drivers for being imperfect to deflect attention away from operational flaws with AVs.

The reality is that placing blame does not make streets safer. Driving involves a continual stream of social interactions with other drivers in which, hopefully, most drivers follow most of the rules most of the time. Importantly, drivers are expected to compensate for mistakes and any lack of rule following by other drivers to the degree they can.[1]

For every AV crash in which the AV design team insists some other party should be blamed, an essential follow-up question is whether the AV could have done something to avoid the crash, even if that something is not strictly required by the rules of the road. Any generally useful response that might have avoided the crash should be added to the AV’s behavioral repertoire.

As a hypothetical example, when encountering a wrong-way driver it is likely better for an AV to pull to the side of the road than to continue driving in-lane until impact. This is the case even though the AV has right of way, and might be fully justified by the rules of the road in continuing to drive in its lane right into the impending crash. At worst, pulling to the side of the road reduces the relative impact speed. At best an impact is avoided as the other vehicle continues driving the wrong way in the travel lane. And who knows – it is possible that the AV itself was the vehicle going in the wrong direction due to a mapping error or other issue.[2] Blaming the other vehicle for wrong-way driving post-crash provides cold comfort to the families of the victims.

At a higher level, blame is irrelevant for determining AV safety. The crash rate is what it is, regardless of blame. Consider an AV that has twice as many crashes as human-driven vehicles, but could theoretically prove in a court of law that every single crash was someone else’s fault. Such a perfectly blameless vehicle would nonetheless have a track record of being twice as dangerous as a human-driven vehicle. That type of approach should not be how AV designers claim their vehicles are safe.


[1] As an example, pedestrians are not supposed to cross mid-block, but if they do so, vehicles have an obligation to make best efforts to stop to avoid a collision. In states with this rule, an AV that does not make a reasonable attempt to stop to avoid hitting a jaywalking pedestrian is failing to abide by the rules of the road.

[2] Yes, AV tests traveling the wrong way is a thing. See: https://qz.com/798092/a-self-driving-uber-car-went-the-wrong-way-on-a-one-way-street-in-pittsburgh/

Also, see a related video here: https://youtu.be/Ao2qssbXDXo