Wednesday, December 21, 2022

Holiday AV Safety Video Viewing

Daily video & reading suggestions for the holiday season for those into autonomous vehicle safety. Primarily based on new materials from 2022 that you might have missed.

Friday, December 16, 2022

I have to get out NOW from my autonomous vehicle: urgent egress and passenger overrides

What if you need to get out of a robotaxi RIGHT NOW -- is that allowed? What are the implications?

Woman looking out a car window.


In any automated system there will be times when an occupant wants to override the automation, especially when they want to exit a moving automated vehicle. Reasons might include: wanting to re-open transit vehicle doors if a passenger was unable to exit in time at their stop; an attack of claustrophobia; wanting to get away from another passenger due to personal safety concerns; or even needing to escape a cabin fire. Some egress requests might constitute misuse or abuse, such as stopping a vehicle to intentionally block traffic, or intentionally accessing an off-limits area such as a bridge with no pedestrian infrastructure.

Creating a complete list of all possible motivations is difficult, and weighing the merits of all such egress attempts in advance seems intractable. Nonetheless, there are times when a passenger's desire to exit a moving vehicle should be honored, although the vehicle should likely at least stop before permitting an exit.
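
As a concrete sketch of that "stop before exit" idea, here is a minimal, hypothetical egress-handling state machine. The states, speed threshold, and function name are invented for illustration and are not drawn from any deployed system; the point is only that honoring an exit request involves first reaching a stop, not unlocking doors at speed.

```python
from enum import Enum, auto

class EgressState(Enum):
    DRIVING = auto()
    STOPPING = auto()        # executing a controlled stop or pull-over
    STOPPED_LOCKED = auto()
    UNLOCKED = auto()

# Hypothetical threshold: treat the vehicle as stopped below this speed.
STOPPED_SPEED_MPS = 0.1

def handle_egress_request(state: EgressState, speed_mps: float) -> EgressState:
    """Advance the egress state machine one step while an exit request is pending."""
    if state == EgressState.DRIVING:
        # Honor the request, but begin a controlled stop rather than unlocking now.
        return EgressState.STOPPING
    if state == EgressState.STOPPING and speed_mps <= STOPPED_SPEED_MPS:
        return EgressState.STOPPED_LOCKED
    if state == EgressState.STOPPED_LOCKED:
        # Unlock only once fully stopped; a real design would also consider
        # location, surrounding traffic, and whether the stopping spot is safe.
        return EgressState.UNLOCKED
    return state
```

Even this toy sequencing leaves open the harder policy questions: whether the stop should be immediate or deferred to the next safe pull-over spot, and who gets to make that call.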

In still other situations passengers might want to force an otherwise stopped vehicle to move. One reason might be fear for personal safety if threatened by malicious actors while stopped at a traffic light. Another reason might be overriding a police stop if the vehicle occupant suspects the officer making the stop is actually a criminal imposter, at least until the legitimacy of the police stop can be confirmed via contact with an emergency dispatcher.[1]

Another special situation is one in which a passenger has a compelling reason to order an AV to operate outside its ODD or with degraded equipment in an emergency, even if doing so will result in a reduced safety margin. For example, an AV might be programmed not to drive through heavy smoke, but doing so might be required to escape a burning town in a wildfire situation. The AV occupant might want to take the chance of driving rather than remaining in the burning town.[2]

Human drivers have the authority to deal with these situations so long as they are willing to accept the responsibility. Do you start driving when someone is trying to forcibly break into your car at a traffic light even if you might injure that malicious actor by doing so? The choice – and responsibility for consequences – falls upon the person driving a manually driven car.

The question is: to what degree should an AV support occupant overrides of safety-relevant behaviors? A complication is that there might not be a responsible individual in the vehicle to exercise control. What if a passenger is allowed to override some behaviors of the vehicle, but that passenger is impaired or not capable of exercising mature judgment? Should an 8-year-old riding solo be able to command vehicle safety overrides?[3]

How much control a passenger should have over AV operation will depend on how stringent the qualifications are for a passenger to be considered capable of mature decision making. It is easy to say that there must be at least one qualified driver on board whenever there are passengers in an AV, and that manual controls must be available if needed. However, requiring a qualified driver undermines the potential benefits that AVs might provide for those who are not capable of driving or should not be driving at a particular time.

If anyone other than an unimpaired licensed driver is permitted to override AV behaviors, there will be difficult tradeoffs as to which overrides should be permitted. An 8-year-old child likely should not be permitted to exit a school vehicle in the middle of a highway to avoid going to school. On the other hand, a 14-year-old[4] might be considered mature enough to demand an emergency stop if the vehicle tries to drive into flood waters, or to initiate an emergency exit with their younger sibling if the cabin fills with smoke from a vehicle battery fire.

Even if a passenger is an adult licensed to drive, should that adult be permitted to override vehicle behavior if drunk or otherwise impaired? If not, should the vehicle disable override capability if the passenger is drunk? Or should it be illegal to enter an AV with override capability when drunk?

While it can be an interesting exercise to conjure extreme situations, the issue of passenger overrides and egress can also be as simple as a passenger saying “I want to get out now” when the vehicle is stopped at a red traffic light but not at the end of the scheduled trip. Should the passenger be able to unlock doors and exit? Or should the passenger be kept locked inside the vehicle until the end of the trip? Should there be a workaround available, such as changing the destination? If so, should the passenger have permission to do this if some authority figure such as a parent entered the original destination? Where should the threshold be drawn for denying such a passenger request, both in terms of context (speeding down a highway vs. stopped) and passenger maturity (a passenger one day short of turning 18 but with no driver’s license vs. a grade-school-age child)?
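
To make those threshold questions concrete, here is a hypothetical policy sketch. The data fields, age cut-off, and escalation rule are assumptions invented for illustration, not a description of how any real robotaxi behaves; it simply shows how vehicle context and passenger maturity might combine into honor-or-escalate decisions.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    HONOR_AFTER_STOP = auto()    # come to a safe stop, then unlock
    ESCALATE_TO_REMOTE = auto()  # ask a remote operator to decide

@dataclass
class PassengerProfile:
    age_years: int
    licensed_driver: bool

@dataclass
class VehicleContext:
    speed_mps: float
    on_limited_access_highway: bool
    at_scheduled_stop: bool

MIN_SOLO_OVERRIDE_AGE = 14  # hypothetical cut-off, chosen only for illustration

def decide_exit_request(p: PassengerProfile, ctx: VehicleContext) -> Decision:
    """Decide how to handle a mid-trip 'I want to get out now' request."""
    if ctx.at_scheduled_stop:
        return Decision.HONOR_AFTER_STOP
    if ctx.on_limited_access_highway:
        # Exiting onto a limited-access highway is dangerous regardless of who asks.
        return Decision.ESCALATE_TO_REMOTE
    if p.age_years < MIN_SOLO_OVERRIDE_AGE and not p.licensed_driver:
        # A young child's mid-trip exit request gets routed to a human for review.
        return Decision.ESCALATE_TO_REMOTE
    return Decision.HONOR_AFTER_STOP
```

Even this toy policy embeds contestable choices: the age cut-off, which road types are treated as off-limits, and the assumption that a remote operator will respond promptly, which is exactly the scaling concern discussed next.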

There is the possibility that remote operators will need to mediate requests to override AV behavior, either routinely or when there is doubt about the competence of passengers to make reasonable decisions. However, remote operators are expensive, will have problems scaling, and might impose wait times long enough to impair safety by delaying decisions in urgent situations.[5]

For AVs to be deployed at scale, designers will need to decide how much authority passengers have to override vehicle behavior, and whether emergency manual vehicle controls will be required even in vehicles that are intended to be completely automated. There will be no perfect policy choice, but not setting a consistent policy is also a policy choice.


[1] Yes, this is a thing. Report of an accused police imposter pulling over a van full of legitimate police detectives: https://www.youtube.com/watch?v=ogGBwrrkKY4

[2] This consideration has become especially relevant for residents of California. See: https://www.insideedition.com/how-drive-through-fire-48422

[3] One might say that no 8 year old should ride in an AV solo. But if that is the case, what exactly is the cut-off age? Some public high school systems rely on public mass transit instead of dedicated school buses, so any AV public transit vehicle will have under-age ridership. Is training or perhaps even a “rider license” required to ride in an AV and use the override controls? This topic gets complex quickly.

[4] Some states issue driver licenses to 14 year-olds in special cases. Would such a driver license be required in this case? See: https://www.thedrive.com/news/39184/americas-rarest-drivers-license-lets-14-year-olds-hit-the-road-legally

[5] The usual solution proposed is remote customer service operators who intervene when needed. Those proposing that passengers need have no control because remote operators can solve all safety problems should spend more time waiting in customer service phone queues. An additional consideration is that a natural disaster is likely to disrupt emergency response services at the same time that numerous AV passenger distress situations require simultaneous attention. New Year’s Eve screening of requests from potentially drunk passengers will also be challenging.

Wednesday, December 7, 2022

SCSC Talk: Bootstrapping Safety Assurance


Abstract:
The expense and general impracticability of doing enough real-world testing to demonstrate safety for autonomous systems motivates finding some sort of shortcut. A bootstrapped testing approach is often proposed, using evidence from initial mishap-free testing to argue that continued testing is safe enough. In this talk I'll explain why pure bootstrapping based on testing exposure, as well as arguments involving "probably perfect" bootstrapping, exposes public road users to undue risk. Moreover, phased deployments that are often used to argue safe update release have the same problem. An approach that bootstraps on the safety case rather than on vehicle testing is proposed as a potentially better alternative. While the examples given involve autonomous ground vehicles, the principles involved apply to any argument that safety will be demonstrated via a bootstrap testing process.

This talk was recorded as part of the SCSC Future of Testing for Safety-Critical Systems seminar on Dec. 1, 2022.
Talks and videos are available here (access with paid annual club membership):  https://scsc.uk/e966prog

Free public-access copy of slides here: 




Friday, December 2, 2022

Blaming the autonomous vehicle computer as a regulatory strategy

The AV industry has been successfully pursuing state regulations to blame the computer for any crashes by saying that the Automated Driving System (the computer) is considered to be the driver of any AV operating on public roads. That way there is no person at fault for any harm to road users. Yes, really, that is what is going on.[1]

Person pointing a finger at a computer

The general AV industry tactic when lobbying for such rules is to argue that when fully automated driving is engaged the “driver” is the driving computer (the ADS). Any remote safety supervisor is just there to lend a hand. In some states a remote human support team member need not have an appropriate driver’s license, because it is said that the ADS is the driver. Superficially this seems to make sense. After all, if you are a passenger who has paid for a retail robotaxi ride and the AV breaks a traffic law due to some flaw in the design, you as the passenger should not be the one to receive a ticket or go to jail.

But the tricky bit is that ADS computers are not afforded the legal status of being a “person” – nor should they be.[2] Corporations are held to be fictitious people in some legal circumstances, but a piece of equipment itself is not even a fictitious person.[3]

If a software defect or improper machine learning training procedures result in AV behavior that would count as criminally reckless driving if a human were driving, what happens for an AV? Perhaps nothing. If the ADS is the “driver” then there is nobody to put on trial or throw into jail. If you take away the driver’s license for the ADS, does it get its license back with the next software update?[4] Where are the repercussions for an ADS being a bad actor? Where are the consequences?

Blaming the ADS computer for a bad outcome removes a substantial amount of deterrence due to negative consequences because the ADS does not fear being harmed, destroyed, locked up in jail, fined, or having its driver’s license revoked. It does not feel anything at all.

A related tactic is to blame the “operator” or “owner” for any crash. In the early days of AV technology these roles tended to be filled by either the technology developer or a support contractor, but that will change over time. Contractors perform testing operations for AV developers. Individual vehicle owners serve as operators for some AV technology road tests. Other AV operators might work through a transportation network service. Someone might buy an AV in the manner of a rental condo and let it run as a robotaxi while they sleep.

Imagine an arrangement in which an investor buys a share in a group of robotaxis as might be done for a timeshare condo. A coordinator lines up independent contractors to manage investment money, negotiate vehicle purchases, arrange maintenance contracts, and participate in a ride-hailing network. Each AV is the sole asset of a series LLC to act as a liability firewall between vehicles. The initial investor later sells their partial ownership shares to an investment bank. The investment bank puts those shares into a basket of AV ownership shares. Various municipal retirement funds buy shares of the basket. At this point, who owns the AV has gotten pretty complicated, and there is no substantive accountability link between the AV “owner” and its operation beyond the value of the shares.

Then a change to the underlying vehicle (which was not sold as an AV platform originally, but rather was adapted by an upfitter contractor) impairs functionality of the aftermarket add-on ADS manufactured by a company that is no longer in business. If there is a crash, who is the “operator”? Who is the “owner”? Who should pay compensation for any harm done by the AV? If the resultant ADS behavior qualifies as criminally negligent reckless driving, who should go to jail? If the answer is that nobody goes to jail and that only the state minimum insurance of, say, $25K pays out, what is the incentive to ensure that such an arrangement is acceptably safe so long as the insurance is affordable compared to the profits being made?

While the usual reply to concerns about accountability is that insurance will take care of things, recall from previous discussions that insurance and risk management can be an insufficient incentive to ensure acceptable safety, especially when coverage only meets a low state minimum insurance requirement[5] originally set for human drivers who have skin in the game for any crash.
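
As a rough back-of-envelope illustration of that gap, using the coverage figures cited in footnote [5] and a hypothetical three-victim crash with an even split of the payout (both the crash scenario and the even split are assumptions made here for illustration only):

```python
# Per-victim recovery under a per-incident insurance cap, split evenly.
# Dollar figures are from footnote [5]; the three-victim crash is hypothetical.
VALUE_OF_STATISTICAL_LIFE = 12_000_000  # roughly $12M, per footnote [5]

def per_victim_recovery(per_incident_cap: float, num_victims: int) -> float:
    """Maximum recovery per victim when a per-incident cap is split evenly."""
    return per_incident_cap / num_victims

for cap in (50_000, 1_000_000, 5_000_000):  # Kansas-style minimum up to a $5M cap
    share = per_victim_recovery(cap, num_victims=3)
    print(f"${cap:>9,} cap, 3 victims: ${share:,.0f} each "
          f"({share / VALUE_OF_STATISTICAL_LIFE:.2%} of a $12M statistical life)")
```

Run as written, the $50,000 minimum works out to under $17,000 per victim, and even a $5M cap yields well under 15% of the statistical value of one life per victim.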


[1] For a compilation of US state laws and legislative hearing materials see:        https://safeautonomy.blogspot.com/2022/02/kansas-av-regulation-bill-hearings.html

[2] Despite occasional hype to the contrary, machine learning-based systems are nowhere near achieving sentience, let alone being reasonably qualified to be a “person.”

[3] I am not a lawyer (IANAL/TINLA), so this is a lay understanding of the rules that apply and nothing in this should be considered as legal advice.

[4] In several states an ADS is automatically granted a driver’s license even though it is not a person. It might not even be possible to take that license away.

[5] IIHS/HLDI keeps a list of autonomous vehicle laws including required insurance minimums. The $1M to $5M numbers fall short of the $12M statistical value of a human life, and are typically per incident (so multiple victims split that maximum). In other states the normal state insurance requirement can apply, which can be something like a maximum of $50,000 per incident and might permit self-insurance by the AV company, as is the case in Kansas: https://insurance.kansas.gov/auto-insurance/ This insurance maximum payout requirement is less than the cost of a typical AV. In practice it might be the case that victims are limited to recovering insurance plus the scrap value of whatever is left of the AV after a crash, with everyone else being judgment-proof.