Monday, July 22, 2019

Autonomous Vehicle Testing Safety Needs More Transparency

https://pxhere.com/en/photo/41532

Last week there were two injuries involving human-supervised autonomous test shuttles on different continents, with no apparent connection other than random chance. As deployment of this work-in-progress technology scales up in public, we can expect more high-profile accidents. Fortunately, this time nobody was killed or suffered life-altering injuries. (But we still need to find out what actually happened.) And to be sure, human-driven vehicles are far from accident-free. But what about next time?

The bigger issue for the industry is: will the next autonomous vehicle testing mishap be due to rare and random problems that are within the bounds of reasonable risk, or to safety issues that could and should have been addressed beforehand?

Public trust in autonomous vehicle technology has already eroded in the past year. Each new mishap has the unfortunate potential to make that situation worse, regardless of the technical root cause. While no mode of transportation is perfectly safe, it's important that the testing of experimental self-driving car technology not expose the public to reasonably avoidable risk. And it's equally important that the public's perception match the actual risk.

Historically the autonomous vehicle industry has operated under a cloak of secrecy. As we've seen, that can lead to boom and bust cycles of public perception, with surges of optimism followed by backlash after each publicized accident. But in fairness, if there is no information about public testing risk other than hype about an accident-free far-flung future, what is the public supposed to think? Self-driving cars won't be perfect. The goal is to make them better than the current situation. One hopes that along the way things won't actually get worse.

Some progress in public safety disclosure has been made, albeit with low participation rates. One of the two vehicles involved in injuries this past week has a public safety report available. The other does not. In fact, a significant majority of testing organizations have not taken the basic step of making a Voluntary Safety Self-Assessment report available to NHTSA. And to be clear, that disclosure process is more about explaining progress toward production maturity rather than the specific topic of public testing safety.

The industry needs to do better at providing transparent, credible safety information while testing this still-experimental technology. Long term public education and explanation are important. But the more pressing need revolves around what's happening on our roads right now during testing operations. That is what is making news headlines, and is the source of any current risk.

At some point either autonomous vehicle testers are actually doing safe, responsible public operations or they aren't. If they aren't, that is bound to catch up with them as operations scale up. From the point of view of a tester:

- It's a problem if you can't explain to yourself why you are acceptably safe in a methodical, rigorous way. (In that case, you're probably unsafe.)
- It's a problem if you expect human safety drivers to perform with superhuman ability. They can't.
- It's a problem if you aren't ready to explain to authorities why you are still acceptably safe after a mishap.
- It's a problem if you can't explain to a jury that you used reasonable care to ensure safety. 
- It's a problem if your company's testing operations get sidelined by an accident investigation.
- It's a problem for mishap victims if accidents occur that were reasonably avoidable, especially involving vulnerable road users. 
- It's a problem for the whole industry if people lose trust in the technology's ability to operate safely in public areas. 

Therefore: 
- It's a problem if you can't explain to the public -- with technical credibility -- why they should believe you are safe. Preferably before you begin testing operations on public roads.

Some companies are pursuing safety transparency more aggressively than others. Some are building safety cases that contain detailed chains of reasoning and evidence to ensure that they have all the bases covered for public road testing. Others might not be. But really, we don't know.

And that is an essential part of the problem -- we really have no idea who is being diligent about safety. Eventually the truth will out, but bad news is all too likely to come in the form of deaths and injuries. We as an industry need to keep that from happening.
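
For readers who haven't encountered the term, a safety case is a structured argument: a top-level safety claim broken into sub-claims, each ultimately backed by evidence. Below is a minimal sketch of that structure; the claim wording and evidence names are hypothetical, invented for illustration rather than drawn from any real company's safety case.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A node in a safety case: a claim supported by direct
    evidence and/or a chain of sub-claims."""
    statement: str
    evidence: List[str] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim holds only if it has direct evidence or all of
        # its sub-claims hold; an undeveloped leaf (no evidence,
        # no sub-claims) means the argument has a hole in it.
        if not self.evidence and not self.subclaims:
            return False
        return all(c.is_supported() for c in self.subclaims)

# Hypothetical fragment of a road-testing safety case.
road_testing = Claim(
    "On-road testing poses acceptable risk to the public",
    subclaims=[
        Claim("Safety drivers remain attentive and effective",
              evidence=["driver monitoring logs", "training records"]),
        Claim("The vehicle stays within its validated ODD",
              evidence=[]),  # undeveloped -- the case fails here
    ],
)
print(road_testing.is_supported())  # False until evidence exists
```

The point of the structure is that an unsupported leaf claim is immediately visible, which is exactly the kind of gap a diligent tester needs to find before the public does.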

It only takes one severe mishap at a single company to dramatically hurt the entire industry. While bad luck can happen to anyone, it's more likely to happen to a company that might be cutting corners on safety to get to market faster.

The days of "trust us, we're smart" are over for the autonomous vehicle industry. Trust has to be earned. Transparency backed with technical credibility is a crucial first step to earning trust. The industry has been given significant latitude to operate on public roads, but that comes with great responsibility and a need for transparency regarding public safety.

Safety should truly come first. To that end, every company testing on public roads should immediately make a transparent, technically credible statement about its road testing safety practices. A simple "we have safety drivers" isn't enough. These disclosures can form a basis for uniform practices across the industry to make sure that this technology survives its adolescence and has a chance to reach maturity and deliver the benefits that maturity promises.

Author:
Dr. Philip Koopman is co-founder and CTO of Edge Case Research, which helps companies make their autonomous systems safer. He is also a professor at Carnegie Mellon University. He is a principal technical author of the UL 4600 draft standard for autonomous system safety, and has been working on self-driving car safety for more than 20 years. Contact: pkoopman@ecr.ai

Monday, July 1, 2019

Autonomous Vehicle Fault and Failure Management

When you build an autonomous vehicle you can't count on a human driver to notice when something's wrong and "do the right thing." Here is a list of faults, system limitations, and fault responses AVs will need to get right. Did you think of these?

https://www.godlikeproductions.com/sm/custom/z/l/mppijgrr.jpeg


System Limitations:

Sometimes the issue isn't that something is broken, but rather simply that all vehicles have limitations. You have to know your system's limitations.
  • Current capabilities of sensors and actuators, which can depend upon the operational state space.
  • Detecting and handling a vehicle excursion outside the operational state space for which it was validated, including all aspects of {ODD, OEDR, Maneuver, Fault} tuples (a runtime envelope-check sketch follows this list).
  • Desired availability despite fault states, including any graceful degradation plan, and any limits placed upon the degraded operational state space.
  • Capability variation based on payload characteristics (e.g. passenger vehicle overloaded with cargo, uneven weight distribution, truck loaded with gravel, tanker half filled with liquid) and autonomous payload modification (e.g. trailer connect/disconnect).
  • Capability variation based on functional modes (e.g. pivot vs. Ackermann vs. crab steering, rear wheel steering, ABS or 4WD engaged/disengaged).
  • Capability variation based on ad-hoc teaming (e.g. V2V, V2I) and planned teaming (e.g. leader-follower or platooning vehicle pairing).
  • Incompleteness, incorrectness, corruption or unavailability of external information (V2V, V2I).
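
To make the envelope-check bullet above concrete, here is a minimal sketch of a runtime check that the vehicle is still inside its validated operational state space. The monitored dimensions, field names, and thresholds are all invented for illustration; a real system would track far more dimensions and derive the bounds from its validation evidence.

```python
from dataclasses import dataclass

@dataclass
class OperationalState:
    """Snapshot of the conditions the vehicle is operating in.
    Fields are illustrative; a real system tracks many more."""
    speed_mph: float
    visibility_m: float
    road_type: str          # e.g., "divided_highway", "urban"
    active_faults: int      # count of currently latched faults

# Validated operational state space (hypothetical bounds).
MAX_SPEED_MPH = 35.0
MIN_VISIBILITY_M = 200.0
VALIDATED_ROAD_TYPES = {"divided_highway", "suburban_arterial"}

def within_validated_envelope(s: OperationalState) -> bool:
    """Return True only if every monitored dimension is inside
    the envelope the system was validated for. Any excursion
    should trigger a degraded mode or a safe stop."""
    return (s.speed_mph <= MAX_SPEED_MPH
            and s.visibility_m >= MIN_VISIBILITY_M
            and s.road_type in VALIDATED_ROAD_TYPES
            and s.active_faults == 0)

state = OperationalState(28.0, 150.0, "suburban_arterial", 0)
if not within_validated_envelope(state):
    print("Excursion detected: initiate fallback maneuver")
```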

System Faults:
  • Perception failures, including transient and permanent faults in classification and pose of objects (a transient-vs-permanent classification sketch follows this list).
  • Planning failures, including those leading to collision, unsafe trajectories (e.g., rollover risk), and dangerous paths (e.g., roadway departure).
  • Vehicle equipment operational faults (e.g., blown tire, engine stall, brake failure, steering failure, lighting system failure, transmission failure, uncommanded engine power, autonomy equipment failure, electrical system failure, vehicle diagnostic trouble codes).
  • Vehicle equipment maintenance faults (e.g., improper tire pressure, bald tires, misaligned wheels, empty sensor cleaning fluid reservoir, depleted fuel/battery).
  • Operational degradation of sensors and actuators including temporary (e.g., accumulation of mud, dirt, dust, heat, water, ice, salt spray, smashed insects) and permanent (e.g., manufacturing imperfections, scratches, scouring, aging, wear-out, blockage, impact damage).
  • Equipment damage including detecting and mitigating catastrophic losses (e.g., vehicle collisions, lightning strikes, roadway departure), minor losses (e.g., sensor knocked off, actuator failures), and temporary losses (e.g., misalignment due to bent support bracket, loss of calibration).
  • Incorrect, missing, stale, and inaccurate map data.
  • Training data incompleteness, incorrectness, known bias, or unknown bias.
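
Several bullets above distinguish transient from permanent faults. One common runtime pattern for drawing that distinction is a counting debounce filter that lets brief glitches decay away but latches persistent faults; the sketch below is illustrative, with thresholds chosen arbitrarily rather than taken from any standard.

```python
class FaultDebouncer:
    """Classifies a monitored fault as transient or permanent
    using an up/down counter: brief glitches decay away, while
    a persistent fault crosses the latch threshold."""

    def __init__(self, latch_threshold: int = 5):
        self.count = 0
        self.latched = False
        self.latch_threshold = latch_threshold

    def update(self, fault_present: bool) -> str:
        if self.latched:
            return "permanent"        # stays latched until repair
        if fault_present:
            self.count += 1
            if self.count >= self.latch_threshold:
                self.latched = True
                return "permanent"
            return "transient"
        self.count = max(0, self.count - 1)  # decay on healthy samples
        return "healthy" if self.count == 0 else "recovering"

# A two-sample glitch decays away; a sustained fault latches.
d = FaultDebouncer()
readings = [True, True, False, False, True, True, True, True, True]
print([d.update(r) for r in readings])
```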

Fault Responses:

Some of these faults and limitations fall within the purview of safety standards that apply to non-autonomous functions. However, a unified system-level view of fault detection and mitigation can be useful to ensure that no faults are left unaddressed. More importantly, to the degree that safety standards have taken credit for a human driver participating in fault mitigation, those fault mitigation obligations transfer to the autonomy. (A sketch of one possible unified response policy follows this list.)
  • How the system behaves when encountering an exceptional operational state space, experiencing a fault, or reaching a system limitation.
  • Diagnostic gaps (e.g., latent faults, undetected faults, undetected faulty redundancy).
  • How the system re-integrates failed components, including recovery from transient faults and recovery from repaired permanent faults during operation and/or after maintenance.
  • Response and policies for prioritizing or otherwise determining actions in inherently risky or certain-loss situations.
  • Withstanding an attack (system security, compromised infrastructure, compromised other vehicles), and deterring inappropriate use (e.g., malicious commands, inappropriately dangerous cargo, dangerous passenger behavior).
  • How the system is updated to correct functional defects, security defects, safety defects, and addition of new or improved capabilities.
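
As a concrete illustration of the first response bullet, one way to structure system-level behavior is an explicit degradation policy that maps detected conditions to the least-degraded mode that is still safe. The mode names and escalation rules below are assumptions for illustration, not a prescription.

```python
from enum import Enum

class Mode(Enum):
    FULL_OPERATION = 0   # everything nominal
    DEGRADED = 1         # reduced speed / restricted maneuvers
    MINIMAL_RISK = 2     # pull over and stop safely
    EMERGENCY_STOP = 3   # immediate controlled stop in lane

def select_mode(envelope_ok: bool, redundancy_ok: bool,
                actuation_ok: bool) -> Mode:
    """Escalate to the least-degraded mode consistent with the
    detected condition; never de-escalate without re-validation."""
    if not actuation_ok:
        return Mode.EMERGENCY_STOP   # can't guarantee a planned pullover
    if not redundancy_ok:
        return Mode.MINIMAL_RISK     # lost a backup: exit service safely
    if not envelope_ok:
        return Mode.DEGRADED         # outside validated envelope: restrict
    return Mode.FULL_OPERATION

# Example: a latched sensor fault removes redundancy.
print(select_mode(envelope_ok=True, redundancy_ok=False,
                  actuation_ok=True))   # Mode.MINIMAL_RISK
```
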
Is there anything we missed?

(This is an excerpt of Koopman, P. & Fratrik, F., "How many operational design domains, objects, and events?" SafeAI 2019, AAAI, Jan 27, 2019.)