Other Autonomous Vehicle Safety Argument Observations

Other AV Safety Issues:
We've seen some teams get these issues right, and some get them wrong. If you're trying to ensure your autonomous vehicle is safe, don't make the mistakes below.

Defective disengagement mechanisms. Generally this means that an arbitrary fail-active autonomy failure can prevent a human supervisor from successfully disengaging the autonomy. As a concrete example, a system might read the state of the disengagement activation mechanism (the “big red button”) as an I/O device fed directly into the primary autonomy computer rather than using an independent safing mechanism. This is a special case of a single point of failure in the form of the autonomy computer.
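As an illustration, here is a minimal sketch (with hypothetical class and variable names, not any particular vehicle's design) contrasting a disengagement button routed through the autonomy computer with one wired to an independent safing channel:

```python
# Hypothetical sketch contrasting two disengagement architectures.

class AutonomyComputer:
    """Primary autonomy computer; assume it can fail arbitrarily (fail-active)."""
    def __init__(self):
        self.failed = False  # stand-in for an arbitrary fail-active failure

    def read_button_and_disengage(self, button_pressed: bool) -> bool:
        # PROBLEM: if the autonomy computer has failed, the button press is
        # never acted upon -- the computer is a single point of failure.
        if self.failed:
            return False
        return button_pressed


class IndependentSafingRelay:
    """Independent safing channel: direct path from button to actuation cutoff."""
    def disengage(self, button_pressed: bool) -> bool:
        # This path does not depend on autonomy computer health, so a
        # fail-active autonomy failure cannot block disengagement.
        return button_pressed


autonomy = AutonomyComputer()
autonomy.failed = True                              # arbitrary autonomy failure
print(autonomy.read_button_and_disengage(True))     # False: disengage blocked
print(IndependentSafingRelay().disengage(True))     # True: safing still works
```

In the first architecture, any fail-active autonomy failure also disables the disengagement path; in the second, disengagement works regardless of autonomy computer health.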

Assuming perception failures are independent. Some arguments assume independent failures of multiple perception modes. In practice, however, perception failures can be correlated by common causes such as bad weather or an unusual-looking object that confuses multiple sensing modes at once. While there is clearly utility in creating a safety case for the non-perception parts of an autonomous vehicle, one must argue rather than assume the safety of perception to create a credible safety case at the vehicle level.
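To see why the independence assumption matters, consider a back-of-the-envelope sketch. All of the probabilities below are made-up numbers for illustration, not measured failure rates:

```python
# All probabilities here are hypothetical, for illustration only.
p_camera = 1e-4   # assumed per-frame miss rate of the camera pipeline
p_lidar = 1e-4    # assumed per-frame miss rate of the lidar pipeline

# If failures were truly independent, the joint miss rate would be tiny:
p_joint_independent = p_camera * p_lidar
print(f"Assumed independent: {p_joint_independent:.0e}")  # 1e-08

# A common cause (e.g., dense fog) can degrade both modes at once:
p_fog = 1e-3              # assumed fraction of operating time in dense fog
p_both_miss_in_fog = 0.5  # assumed chance both pipelines miss, given fog
p_joint_correlated = (p_fog * p_both_miss_in_fog
                      + (1 - p_fog) * p_joint_independent)
print(f"With common cause:   {p_joint_correlated:.0e}")   # ~5e-04
```

Under these assumed numbers the correlated joint failure rate is tens of thousands of times worse than the independence assumption suggests, which is why independence must be argued with evidence rather than assumed.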

Requiring perfect human supervision of autonomy. Humans are well known to struggle with sustained vigilance when assigned passive monitoring tasks. Koopman and Latronico (2019) cover this topic in more detail as it relates to autonomous vehicle road testing safety.

Dismissing a potential fault as “unrealistic” without supporting data. For example, argumentation might state that a lightning strike on a moving vehicle is unrealistic or could not happen in the “real world,” despite data to the contrary (e.g. Holle 2008). To be sure, this does not mean that something like a lightning strike must be completely mitigated by keeping the vehicle fully operational. Rather, such faults must be considered in the risk analysis. Dismissing hazards based on a subjective assertion that they are “unrealistic,” without performing a risk analysis, results in a safety case with insufficient evidence.
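A simple exposure calculation shows why “rare for one vehicle” does not mean “unrealistic for a fleet.” The rates below are placeholders chosen for illustration, not data from Holle (2008) or any other source:

```python
# All values are hypothetical placeholders, not measured data.
strike_rate_per_hour = 1e-7          # assumed lightning strikes per vehicle-hour
fleet_size = 100_000                 # assumed number of deployed vehicles
hours_per_vehicle_per_year = 1_000   # assumed annual operating hours per vehicle

expected_strikes = strike_rate_per_hour * fleet_size * hours_per_vehicle_per_year
print(f"Expected strikes across the fleet per year: {expected_strikes:.1f}")
# -> 10.0: an event negligible for any single vehicle becomes an expected
#    occurrence at fleet scale, so it belongs in the risk analysis.
```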

Using multi-channel comparison approaches for autonomy. In general, autonomy algorithms are nondeterministic, sensitive to initial conditions, and have many acceptable (or at least safe) behaviors for any given situation. Architectural approaches based on voting among diverse autonomy algorithms therefore run into the problem of deciding whether the outputs are close enough to be considered valid. Averaging and other similar approaches are not necessarily appropriate. As a simple example, the average of veering to the right and veering to the left to avoid an obstacle could result in hitting the obstacle dead-on.
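Here is a minimal sketch of that dead-on collision example (values are illustrative; a real planner would output trajectories, not single steering angles):

```python
# Two diverse channels each produce a safe plan to avoid an obstacle dead ahead
# (steering angle 0.0 = drive straight at the obstacle).
plan_a = +0.5  # channel A: veer right (safe)
plan_b = -0.5  # channel B: veer left (also safe)

# A naive averaging "voter" combines two safe commands into an unsafe one:
voted = (plan_a + plan_b) / 2
print(voted)  # 0.0 -> steer straight into the obstacle

# An exact-match or threshold comparison voter fares no better: the channels
# disagree substantially, yet both outputs were individually acceptable.
disagreement = abs(plan_a - plan_b)
print(disagreement > 0.1)  # True: diverse-but-safe outputs look like a fault
```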

Confusion about fault vs. failure. While there is a widely recognized terminology document for dependable system design (Avizienis et al. 2004), we have found that there is widespread confusion about the terms fault and failure in practical use. This is especially true when discussing malfunctions that are not due to a component fault, but rather to a requirements gap or an excursion from the intended operational environment. It is beyond the scope of this paper to attempt to resolve this, but we note it as an area worthy of future work and particular attention in interdisciplinary discussions of autonomy safety.

(This is an excerpt from our SSS 2019 paper: Koopman, P., Kane, A. & Black, J., "Credible Autonomy Safety Argumentation," Safety-Critical Systems Symposium, Bristol UK, Feb. 2019. Read the full text here)

  • Avizienis, A., Laprie, J.-C., Randell, B., Landwehr, C. (2004) “Basic concepts and taxonomy of dependable and secure computing,” IEEE Transactions on Dependable and Secure Computing, 1(1):11-33, 2004.
  • Holle, R. (2008) “Lightning-caused deaths and injuries in the vicinity of vehicles,” American Meteorological Society Conference on Meteorological Applications of Lightning Data, 2008.
  • Koopman, P., Latronico, B. (2019) “Safety Argument Considerations for Public Road Testing of Autonomous Vehicles,” SAE WCX, 2019.

