A gentle introduction to autonomous vehicle safety cases

I recently ran into this readable article about AV safety cases by Thomas & Vandenberg from 2019. While things have changed a bit, it is still a reasonable introduction for anyone asking "what exactly would an AV safety case look like?" A real industry-strength safety case is going to be complicated in many ways. In particular, there are many different approaches to breaking down the top-level safety goal (G1), and that choice significantly affects the rest of the case. On the other hand, all the pieces will need to be there somewhere, so choosing this high-level breakdown is more of an architectural choice (for the safety case, not necessarily the system). We do not yet have a consensus on an optimal strategy for building such safety cases, but this is not a bad starting place from safety folks who were previously at Uber ATG. Thomas & Vandenberg, "Harnessing Uncertainty in Autonomous Vehicle Safety," Journal of System Safety, Vol. 55, No. 2 (2019).
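To make the idea of a G1 breakdown concrete, here is a minimal sketch of a safety case represented as a tree of claims, where each leaf goal must eventually be backed by evidence. The goal IDs below G1 and the claim wording are hypothetical illustrations of mine, not the breakdown from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One claim in a goal-structured safety case."""
    gid: str
    claim: str
    children: list = field(default_factory=list)

# Hypothetical top-level breakdown; real safety cases differ widely.
g1 = Goal("G1", "The AV is acceptably safe to operate in its ODD")
g1.children = [
    Goal("G2", "Hazards within the ODD are identified and mitigated"),
    Goal("G3", "The system detects and responds safely to ODD exits"),
    Goal("G4", "Safety is maintained over the vehicle lifecycle"),
]

def leaves(goal):
    """Return the undeveloped (leaf) goals that still need evidence."""
    if not goal.children:
        return [goal]
    return [leaf for child in goal.children for leaf in leaves(child)]
```

Different organizations make very different choices for the children of G1; the point is only that the decomposition is a tree whose leaves bottom out in evidence, and the shape of that tree is the architectural choice discussed above.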

SEAMS Keynote talk: Safety Performance Indicators and Continuous Improvement Feedback

Abstract: Successful autonomous ground vehicles will require a continuous improvement strategy after deployment. Feedback from road testing and deployed operation will be required to ensure enduring safety in the face of newly discovered rare events. Additionally, the operational environment will change over time, requiring the system design to adapt to new conditions. The need to ensure life-critical safety is likely to limit the amount of real-time adaptation that can be relied upon. Beyond runtime responses, lifecycle safety approaches will need to incorporate significant field engineering feedback based on safety performance indicator (SPI) monitoring. A continuous monitoring and improvement approach will require a fundamental shift in the safety worldview for automotive applications. Previously, a useful fiction was maintained that vehicles were safe for their entire lifecycle when deployed, and any safety defect was an unwelcome surprise.
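A minimal sketch of what SPI-based feedback monitoring might look like in practice. The metric names and threshold values here are purely illustrative assumptions of mine, not values from the talk:

```python
# Hypothetical safety performance indicators, measured per 1000 miles
# of fleet operation, each with an illustrative alarm threshold.
SPI_THRESHOLDS = {
    "hard_braking_per_1k_miles": 5.0,
    "disengagements_per_1k_miles": 2.0,
    "near_miss_per_1k_miles": 1.0,
}

def check_spis(observed):
    """Return the SPIs that exceed their thresholds.

    A violation triggers field-engineering review and a design
    update, rather than silent runtime adaptation."""
    return {name: value
            for name, value in observed.items()
            if value > SPI_THRESHOLDS.get(name, float("inf"))}
```

The key design point is that the feedback loop closes through engineering review of SPI violations, consistent with the abstract's observation that life-critical safety limits how much real-time adaptation can be relied upon.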

ICSE keynote: Autonomous Vehicles and Software Safety Engineering

Abstract: Safety assurance remains a significant hurdle to widespread deployment of autonomous vehicle technology. The emphasis for decades has been on getting the technology to work well enough in everyday situations. However, achieving safety for these life-critical systems requires more. While safety encompasses correct operation for the mundane, it also requires special attention to mitigating the risk presented by rare but high-consequence potential loss events. In this talk I'll cover some history of autonomous vehicle development and safety at the Carnegie Mellon National Robotics Engineering Center that led over the years to the development of the ANSI/UL 4600 standard for autonomous vehicle safety. I'll also touch upon activities specific to safety engineering, why a heavy-tail distribution of rare events makes ensuring safety so difficult, why brute force road testing won't ensure safety, and the emergence of safety assurance cases.
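The brute-force testing problem can be sketched with standard Poisson statistics (this back-of-envelope calculation is my illustration, not material from the talk): observing zero mishaps in n miles only bounds the mishap rate at about 3/n per mile with 95% confidence, so demonstrating parity with a human fatality rate of roughly 1 per 100 million miles takes on the order of hundreds of millions of failure-free miles -- and any software change arguably resets the count.

```python
import math

def miles_needed(target_rate_per_mile, confidence=0.95):
    """Failure-free miles needed to bound the mishap rate at
    target_rate_per_mile with the given confidence, assuming a
    Poisson model: P(0 events in n miles) = exp(-rate * n)."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

# US human drivers: roughly 1 fatality per 100 million miles,
# implying about 3e8 failure-free test miles for a 95% bound.
miles = miles_needed(1e-8)
```

At confidence 0.95 this reduces to the familiar "rule of three" (-ln(0.05) is about 3), which is one way to see why road testing alone cannot carry the safety argument and a structured assurance case is needed.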

OTA updates won't save buggy autonomous vehicle software

There is a feeling that it's OK for software to ship with questionable quality if you have the ability to send out updates quickly. You might be able to get away with this for human-driven vehicles, but for autonomous vehicles (with no human driver responsible for safety) this strategy might collapse. Right now, companies are all pushing hard to do quick-turn Over The Air (OTA) software updates, with Tesla being the poster child of both shipping dodgy software and pushing out quick updates (not all of which actually solve the problem as intended). The ability to do quick OTAs brings a moral hazard: you might not spend much time on quality because you know you can just send another update if the first one doesn't turn out as you hoped. "There's definitely the mindset that you can fix fast so you can take a higher risk," said Florian Rohde, a former Tesla validation manager.

Maturity Levels for Autonomous Vehicle Safety

I've been on a personal journey to understand what safety really means for autonomous vehicles. As part of this I repeatedly find myself in conversations in which participants have wildly different notions of what it means to be "safe." Here is an attempt to put some structure around the discussion. An inspiration for this idea is Maslow's famous hierarchy of needs. The idea is that organizations developing autonomous vehicles have to take care of the lower levels before they can afford to attend to the higher levels. For example, if your vehicle crashes every 100 meters because it struggles to detect obstacles in ideal conditions, worrying about the nuances of lifecycle support won't get you your next funding round. To succeed as a viable at-scale company, you need to address all the levels in the AV maturity hierarchy. But in practice companies will likely climb the levels like rungs on a ladder.

Cruise Stopped by Police for Headlights Off -- Why Is This a Big Deal?

In April 2022 San Francisco police pulled over an uncrewed Cruise autonomous test vehicle for not having its headlights on. Much fun was had on social media about the perplexed officer having to chase the car a few meters after it repositioned during the traffic stop. Cruise said the repositioning was intentional behavior. They also said their vehicle "did not have its headlights on because of a human error." The traffic-stop behavior indicates that Cruise needs to do a better job of making it easy for local police to conduct traffic stops -- but that's not the main event. The real issue here is the headlights being off. Cruise said in a public statement: "we have fixed the issue that led to this." Forgive me if I'm not reassured.

ANSI/UL 4600 Version 2 (2022)

Version 2 of ANSI/UL 4600 has just been issued. This standard provides guidance on how to create and maintain autonomous vehicle safety cases that ensure acceptable safety for deployment. Since version 1 of the standard was issued in April 2020, the Standards Technical Panel members (the voting committee) and stakeholders have been suggesting clarifications, upgrades, and other improvements as part of the standard's continuous improvement process. Version 1 of the standard included chapters on: terminology, safety cases, risk assessment, interaction with humans, autonomy functions, software/system engineering processes, dependability, data/networking, verification/validation/test, tool qualification/COTS/legacy components, lifecycle concerns, maintenance, metrics, and assessment. The standard is designed to work with other safety standards such as ISO 26262 and ISO 21448 to make sure that all the bases are covered for system-level safety of autonomous vehicles.