Showing posts from July, 2018

Putting image manipulations in context: robustness testing for safe perception

UPDATE 8/17 -- added presentation slides! I'm very pleased to share a publication from our NREC autonomy validation team that explains how computationally cheap image perturbations and degradations can expose catastrophic perception brittleness. You don't need adversarial attacks to foil machine learning-based perception -- straightforward image degradations such as blur or haze can cause problems too. Our paper "Putting image manipulations in context: robustness testing for safe perception" will be presented at IEEE SSRR, August 6-8. Here's a submission preprint:

Abstract -- We introduce a method to evaluate the robustness of perception systems to the wide variety of conditions that a deployed system will encounter. Using person detection as a sample safety-critical application, we evaluate the robustness of several state-of-the-art perception systems to a variety
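To make the idea concrete, a degradation of the kind the paper exploits can be as simple as a blur filter applied before re-running a detector. The sketch below is purely illustrative -- a minimal pure-Python box blur, not the NREC team's actual perturbation tooling:

```python
def box_blur(image, radius=1):
    """Apply a simple box blur to a 2D grayscale image (list of lists).

    Illustrative stand-in for the cheap image degradations (blur, haze,
    noise) discussed in the paper -- not the actual NREC tooling.
    """
    h, w = len(image), len(image[0])
    blurred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            # Average over the (2*radius+1) x (2*radius+1) neighborhood,
            # clipping at the image borders.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            blurred[y][x] = total // count
    return blurred

# A sharp vertical edge gets smeared out by the blur:
sharp = [[0, 0, 255, 255] for _ in range(4)]
soft = box_blur(sharp, radius=1)
```

Running a detector on both `sharp` and `soft` and comparing detections is the basic robustness-testing loop: if a small, physically plausible degradation flips a person detection on or off, that is a brittleness finding.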

Pennsylvania's Autonomous Vehicle Testing Guidelines

PennDOT has just issued new Automated Vehicle Testing Guidance: July 2018 PennDOT AV Testing Guidance (link to Acrobat document). (Also, there is a press release.) It's been only three months since the PA AV Summit, at which PennDOT took up a challenge to improve AV testing policy. Today PennDOT released a significantly revised policy as promised, and it looks like they've been listening to safety advocates as well as AV companies. At a high level, there is a lot to like about this policy. It makes clear that a written safety plan is required, and suggests addressing, one way or another, the big three items I've proposed for AV testing safety:
- Make sure that the driver is paying attention
- Make sure that the driver is capable of safing the vehicle in time when something goes wrong
- Make sure that the Big Red Button (disengagement mechanism) is actually safe
There are a number of items in the guidance that look

Road Sign Databases and Safety Critical Data Integrity

It's common for autonomous vehicles to use road map data, sign data, and so on in their operation. But what if that data has a problem? While some data is mapped by the vehicle manufacturers themselves, they might be relying on other data as well. For example, some companies are encouraging cities to build databases of local road signs. It's important to understand the integrity of that data. What if a stop sign is missing from the database, and the vehicle decides to believe the database when it's not sure whether a stop sign in the real world is valid? (Perhaps the real-world stop sign is hard to see due to sun glare, and the vehicle just goes with the database.) If the vehicle blows through a stop sign because it's missing from the database, whose fault is that? And what happens next? Hopefully such databases will be highly
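One conservative way to handle the disagreement scenario above -- purely a hypothetical sketch, not any real vehicle's logic -- is to treat the database and live perception as independent sources and stop if either one reports a sign, so a missing database entry alone cannot cause the vehicle to run a real stop sign:

```python
def should_stop(db_says_stop_sign, perception_confidence, threshold=0.5):
    """Hypothetical fail-safe fusion of map data and live perception.

    Stop if the database lists a stop sign OR the live detector is
    reasonably confident it sees one. Names and the 0.5 threshold are
    illustrative assumptions, not any deployed system's design.
    """
    return db_says_stop_sign or perception_confidence >= threshold

# Sun glare: the detector is unsure (0.4), but the sign is in the database.
assert should_stop(True, 0.4) is True    # defer to the database
# Missing database entry, but the detector clearly sees a sign.
assert should_stop(False, 0.9) is True   # defer to perception
# Neither source reports a sign.
assert should_stop(False, 0.1) is False
```

The design choice here is asymmetric on purpose: a false stop is an annoyance, while a missed stop is a potential crash, so disagreement between sources resolves toward the safer action.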

A Safe Way to Apply FMVSS Principles to Self-Driving Cars

As the self-driving car industry works to create safer vehicles, it is facing a significant regulatory challenge. Complying with existing Federal Motor Vehicle Safety Standards (FMVSS) can be difficult or impossible for advanced designs. For conventional vehicles, the FMVSS structure helps ensure a basic level of safety by testing some key safety capabilities. However, it might be impossible to run these tests on advanced self-driving cars that lack a brake pedal, steering wheel, or other components required by the test procedures. While there is industry pressure to waive some FMVSS requirements in the name of hastening progress, doing so is likely to result in safety problems. I'll explain a way out of this dilemma based on the established technique of safety cases. In brief, automakers should create an evidence-based explanation of why they achieve the intended safety goals of current FMVSS regulations even if they can't perform the tests as written. This does not require

AVS 2018 Panel Session

It was great to have the opportunity to participate in a panel on autonomous vehicle validation and safety at AVS in San Francisco this past week. Thanks especially to Steve Shladover for organizing such an excellent forum for discussion. The panel discussion was the super-brief version. If you want to dig deeper, you can find much more complete slide decks attached to other blog posts:
- Safety Validation and Edge Case Testing
- The Heavy Tail Ceiling Problem for AV Testing
- AutoSens Slides on AV Safety Approaches
- SAE WC Presentation on AV Safety Validation
The first question asked each panelist to spend 5 minutes talking about the types of things we do for validation and safety. Here are my slides from that very brief opening statement: AVS 2018 Slides from Philip Koopman

Robustness Testing of Autonomy Software (ICSE 2018)

Our Robustness Testing team at CMU/NREC presented a great paper at ICSE on what we learned over five years of the Automated Stress Testing for Autonomy Systems (ASTAA) project, which spanned 11 projects and found 150 significant bugs. Paper at CMU. Slides at CMU: Robustness Testing of Autonomy Software from Philip Koopman. The team members contributing to the paper were: Casidhe Hutchison, Milda Zizyte, Patrick E. Lanigan, David Guttendorf, Michael Wagner, Claire Le Goues, and Philip Koopman. Special thanks to Cas for doing the heavy lifting on the paper, and to Milda for the conference presentation.