Showing posts from January, 2019

How Many Operational Design Domains, Objects, and Events? Safe AI 2019 talk

Validating self-driving cars requires so much more than just "geo-fencing" if you want to make the problem tractable. My Safe AI 2019 paper and presentation explain and illustrate why this is the case.

Paper: How Many Operational Design Domains, Objects, and Events?
Phil Koopman & Frank Fratrik
(Slides available for download; see below for the Slideshare version.)

Abstract: A first step toward validating an autonomous vehicle is deciding what aspects of the system need to be validated. This paper lists factors we have found to be relevant in the areas of operational design domain, object and event detection and response, vehicle maneuvers, and fault management. While any such list is unlikely to be complete, our contribution can form a starting point for a publicly available master list of considerations to ensure that autonomous vehicles …
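To make the "master list" idea concrete, here is a toy sketch of how such a checklist might be represented and checked against a validation plan. The four category names follow the paper's four areas, but the individual factors shown are hypothetical examples for illustration, not the paper's actual list.

```python
# Hypothetical illustration: the categories match the paper's four areas,
# but these specific factors are invented examples, not the paper's list.
ODD_CHECKLIST = {
    "operational design domain": ["roadway type", "weather", "lighting"],
    "object and event detection and response": ["pedestrians", "animals", "debris"],
    "vehicle maneuvers": ["lane change", "unprotected left turn"],
    "fault management": ["sensor dropout", "actuator degradation"],
}

def uncovered(claimed_coverage):
    """Return checklist factors not yet addressed by a validation plan."""
    return {
        area: [f for f in factors if f not in claimed_coverage.get(area, [])]
        for area, factors in ODD_CHECKLIST.items()
    }
```

For example, a plan that only covers "sensor dropout" would still show "actuator degradation" as an open gap under fault management, which is the point of keeping the list explicit rather than implicit in a geo-fence.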

How Road Testing Self-Driving Cars Gets More Dangerous as the Technology Improves

Safe road testing of autonomous vehicle technology assumes that human "safety drivers" will be able to prevent mishaps. But humans are notoriously bad at supervising autonomy. Ensuring that road testing is safe requires designing the test platform to have high "supervisability." In other words, it must be easy for a human to stay in the loop and compensate for autonomy errors, even when the autonomy gets pretty good and the supervisor job gets pretty boring. This excerpt from a draft paper explains the concept and why it matters. (Update: full paper here: )

Figure 1. An essential observation regarding self-driving car road testing is that it relies upon imperfect human responses to provide safety. There is some non-zero probability that the supervisor (a "safety driver") will not react in a timely fashion, and some additional probability that the supervisor will react …
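The core arithmetic behind this observation can be sketched in a few lines. The model and all of the numbers below are hypothetical, chosen only to illustrate the effect: as autonomy improves, failures become rarer, but a bored supervisor's probability of missing a failure typically rises, so the overall mishap rate need not improve.

```python
def road_test_mishap_rate(autonomy_failures_per_hour, p_supervisor_miss):
    """Expected mishaps per hour of road testing: autonomy failures
    that the human safety driver fails to catch in time.
    (Simplified model for illustration; assumes independence.)"""
    return autonomy_failures_per_hour * p_supervisor_miss

# Hypothetical numbers: an early, flaky system keeps the driver alert,
# while a mature system rarely fails and the driver grows complacent.
early = road_test_mishap_rate(1.0, 0.01)    # frequent failures, alert driver
mature = road_test_mishap_rate(0.05, 0.5)   # rare failures, complacent driver
# With these illustrative inputs, the mature system's mishap rate is
# actually worse, which is the counterintuitive point of the post.
```

The design goal of "supervisability" is to keep the miss probability low even as the failure rate drops, so the product keeps shrinking instead of stalling or growing.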