Monday, April 9, 2018

Ensuring The Safety of On-Road Self-Driving Car Testing (PA AV Summit Talk Slides)

This is the slide version of my op-ed on how to make self-driving car testing safe.

The take-away is to create a test vehicle with a written safety case that addresses these topics (a minimal checklist sketch follows below):
  • Show that the safety driver is paying adequate attention
  • Show that the safety driver has time to react if needed
  • Show that AV disengagement/safing actually works when things go wrong
(An abbreviated version was also presented in April 2018 at the US-China Transportation Forum in Beijing, China.)
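
As a rough illustration (not from the talk or op-ed), those three safety-case topics could be tracked as a simple checklist; the class, field, and evidence names in the sketch below are hypothetical placeholders.

    # Illustrative sketch only: a toy checklist structure for the three
    # road-test safety-case topics listed above. Evidence strings are
    # hypothetical examples, not items taken from the talk.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SafetyClaim:
        """One top-level claim in a road-test safety case."""
        claim: str                                         # what must be shown
        evidence: List[str] = field(default_factory=list)  # supporting artifacts

        def is_supported(self) -> bool:
            # A real safety case needs independent review, not just "non-empty evidence".
            return bool(self.evidence)

    road_test_safety_case = [
        SafetyClaim("Safety driver is paying adequate attention",
                    ["driver-monitoring camera logs", "attention audit reports"]),
        SafetyClaim("Safety driver has time to react if needed",
                    ["measured takeover times vs. required reaction budget"]),
        SafetyClaim("Disengagement/safing actually works when things go wrong",
                    ["fault-injection test results", "disengagement drill records"]),
    ]

    if __name__ == "__main__":
        for claim in road_test_safety_case:
            print(f"[{'supported' if claim.is_supported() else 'OPEN'}] {claim.claim}")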


7 comments:

  1. Thank you, Phil, for sharing. My colleagues from CHOP/UPenn and I have a strong interest in these topics, especially from the Human Factors side (awareness time, reaction time, confidence...)
    LoebH@email.chop.edu

  2. My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS. I also worked in commercial IT and cybersecurity.

    I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS Deepwater program post 9/11 - http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4468728.

    I am also a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force.

    It is a myth that public shadow driving is a viable method to create AVs. That process can never come close to creating an AV. You cannot drive the one trillion miles or spend over $300B to do so. You also cannot have any more casualties, especially a child or a family. The path you are on will create thousands of casualties as you move into more complex and dangerous scenarios, both because of handover and because you will have to run thousands of accident scenarios thousands of times. The solution is to use proper simulation 99% of the time.

    (Mr. Koopman is well aware of these issues and the overwhelming data that shows they are true. When asked to counter them, he avoids the topic, even removing or censoring my posts.)

    I would also like to make sure you are aware that, should you switch to 99% simulation use, you must look outside of the products in this industry to ensure you have what you need to complete all the scenarios required and are not lulled into a significant level of false confidence. There are simply too many capability gaps in the simulation systems available, especially in the cloud. Examples: vehicle, road, tire, and environmental models are not precise enough; systems have latency and timing issues; and most don't use full-motion simulators. I suggest the solution is to leverage FAA simulation testing and aerospace/DoD technology to fill those gaps.

    Much more can be found in my article here:

    Impediments to Creating an Autonomous Vehicle
    https://www.linkedin.com/pulse/impediments-creating-autonomous-vehicle-michael-dekort/

    Proof of my POV can be found here:
    Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI
    https://www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort

    Replies
    1. This talk is about how to ensure that the testing that is being done on public roads is safe. It does NOT say whether or not that testing will prove safety. Rather, it acknowledges that such testing is happening and can be dangerous if done improperly. I.e., this talk is about making the TESTING safe, not the ultimate autonomous vehicles. Hence, the title.

      I have given numerous talks and written several papers that make it crystal clear that on-road testing will not ensure AV safety, and that simulation is essential for safety. However, that simulation will need to go far beyond current aerospace-style simulation capabilities, and in fact that is precisely the main emphasis of my research. HIL simulation can help with validating a software simulation stack, but will never provide enough testing capacity to get the job done all by itself.

      If you want to see a discussion of a simulation approach that exactly addresses your comments, look at my SAE paper from earlier this year.
      https://safeautonomy.blogspot.com/2018/04/saewc18.html




  3. Mr. Koopman

    Who tests an unsafe and untenable method?

    Saying simulation is necessary without qualifying its level of use, the level of use of public shadow driving, and the fact that shadow driving cannot be made safe in critical conditions enables unsafe practices. It’s so open ended you cannot be wrong. Don’t you have an obligation as a safety expert to know this and be definitive? What is your actual belief regarding what I stated? Is there or isn’t there a time window where the proper levels of situational awareness cannot be provided? Do you think that’s up to 5 seconds in some scenarios? 10 seconds? More? What ratio of public shadow driving to simulation do you believe is accurate? Do you think accident scenarios should be run and rerun in the public domain? Are NASA, Missy Cummings, etc. wrong when they say L3/handover should be skipped? When you say simulation cannot handle all of it, what does that mean? 10%, 50%, 75%? Why not 99%? Exactly what can’t be simulated? Please give me a couple of examples. Have you seen FAA Level D simulators and simulation engines in networked urban war games? Do you believe enough shadow-driving miles can be driven and redriven so that enough scenarios get stumbled on enough times to learn all the scenarios needed, especially the complex and accident scenarios?

    Regarding your reference to your paper on use of simulation.
    • You say billions of miles have to be driven. That is incorrect. It is hundreds of billions according to RAND, one trillion according to Toyota, and way too many if simple common sense is invoked (a rough sketch of the underlying mileage math follows this list). And that doesn’t even consider that there is no way the public allows thousands of accident scenarios to be run thousands of times each, causing thousands of fatalities.
    • Regarding “even perfect simulations need scenarios as inputs”: if I have to drive scenarios to create them in simulation, your statement would mean I need to drive thousands of accident scenarios to get that data? The fact is most scenarios can be created without data from driving, and virtually none need to come from shadow driving. (Having said that, there is data that is clearly needed from driving. And I am all for data from drivers who are driving. Not shadow drivers, which is the issue.)
    • “Perfect simulation is expensive.” What does that actually mean? Why does it need to be perfect? 3-sigma is not good enough? 5-sigma, 6-sigma, etc.? Why would it have to be perfect? Give me some examples, please. Beyond that, I can run scenarios in simulation that cannot EVER be done in the real world. I can rerun scenarios you may not duplicate for generations in the real world. I can run complex, dangerous, and accident scenarios that should not be run in the real world. If my imperfect simulation allows me to make those scenarios measurably safer, why wouldn’t I use less-than-perfect simulation to do so? Here is where I agree with you, though: the simulation tech in the AV and automaker industry is inadequate. Real-time is not real enough. Many vehicle, tire, and road models are not accurate enough. The thing is, the right tech has been in aerospace and DoD for 20 years.
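
    A rough sketch of the kind of mileage math behind those RAND-style estimates (an illustration only: it assumes fatalities follow a Poisson process and uses the roughly 1.09 fatalities per 100 million miles human baseline from the RAND analysis):

      # Rough illustration of why statistical demonstration by driving alone
      # needs enormous mileage. Assumes fatalities follow a Poisson process;
      # the human baseline of 1.09 fatalities per 100 million miles is the
      # figure used in the RAND "Driving to Safety" analysis.
      import math

      HUMAN_FATALITY_RATE = 1.09 / 100e6  # fatalities per mile

      def miles_for_zero_failure_demo(rate: float, confidence: float = 0.95) -> float:
          """Miles needed, with zero observed fatalities, to show the true rate
          is below `rate` at the given confidence: exp(-rate * n) <= 1 - confidence."""
          return -math.log(1.0 - confidence) / rate

      if __name__ == "__main__":
          n = miles_for_zero_failure_demo(HUMAN_FATALITY_RATE)
          print(f"~{n / 1e6:.0f} million failure-free miles just to match the human rate")
          # Demonstrating that an AV is meaningfully *better* than humans (rather
          # than merely "not worse", with zero failures observed) pushes the RAND
          # estimates into the billions, and in some cases hundreds of billions,
          # of miles, which is part of the motivation for heavy use of simulation.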

    Replies
    1. I don't believe I've ever expressed a public opinion on whether the driver handoff problem can be solved, or whether L3 is a good idea (especially since there is so much incorrect characterization of what L3 really means). If people say they've solved it and have evidence, good for them. If people say it's impossible, then they should be open to hearing that someone has figured it out (and has data to back that up). The jury is still out.

      This paper was written a year ago. I have already addressed a number of your points in my other writings and presentations that are on this very same blog. In fact, I thought I answered some of these questions in the paper (not necessarily on the slides, due to time constraints). Some other points are reasonable, and I just haven't gotten to them yet due to limited time for writing. If you want to explain how you plan to solve these difficult problems, you should do so in a public, accessible forum subject to peer review.

      Other points I don't agree with. In particular aerospace/DoD simulation is good, but not enough. (How do I know? I work extensively with organizations who are expert in that area. So clearly there is room for disagreement on this point.)

      If you have a way to make things work that I've missed you should spell it out in more detail. If you want to get more traction with your message I suggest refereed venues such as SAE World Congress (WCX) session AE403 to get more visibility and credibility for your thoughts.

      Also, you should look up PEGASUS if you are not familiar with that work. They have been going down the scenario taxonomy and enumeration path for many years.

    2. Regarding this “I don't believe I've ever expressed a public opinion on whether the driver handoff problem can be solved, or whether L3 is a good idea (especially since there is so much incorrect characterization of what L3 really means)”. That is my point. Why haven’t you expressed an informed opinion? Isn’t that your professional and ethical responsibility if you are going to write about and leverage your safety credentials? Especially as an IEEE Barus Ethics Award recipient? Shouldn’t you be leading the charge?

      Regarding the handover issues being resolved: how have people said they solved all, or even the most critical, handover situational awareness issues? While I agree monitoring and notification systems help, they in no way remedy the most critical of scenarios. What system, or monitoring and notification system, has solved those critical scenarios, especially in under 6 seconds?

      Regarding aerospace/DoD simulation technology not being enough. Exactly what areas lag? Have you seen a DoD urban war game with networked FAA Level D simulation and simulators? What cannot be done in simulation that is worth the risk of losing lives to public shadow driving or trying to drive a trillion miles?

      I am aware of Pegasus, thank you. It was formed by companies that do not get any of these issues either. The issue is not that no one is creating scenarios or using simulation. The issue is the depth, breadth, and accuracy of that use.

      As for my spelling out the solution in detail, I have done so in many articles and posts.

      I will repost some here:

      Impediments to Creating an Autonomous Vehicle
      https://www.linkedin.com/pulse/impediments-creating-autonomous-vehicle-michael-dekort/

      Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI
      https://www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort

      The Dangers of Inferior Simulation for Autonomous Vehicles
      https://www.linkedin.com/pulse/dangers-inferior-simulation-autonomous-vehicles-michael-dekort/

    3. My opinion is that all safety-critical automotive software should have a written explanation as to why it is appropriately safe, backed by evidence, and subject to independent audit for completeness and correctness. In other words, it should be like other safety critical domains. That includes, but is not limited to, L3 driving and the related hand-off issue. Since such an approach is not currently required by regulators, I spend much of my time helping various companies improve their safety, educating policymakers, and doing extensive travel to advocate for safer automotive software across a wide range of stakeholders.

      I have read your various articles over the past months (I don't think these links work at the moment, but they look familiar. Perhaps you should post them on some other more public and more enduring forum.) They make some valid points in general as to problems that at this point are becoming acknowledged across the industry. (And thanks for contributing to that, along with others.) But they did not convince me to abandon the path I'm taking. It's OK for us to disagree, because a variety of opinions is healthy, and there can be multiple valid approaches to solving complex and difficult problems in this area.

      Engineers can disagree while remaining ethical. Preferably they engage in respectful debate on important points, and focus their attention on making the world a better place. I have found from personal experience that the debate works better when both sides are saying "here is a specific, concrete way to improve things" and the response is "that was good, but here is an even better approach" rather than negative back-and-forth comments or blanket dismissive statements regarding other people's work, ethics, or world-view. Everyone has something to offer. The way to succeed is to find the pearl in the oyster and feature it as a step forward.

      At this point this discussion has used up all the time I can afford to spend on it. So I won't be approving more posts on this thread. I again encourage you to publish in more visible and peer reviewed venues so that your message can be appropriately included in the public debate regarding this technology.


