Saturday, April 28, 2018

Slides from US-China Transportation Forum Presentation

On Thursday I had the honor of presenting to two Secretaries of Transportation at the 2018 U.S.-China Transportation Forum in Beijing, China.  (US Secretary Chao and China Secretary Yang were in the front row -- as well as a huge room full of US and China delegates.)  It was a really interesting experience, and I truly appreciate the support and hospitality shown by the organizers and both governments.  It's not often that a hard-core techie gets face time with cabinet members!

I was the US Autonomous Vehicle technical expert in one of two technology sessions.  My topic was autonomous vehicle testing safety.   I gave the short version of my PA AV Summit talk.  The slides are here for anyone who is interested in seeing how I tried to boil that message down to a 10-minute slot (with simultaneous translation into Chinese).



Wednesday, April 11, 2018

Toward a Framework for Highly Automated Vehicle Safety Validation

I'm presenting a paper on AV safety validation at the 2018 SAE World Congress.  Here's the unofficial version of the presentation and a preprint of the paper.

Toward a Framework for Highly Automated Vehicle Safety Validation
Philip Koopman & Michael Wagner
2018 SAE World Congress / SAE 2018-01-1071

Abstract:
Validating the safety of Highly Automated Vehicles (HAVs) is a significant autonomy challenge. HAV safety validation strategies based solely on brute force on-road testing campaigns are unlikely to be viable. While simulations and exercising edge case scenarios can help reduce validation cost, those techniques alone are unlikely to provide a sufficient level of assurance for full-scale deployment without adopting a more nuanced view of validation data collection and safety analysis. Validation approaches can be improved by using higher fidelity testing to explicitly validate the assumptions and simplifications of lower fidelity testing rather than just obtaining sampled replication of lower fidelity results. Disentangling multiple testing goals can help by separating validation processes for requirements, environmental model sufficiency, autonomy correctness, autonomy robustness, and test scenario sufficiency. For autonomy approaches with implicit designs and requirements, such as machine learning training data sets, establishing observability points in the architecture can help ensure that vehicles pass the right tests for the right reason. These principles could improve both efficiency and effectiveness for demonstrating HAV safety as part of a phased validation plan that includes both a "driver test" and lifecycle monitoring as well as explicitly managing validation uncertainty.

Paper Preprint:        http://users.ece.cmu.edu/~koopman/pubs/koopman18_av_safety_validation.pdf



Monday, April 9, 2018

Ensuring The Safety of On-Road Self-Driving Car Testing (PA AV Summit Talk Slides)

This is the slide version of my op-ed on how to make self-driving car testing safe.

The take-away is to create a test vehicle with a written safety case that addresses these topics:
  • Show that the safety driver is paying adequate attention
  • Show that the safety driver has time to react if needed
  • Show that AV disengagement/safing actually works when things go wrong
(An abbreviated version was also presented in April 2018 at the US-China Transportation Forum in Beijing, China.)


Sunday, April 8, 2018

What can we learn from the UK guidelines on self-driving car testing?


The UK already has a pretty good answer for how to do self-driving car testing safely. US stakeholders could learn something from it.


You can see the document for yourself at: 
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/446316/pathway-driverless-cars.pdf

As industry and various governmental organizations decide what to do in response to the tragic Tempe, Arizona pedestrian accident, it's worth looking abroad to see what others have done.  As it turns out, the UK Department for Transport issued a 14-page document in July 2015: "The Pathway to Driverless Cars: A Code of Practice for testing."   It covers test drivers, vehicle equipment, licensing, insurance, data recording, and more. So far so good, and kudos for specifically addressing the topic of test platform safety that long ago!

As I'd expect from a UK safety document, there is a lot to like.  I'm not going to try to summarize it all, but here are some comments on specific sections that are worth noting. Overall, I think that the content is useful and will be helpful to improve safety for testing Autonomous Vehicle (AV) technology on public roads.  My only criticism is that it doesn't go quite far enough in a couple places.

First, it is light on making sure that the safety process is actually performing as intended. For example, they say it's important to make sure that test drivers are not fatigued, which is good. But they don't explicitly say that you need to take operational data to make sure that the procedures intended to mitigate fatigue problems are actually resulting in alert drivers. Similarly, they say that the test drivers need time to react, but they don't require feedback to make sure that on-road vehicles are actually leaving the drivers enough time to react during operation.  (Note that this is tricky to get right: distracted drivers take longer to react, so you really need to ensure that field operations are leaving sufficient reaction-time margin.)
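To make that feedback idea concrete, here is a minimal sketch (in Python) of the kind of operational check I have in mind. Everything in it is hypothetical: the event fields, the 1.5 second budget, and the percentile threshold are my own illustrations, not values from the UK document or any real test program.

```python
# Hypothetical sketch: check that observed safety driver reaction times
# leave enough margin. Field names and thresholds are illustrative only.
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class TakeoverEvent:
    hazard_time_s: float    # when the situation first demanded intervention
    takeover_time_s: float  # when the safety driver actually took control

def reaction_margin_ok(events: list[TakeoverEvent], budget_s: float = 1.5) -> bool:
    """Pass only if the slow tail (95th percentile) of observed reaction
    times fits within the assumed budget. Averages hide distracted
    drivers; the tail is what matters for safety."""
    reaction_times = sorted(e.takeover_time_s - e.hazard_time_s for e in events)
    p95 = quantiles(reaction_times, n=20)[-1]  # 95th percentile cut point
    return p95 <= budget_s
```

The design point is that the margin check runs on real field data from every test session, not on a one-time track demonstration with an alert, well-rested driver.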

In fairness, they say "robust procedures," and for a safety person, taking data to make sure the procedures are actually working should be obvious. Nonetheless, I've found in practice that it's important to spell out the need for feedback to correct safety issues. In a high-stakes environment such as the autonomy race to market, it's only natural that the testers will be under pressure to cut corners. The only way I know of to ensure that "aggressive" technology maturation doesn't cross the line into being unsafe is continual feedback from field operations confirming that the assumptions and strategy underlying the safety plan are actually effective and working as intended. For example, you should detect and correct systemic problems with safety driver alertness long before you experience a pedestrian fatality.
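As one hypothetical illustration of what "continual feedback" could look like, here is a sketch of a trend check on fleet-wide field data. The metric, window, and limit are all invented for illustration.

```python
# Hypothetical sketch: flag a systemic alertness problem from field data
# trends, rather than blaming individual drivers after the fact.
def alertness_trend_alarm(weekly_violation_rates: list[float],
                          limit: float = 0.02,
                          window: int = 4) -> bool:
    """Alarm if the recent moving average of alertness violations per
    driving hour exceeds the limit. A rising fleet-wide rate means the
    procedures (breaks, shift caps, pairing) are failing, not one driver."""
    if len(weekly_violation_rates) < window:
        return False  # too little field data to judge a trend yet
    recent = weekly_violation_rates[-window:]
    return sum(recent) / window > limit
```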

Second, although they say it's important for the takeover mechanism to work, they don't specifically require designing it according to a suitable functional safety standard.  Again, for a safety person this should be obvious, and quite possibly it was so obvious to the authors of this document that they didn't bother mentioning it.  But again it's worth spelling out.

To be clear, any on-road testing of AV technology should be no more dangerous than the normal operation of a human-driven, non-autonomous vehicle. That's the whole purpose of having a safety driver!  But getting safety drivers to be that good in practice can be a challenge. Rather than succumb to pessimism about whether testing can actually be safe, I say let the AV developers prove that they can handle this challenge with a transparent, public safety argument.  (See also my previous posting on safe AV testing for a high-level take on things.)

The UK testing safety document is well worth considering by any governmental agency or AV company contemplating how on-road testing of AV technology should be done.



Below are some more detailed notes. The bullets are from the source document, with some informal comments after each bullet:
  • 1.3: ensure that on-road testing "is carried out with the minimum practicable risk"
This appears to be invoking the UK legal concepts of ALARP ("As Low As Reasonably Practicable") and SFAIRP ("So Far As Is Reasonably Practicable"). These are technical concepts, not intuitive ones.  You can't simply say "this ought to be OK because I think it's OK." Rather, you need to demonstrate via a rigorous engineering process that you've done everything reasonably practicable to reduce risk.
  • 3.4 Testing organisations should:
    • ... Conduct risk analysis of any proposed tests and have appropriate risk management strategies.
    • Be conscious of the effect of the use of such test vehicles on other road users and plan trials to manage the risk of adverse impacts. 
It's not OK to start driving around without having done some work to understand and mitigate risks.
  • 4.16 Testing organisations should develop robust procedures to ensure that test drivers and operators are sufficiently alert to perform their role and do not suffer fatigue. This could include setting limits for the amount of time that test drivers or operators perform such a role per day and the maximum duration of any one test period. 
The test drivers have to stay alert.  Simply setting the limits isn't enough. You have to actually make sure the limits are followed, that there isn't undue pressure for drivers to skip breaks, and in the end you have to make sure that drivers are actually alert.  Solving alertness issues by firing sleepy drivers doesn't fix any systemic problem with alertness -- it just gives you fresh drivers who will have just as much trouble staying alert as the drivers you just fired.
  • 4.20 Test drivers and operators should be conscious of their appearance to other road users, for example continuing to maintain gaze directions appropriate for normal driving. 
This appears to address the problem of other road users interacting with an AV. The theory seems to be that if for example the test driver makes eye contact with a pedestrian at a crosswalk, that means that even if the vehicle makes a mistake the test driver will intervene to give the pedestrian right of way. This seems like a sensible requirement, and could help the safety driver remain engaged with the driving task.
  • 5.3 Organisations wishing to test automated vehicles on public roads or in other public places will need to ensure that the vehicles have successfully completed in-house testing on closed roads or test tracks.
  • 5.4 Organisations should determine, as part of their risk management procedures, when sufficient in-house testing has been completed to have confidence that public road testing can proceed without creating additional risk to road users. Testing organisations should maintain an audit trail of such evidence.
You should not be doing initial development on public roads. You should be using extensive analysis and simulation to be pretty sure everything is going to work before you ever get near a public road.  On-road testing should be to check that things are OK and that there are no surprises. (Moreover, surprises should be fed back to development to avoid similar surprises in the future.)  You should have written records showing that you're doing the right amount of validation before you ever operate on public roads.
  • 5.5 Vehicle sensor and control systems should be sufficiently developed to be capable of appropriately responding to all types of road user which may typically be encountered during the test in question. This includes more vulnerable road users for example disabled people, those with visual or hearing impairments, pedestrians, cyclists, motorcyclists, children and horse-riders. 
Part of your development should include making sure the system can deal with at-risk road users.  This means there should be a minimal chance that a pedestrian or other at-risk road user will be put into danger by the AV even without safety driver intervention.  (The safety driver should be handling unexpected surprises, and not be relied upon as a primary control mechanism during road testing.)
  • 5.8 This data should be able to be used to determine who or what was controlling the vehicle at the time of an incident. The data should be securely stored and should be provided to the relevant authorities upon request. It is expected that testing organisations will cooperate fully with the relevant authorities in the event of an investigation
With regard to data recording, there should be no debate over whether the autonomy was in control at the time of the mishap.  (How can it possibly be that a developer says "we're not sure if the autonomy was in control at the time of the mishap"? Yet I've heard this on the news more than once.)  It's also important to be transparent about the role of autonomy in the moments just before any mishap.  For example, if autonomy disengages a fraction of a second before impact, it's unreasonable to just blame the human driver without a more thorough investigation.  (A minimal logging sketch appears after these notes.)
  • 5.18 Ensuring that the transition periods between manual and automated mode involve minimal risk will be an important part of the vehicle development process and one which would be expected to be developed and proven during private track testing prior to testing on public roads or other public places. 
It's really important that manual takeover by a safety driver actually works. As mentioned above, the takeover system should be designed to a suitable level of safety (e.g., according to ISO 26262).
  • 5.21 ... All software and revisions have been subjected to extensive and well documented testing. This should typically start with bench testing and simulation, before moving to testing on a closed test track or private road. Only then should tests be conducted on public roads or other public places. 
Again, testing should be used to confirm that the design is right, not as an iterative drive-fix-drive approach to gradually beating the system into proper operation via brute force road testing.
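To illustrate the data recording point in section 5.8 above, here is a minimal sketch of unambiguous control-authority logging. The record format is my own invention for illustration; a real event data recorder would be shaped by the vehicle architecture and applicable standards.

```python
# Hypothetical sketch: append-only log of every control handoff, so
# "who was driving at time T" is never in doubt after a mishap.
import json
import time
from enum import Enum

class ControlMode(Enum):
    AUTONOMY = "autonomy"
    SAFETY_DRIVER = "safety_driver"

def log_control_transition(log_file, new_mode: ControlMode, reason: str) -> None:
    """Record each transition with a timestamp and a reason, including a
    disengagement that happens a fraction of a second before impact."""
    record = {
        "t_unix": time.time(),
        "mode": new_mode.value,
        "reason": reason,  # e.g., "planner fault", "driver brake press"
    }
    log_file.write(json.dumps(record) + "\n")
    log_file.flush()  # persist immediately; a crash may follow within milliseconds
```

The point is architectural: control authority gets recorded at the moment of each transition, not reconstructed after the fact from fragmentary evidence.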

These comments are based on a preliminary reading of the document. I might change my thoughts on this document over time.

Monday, April 2, 2018

A Way Forward for Safe Self-Driving Car Road Testing

The self-driving car industry's reputation has suffered a setback, but the resulting debate about whether autonomy is safe for road use is focused on solving the wrong problem.

Dashcam image from the mishap, released by Tempe, AZ Police


The recent tragic fatality in Arizona in which a pedestrian was hit and killed by a self-driving car test vehicle has sent a shock wave through the industry. At least some companies have shut down their road testing while they see how things develop. Uber just had its Arizona road testing permission revoked, and apparently is not going to be testing in California any time soon. The longer term effects on public enthusiasm for the technology overall remain unclear.

Proponents of self-driving car testing point out that human drivers are far from perfect, and compare the one life lost to the many lives lost every day in conventional vehicle crashes. Opponents say that potentially avoidable fatalities due to immature technology are unacceptable. The debate about whether self-driving car technology can be trusted with human lives is important in the long term, but isn't actually what went wrong here.


What went wrong is that a test vehicle killed Elaine Herzberg -- not a fully autonomous vehicle. The vehicle involved wasn't supposed to have perfect autonomy technology at all. Rather, it had unproven systems still under development, and a safety driver who was supposed to ensure that failures in the technology did no harm.  Whether the autonomy failed to detect a mid-block pedestrian crossing at night isn't the real issue. The real issue is why the safety driver didn't avoid the mishap despite any potential technology flaw.


The expedient approach of blaming the safety driver (or the victim) won't make test vehicles safer. We've known for decades that putting a single human in charge of supervising a self-driving car with no way to ensure attentiveness is asking for trouble.  And we know that pedestrians don't always obey traffic rules. So to really be sure these vehicles are safe on public roads, we need to dig deeper.


Fortunately, there is a way forward that doesn't require waiting for the next death to point out any potential problems in some other company's prototype technology, and doesn't require developers to prove that their autonomy is perfect before testing. That way simply requires treating these vehicles as the test vehicles they are, and not as fully formed self-driving cars. Operators should not be required to prove their autonomy technology is perfect. But they should be required to explain why their on-road testing approach is adequately safe beyond simply saying that a safety driver is sitting in the vehicle.

A safety explanation for a self-driving test platform doesn't have to expose anything about proprietary autonomy technology, and doesn't have to be incredibly complicated. It might be as simple as a three-part safety strategy: proactively ensure the safety driver always pays attention, ensure that the safety driver has time to react if a problem occurs, and ensure that when the safety driver does react, the vehicle will obey the safety driver's commands. Note that whether the autonomy fails doesn't enter into it, other than ensuring that the autonomy gets out of the way when the safety driver needs to take over.
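Purely as a hypothetical illustration, that three-part strategy boils down to a conjunction of runtime conditions. Every input below is a placeholder for a monitoring subsystem (a gaze tracker, a hazard horizon estimator, a takeover self-test) that a developer would have to build and validate; none of this is prescribed by any standard.

```python
# Hypothetical sketch: the three-part test-platform safety argument as a
# runtime check. Each input stands in for a real monitoring subsystem.
def test_platform_safe(driver_attentive: bool,
                       time_to_hazard_s: float,
                       reaction_budget_s: float,
                       takeover_path_verified: bool) -> bool:
    """True only while all three legs of the safety case hold:
    1) the safety driver is demonstrably paying attention,
    2) the current situation leaves enough time to react, and
    3) the takeover/safing path is known to be working."""
    return (driver_attentive
            and time_to_hazard_s > reaction_budget_s
            and takeover_path_verified)
```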

There is no doubt that making a credible safety explanation for a self-driving car test platform requires some effort. The fact that safety drivers can lose focus after only a few minutes must be addressed somehow. That could be as simple as using two safety drivers, as many companies do, if it can be shown that paired drivers actually achieve continuous attentiveness for a full test session. Possibly some additional monitoring technology is needed to track driver eye gaze, driver fatigue, or the like. It must also be clear that the safety driver has a way to know if the autonomy is making a mistake, such as failing to detect unexpected pedestrians. For example, a heads-up display that outlines pedestrians as seen through the windshield could make it clear to the eyes-on-the-road safety driver when one has been missed. And the safety explanation has to deal with realities such as pedestrians who are not on crosswalks. Doing all this might be a challenge, but the test system doesn't have to be perfect. Arguably, the test vehicle with its safety driver(s) just has to be as good as a single human driver in an ordinary vehicle -- which the self-driving car proponents remind us is far from perfect.

I live in Pittsburgh where I have seen plenty of self-driving car test vehicles, including encounters as a pedestrian. Until now I've just had to take it on faith that the test vehicles were safe.  Self-driving cars have tremendous long-term promise. But for now, I'd rather have some assurance, preferably from independent safety reviewers, that these companies are actually getting their test platform safety right.

Philip Koopman, Carnegie Mellon University.


Author info: Prof. Koopman has been helping government, commercial, and academic self-driving developers improve safety for 20 years.
Contact:  koopman@cmu.edu

This essay was originally published in EE Times:
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333143