Tuesday, September 18, 2018

Automotive Safety Practices vs. Accepted Principles (SAFECOMP paper)

I'm presenting this paper at SAFECOMP today.

2018 SAFECOMP Paper Preprint

Abstract. This paper documents the state of automotive computer-based system safety practices based on experiences with unintended acceleration litigation spanning multiple vehicle makers. There is a wide gulf between some observed automotive practices and established principles for safety critical system engineering. While some companies strive to do better, at least some car makers in the 2002-2010 era took a test-centric approach to safety that discounted nonreproducible and “unrealistic” faults, instead blaming driver error for mishaps. Regulators still follow policies from the pre-software safety assurance era. Eight general areas of contrast between accepted safety principles and observed automotive safety practices are identified. While the advent of ISO 26262 promises some progress, deployment of highly autonomous vehicles in a nonregulatory environment threatens to undermine safety engineering rigor.

See the full paper here:
https://users.ece.cmu.edu/~koopman/pubs/koopman18_safecomp.pdf

Note that there is some pretty interesting stuff to be seen by following the links in the paper reference section.
Also see the expanded list of (potentially) deadly automotive defects.

Here are the accompanying slides:  https://users.ece.cmu.edu/~koopman/pubs/koopman18_safecomp_slides.pdf

Wednesday, September 12, 2018

Victoria Australia Is Winning the Race to ADS Testing Safety Regulations

Victoria, Australia has just issued new guidelines regarding Automated Driving System (ADS) testing.  These should be required reading for anyone doing on-road testing elsewhere in the world. There is just too much good stuff here to miss.  And, the guidelines are accompanied by actual laws that are designed to make autonomy testing safe.

A look through the regulations and guidelines shows that there is a lot to like. The most intriguing points I noticed were:
  • It provides essentially unlimited technical flexibility to the companies building the ADS vehicles while still providing a way to ensure safety. The approach is a simple two-parter:
    1. The testing permit holders have to explain why they will be safe via a safety management plan.
    2. If the vehicle testing doesn't follow the safety management plan or acts unsafely on the roads, the testing permit can be revoked.
  • The permit holder rather than the vehicle supervisor (a.k.a. "safety driver" in the US) is liable when operating in autonomous mode.  In other words, if the safety driver fails to avoid a mishap, liability rests with the company running the tests, not the safety driver. That sounds like an excellent way to avoid a hypothetical strategy of companies using safety drivers as scapegoats (or expendable liability shields) during testing.
  • The permitting process requires a description of ODD/OEDR factors including not just geofencing, but also weather, lighting, infrastructure requirements, and types of other road users that could be encountered.
  • The regulators have broad, sweeping powers to inspect, assess, require tests, and in general do the right thing to ensure that on-road testing is safe. For example, a permit can be denied or revoked if the safety plan is inadequate or not being followed.
There are many other interesting and on-target discussions in the guidelines.  They include the need to reduce risk as low as reasonably practicable (ALARP); accounting for the Australian road safety approach of safe speeds, safe roads, safe vehicles, and safe people during testing; transition issues between the ADS and the supervisor; the need to drive in a predictable way to interact safely with human drivers; and a multi-page list of issues to be considered by the safety plan. There is also a list of other laws that come into play.

Here are some pointers for those who want to look further.
There are some legal back stories at work here as well. For example, it seems that under previous law a passenger in an ADS could have been found responsible for errors made by the ADS, and this has been rectified with the new laws.

The regulations were created according to the following criteria from a 2009 Transportation bill:
  • Transportation system objectives:
    • Social and economic inclusion
    • Economic prosperity
    • Environmental sustainability
    • Integration of transport and land use
    • Efficiency, coordination and reliability
    • Safety and health and well being
  •  Decision making principles:
    • Principle of integrated decision making
    • Principle of triple bottom line assessment
    • Principle of equity
    • Principle of the transport system user perspective
    • Precautionary principle
    • Principle of stakeholder engagement and community participation
    • Principle of transparency. 
(The principle of transparency is my personal favorite.)

Here is a list of key features of the Road Safety (Automated Vehicles) Regulations 2018:

  1. The purpose of the ADS permit scheme (see regulation 5):
    • For trials of automated driving systems in automated mode on public roads
    • To enable a road authority to monitor and manage the use and impacts of the automated driving system on a highway
    • To enable VicRoads to perform its functions under the Act and the Transport Integration Act
  2. The permit scheme requires the applicant to prepare and maintain a safety management plan that (see regulation 9 (2)):
    • Identifies the safety risks of the ADS trials
    • Identifies the risks to the reliability, security and operation of the automated driving system to be used in the ADS trial
    • Specifies what the applicant will do to eliminate or reduce those risks so far as is reasonably practicable; and
    • Addresses the safety criteria set out in the ADS guidelines
  3. The regulations will require the ADS permit holder to report a serious incident within 24 hours (see regulations 13 and 19). A serious incident means any:
    • accident
    • speeding, traffic light, give way and level crossing offence
    • theft or carjacking
    • tampering with, unauthorised access to, modification of, or impairment of an automated driving system
    • failure of an automated driving system of an automated vehicle that would impair the reliability, security or operation of that automated driving system.
I hope that US states (and the US DOT) have a look at these materials.  Right now I'd say VicRoads is ahead of the US in the race to comprehensive but reasonable autonomous vehicle safety regulations.

(I would not at all be surprised if there are issues with these regulations that emerge over time. My primary point is that it looks to me like responsible regulation can be done in a way that does not pick technology winners and does not unnecessarily hinder innovation. This looks to be excellent source material for other regions to apply in a way suitable to their circumstances.)


Wednesday, August 22, 2018

AAMVA Slides on Self-Driving Car Road Testing Safety

These are the slides I presented at the AAMVA International Conference, August 22, 2018 in Philadelphia PA.

It's an update of my PennDOT AV Summit presentation from earlier this year.  A key takeaway is the lesson we should be learning from the tragic Uber fatality in Tempe, AZ earlier this year:
- Do NOT blame the victim
- Do NOT blame the technology
- Do NOT blame the driver
INSTEAD -- figure out how to make sure the safety driver is actually engaged even during long, monotonous road testing campaigns.   AND actually measure driver engagement so problems can be fixed before there is another avoidable testing fatality.

Even better is to use simulation to minimize the need for road testing, but given that testers are out operating on public roads, there needs to be a credible safety argument that they will be no more dangerous than conventional vehicles.
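To make "actually measure driver engagement" a little more concrete, here is a minimal sketch of one well-known drowsiness/attention metric, PERCLOS (percentage of eyelid closure over a rolling time window). The sampling rate, window length, and alert threshold below are illustrative assumptions, not recommendations for any particular system, and a real deployment would combine multiple signals.

```python
from collections import deque

class PerclosMonitor:
    """Minimal PERCLOS-style engagement monitor (illustrative sketch only).

    PERCLOS is the fraction of time the driver's eyes are closed (or nearly
    closed) over a rolling window; high values are a common proxy for
    drowsiness or disengagement.
    """

    def __init__(self, window_samples=1800, alert_threshold=0.15):
        # 1800 samples = 60 seconds at an assumed 30 Hz eye tracker rate.
        self.window = deque(maxlen=window_samples)
        self.alert_threshold = alert_threshold  # illustrative threshold

    def update(self, eye_closed: bool) -> bool:
        """Add one per-frame eye-state sample; return True if an alert should fire."""
        self.window.append(1 if eye_closed else 0)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        perclos = sum(self.window) / len(self.window)
        return perclos >= self.alert_threshold

# Usage sketch: feed per-frame eye-state estimates from a camera pipeline.
monitor = PerclosMonitor()
for frame_eye_closed in [False] * 1500 + [True] * 300:
    if monitor.update(frame_eye_closed):
        print("Driver engagement alert: consider pausing the test run")
        break
```

The point is not this particular metric; it is that engagement can be quantified, logged, and trended so that corrective action happens before a mishap rather than after.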

Tuesday, August 14, 2018

ADAS Code of Practice

One of the speakers at AVS last month mentioned that there is a Code of Practice for ADAS design (basically, Level 1 and Level 2 autonomy), and that there is a proposal to update it over the next few years to cover higher autonomy levels.

A written set of uniform practices is generally something worth looking into, so I took a look here:
https://www.acea.be/uploads/publications/20090831_Code_of_Practice_ADAS.pdf


The main report sets forth a development process with a significant emphasis on controllability. That makes sense, because for ADAS the safety argument typically ends up being that the driver is responsible for safety, and that requires the driver to be able to assert ultimate control over a potentially malfunctioning system.

The part that I actually found more interesting in many respects was the set of Annexes, which include quite a number of checklists for controllability evaluation, safety analysis, and assessment methods as well as Human-Machine Interface concept selection.

I'd expect that this is a useful starting point for those working on higher levels of autonomy, and most critically anyone trying to take on the very difficult human/machine issues involved with level 2 and level 3 systems.  (Whether it is sufficient on its own is not something I can say at this point, but starting with something like this is usually better than a cold start.)

If you have any thoughts about this document please let me know via a comment.

Monday, August 6, 2018

The Case for Lower Speed Autonomous Vehicle On-Road Testing

Every once in a while I hear about a self-driving car test or deployment program that plans to operate at lower speeds (for example, under 25 mph) to lower risk. Intuitively that sounds good, but I thought it would be interesting to dig deeper and see what turns up.

There have been a few research projects over the years looking into the probability of a fatality when a conventionally driven car impacts a pedestrian. As you might expect, faster impact speeds increase fatalities. But it's not linear -- it's an S-shaped curve. And that matters a lot:

(Figure: probability of pedestrian fatality vs. vehicle impact speed, showing an S-shaped curve. Source: WHO http://bit.ly/2uzRfSI )

Looking at this data (and other similar data), impacts at less than 20 miles an hour have a flat curve near zero, and are comparatively survivable. Above 30 mph or so is a significantly bigger problem on a per-incident basis.  Hmm, maybe the city planners who set 25 mph speed limits have a valid point!  (And surely they must have known this already.) In conventional vehicles, the flat part of the curve at and below 20 mph has led to campaigns to nudge urban speed limits lower, with slogans such as "20 is plenty."
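For readers who want to play with the shape of that curve, an S-shaped (logistic) function is a common way to model this kind of data. The parameters below are placeholder values chosen only to reproduce the qualitative shape -- nearly flat at low speeds, then rising steeply -- and are not a fit to the WHO chart or any specific study.

```python
import math

def fatality_risk(impact_speed_mph, v50=40.0, steepness=0.2):
    """Illustrative logistic (S-shaped) model of pedestrian fatality risk.

    v50 is the impact speed with 50% modeled risk; steepness controls how
    sharply the curve rises. Both are placeholder values, NOT fitted to the
    WHO data referenced above.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (impact_speed_mph - v50)))

for v in (10, 20, 30, 40, 50, 60):
    print(f"{v:2d} mph -> modeled risk {fatality_risk(v):.0%}")
```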

For on-road autonomous vehicle testing there's a message here. Low speed testing and deployment carries dramatically less risk of a pedestrian fatality, and that risk climbs steeply as speed increases.

For systems with a more complex "above 25 mph" strategy there still ought to be plenty that is either reused from the slow system or able to be validated at low speeds.  Yes, slow is different from fast due to the physics of kinetic energy (see the quick calculation below).  But a strategy that validates as much as possible below 25 mph and then reuses significant amounts of that validation evidence as a foundation for higher speed validation could present less risk to the public.  For example, if you can't tell the difference between a person riding a bike and a person walking next to a bike at 25 mph, you're going to have worse problems at 45 mph.  (You might say "but that's not how we do it."  My point is that maybe the AV industry should be optimizing for validation, and this is one way to get it done.)
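To put a rough number on the kinetic energy point: kinetic energy (and braking distance for a given deceleration) scales with the square of speed, so a modest speed increase buys a disproportionate increase in crash energy. A quick back-of-the-envelope check:

```python
def kinetic_energy_ratio(v_high_mph, v_low_mph):
    """Kinetic energy scales with v**2, so the ratio is (v_high/v_low)**2."""
    return (v_high_mph / v_low_mph) ** 2

# A vehicle at 45 mph carries about 3.2x the kinetic energy it has at 25 mph,
# and braking distance (for a given deceleration) scales the same way.
print(kinetic_energy_ratio(45, 25))  # -> 3.24
```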

It's clear that many companies are on a "race" to autonomy. But sometimes slow and steady can win the race. Slow speed runs might be less flashy, but until the technology matures slower speeds could dramatically reduce the risk of pedestrian fatalities due to a test platform or deployed system malfunction. Maybe that's a good idea, and we ought to encourage companies who take that path now and in the future as the technology continues to mature.



The "above 25 mph" paragraph was added in response to social media comments 8/9/2018.  And despite that I still got comments saying that systems below 25 mph are completely different than higher speed systems.  So in case that point isn't clear enough, here is more on that topic:

I'm not assuming that slow and fast systems are designed the same. Nor am I advocating for limiting AV designs only to slow speeds (unless that fits the ODD).

I'm saying when you build a high-speed capable AV, it's a good idea to initially test at below 25 mph to reduce the risk to the public for when something goes wrong.  And something WILL go wrong.  There is a reason there are safety drivers.

If a system is designed to work properly at speeds of 0 mph to 55 mph (say), you'd think it would work properly at 25 mph.  And you could design it so that at 25 mph it's using most or all of the machinery that is being used at 55 mph (SW, HW, sensors, algorithms, etc.).  Yes, you can get away with something simpler at low speed.  But this is low speed testing, not deployment.  Why go tearing around town at high speed with a system that hasn't even been proven at lower speeds?  Prove it out slowly first, then bump up speed once you've built confidence.

If you design to validate as much as possible at lower speeds, you lower the risk exposure.  Sure, investors probably want to see max. speed operation as soon as possible.  But not at the cost of dead pedestrians because testing was done in a hurry.


Notes for those who like details:

There is certainly room for reasonable safety arguments at speeds above 20 mph. I'm just pointing out that testing time spent at/below 20 mph is inherently less risky if a pedestrian collision does occur. So minimizing the exposure to high speed operation is a way to improve overall safety in the event a pedestrian impact does occur.

The impact speed is potentially different from the vehicle's travel speed. If the vehicle has time to shed even 5 or 10 mph of speed at the last second before impact, that certainly helps, potentially a lot, even if the vehicle does not come to a complete stop before impact. But a slower vehicle is less dependent upon that last-second braking (human or automated) working properly in a crisis.
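As a rough sanity check on how much speed a last-second brake application can shed, here is a quick calculation assuming a steady 0.7 g deceleration (an assumed hard-braking value for dry pavement; real performance varies with tires, road surface, and the braking system):

```python
MPH_PER_MPS = 2.237  # 1 m/s is about 2.237 mph

def speed_shed_mph(braking_time_s, decel_g=0.7):
    """Speed shed by steady braking for braking_time_s seconds.

    decel_g = 0.7 is an assumed hard-braking deceleration on dry pavement;
    actual performance varies.
    """
    return decel_g * 9.81 * braking_time_s * MPH_PER_MPS

# Half a second of hard braking sheds roughly 7-8 mph; a full second roughly 15 mph.
print(round(speed_shed_mph(0.5), 1), round(speed_shed_mph(1.0), 1))
```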

The actual risk will depend upon circumstances. For example, since the 1991 data shown above was collected, it seems likely that emergency medical services have improved, reducing fatality rates. On the other hand, the increasing prevalence of SUVs might increase fatality rates depending upon impact geometries. And so on.  A study that compares multiple data sets is here:
https://nacto.org/docs/usdg/relationship_between_speed_risk_fatal_injury_pedestrians_and_car_occupants_richards.pdf
But, all that aside, all the data I've seen shows that traditional city speed limits (25 mph or less) help with reducing pedestrian fatalities.


Friday, July 27, 2018

Putting image manipulations in context: robustness testing for safe perception

UPDATE 8/17 -- added presentation slides!

I'm very pleased to share a publication from our NREC autonomy validation team that explains how computationally cheap image perturbations and degradations can expose catastrophic perception brittleness issues.  You don't need adversarial attacks to foil machine learning-based perception -- straightforward image degradations such as blur or haze can cause problems too.

Our paper "Putting image manipulations in context: robustness testing for safe perception" will be presented at IEEE SSRR August 6-8.  Here's a submission preprint:

https://users.ece.cmu.edu/~koopman/pubs/pezzementi18_perception_robustness_testing.pdf

Abstract—We introduce a method to evaluate the robustness of perception systems to the wide variety of conditions that a deployed system will encounter. Using person detection as a sample safety-critical application, we evaluate the robustness of several state-of-the-art perception systems to a variety of common image perturbations and degradations. We introduce two novel image perturbations that use “contextual information” (in the form of stereo image data) to perform more physically-realistic simulation of haze and defocus effects. For both standard and contextual mutations, we show cases where performance drops catastrophically in response to barely perceptible changes. We also show how robustness to contextual mutators can be predicted without the associated contextual information in some cases.
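For readers who want a feel for what such degradations look like in code, here is a minimal sketch of a depth-aware haze mutation using the standard atmospheric scattering model, plus a simple Gaussian blur. This is only an illustration of the general idea under assumed parameters (beta, airlight, kernel size); it is not the pipeline used in the paper, and `detector` is a placeholder for whatever person detector is under test.

```python
import cv2
import numpy as np

def add_haze(image, depth_m, beta=0.08, airlight=0.9):
    """Depth-aware haze via the standard atmospheric scattering model:
    hazy = clear * t + airlight * (1 - t), with t = exp(-beta * depth).
    beta and airlight are illustrative values, not the paper's settings."""
    img = image.astype(np.float32) / 255.0
    t = np.exp(-beta * depth_m)[..., np.newaxis]   # per-pixel transmission
    hazy = img * t + airlight * (1.0 - t)
    return (np.clip(hazy, 0.0, 1.0) * 255).astype(np.uint8)

def add_blur(image, kernel=9):
    """Simple defocus-style degradation using a Gaussian blur."""
    return cv2.GaussianBlur(image, (kernel, kernel), 0)

# Usage sketch (placeholders): score a detector before and after mutation and
# flag large confidence drops as potential brittleness.
# image = cv2.imread("frame.png"); depth_m = load_stereo_depth(...)  # assumed inputs
# baseline = detector(image)
# degraded = detector(add_haze(add_blur(image), depth_m))
# if baseline - degraded > 0.5:
#     print("catastrophic confidence drop under mild degradation")
```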

Fig. 6: Examples of images that show the largest change in detection performance for MS-CNN under moderate blur and haze. For all of them, the rate of FPs per image required to detect the person increases by three to five orders of magnitude. In each image, the green box shows the labeled location of the person. The blue and red boxes are the detections produced by the SUT before and after mutation, respectively, and the white-on-blue text is the strength of that detection (ranging from 0 to 1). Finally, the value in white-on-yellow text shows the average FP rate per image that a sensitivity threshold set at that value would yield, i.e., the FP rate required to still detect the person.

Alternate slide download link: https://users.ece.cmu.edu/~koopman/pubs/pezzementi18_perception_robustness_testing_slides.pdf

Citation:
Pezzementi, Z., Tabor, T., Yim, S., Chang, J., Drozd, B., Guttendorf, D., Wagner, M., & Koopman, P., "Putting image manipulations in context: robustness testing for safe perception," IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Aug. 2018.

Tuesday, July 24, 2018

Pennsylvania's Autonomous Vehicle Testing Guidelines

PennDOT has just issued new Automated Vehicle Testing Guidance:
       July 2018 PennDOT AV Testing Guidance (link to acrobat document)
(also, there is a press release.)


It's only been a short three months since the PA AV Summit in which PennDOT took up a challenge to improve AV testing policy. Today PennDOT released a significantly revised policy as promised. And it looks like they've been listening to safety advocates as well as AV companies.

At a high level, there is a lot to like about this policy. It makes it clear that a written safety plan is required, and suggests addressing, one way or another, the big three items I've proposed for AV testing safety:
  • Make sure that the driver is paying attention
  • Make sure that the driver is capable of safing the vehicle in time when something goes wrong
  • Make sure that the Big Red Button (disengagement mechanism) is actually safe

There are a number of items in the guidance that look like a good idea. Here is a partial list of ones that catch my eye as being on the right track (many other ideas in the document are also good):

Good Ideas:
  • Submission of a written safety plan
  • Must have a safety driver in the driver seat who is able to take immediate physical control as required
  • Two safety drivers above 25 mph to ensure that the safety drivers are able to tend to both the safety driving and the testing
  • Validation "under controlled conditions" before on-road testing
  • Disengagement technology complies with industry standards
  • Safety driver training is mandatory, and has a nice list of required topics
  • Data recording for post-mishap analysis
  • Mitigate cybersecurity risk
  • Quality controls to ensure that major items are "adhered to and measured to ensure safe operation"
There are also some ideas that might or might not work out well in practice. I'm not so sure how these will work out, and they seem in some cases to be compromises:

Not Sure About These:
  • Only one safety driver required below 25 mph. It's true that low speed pedestrian collisions are less lethal, and there can be more time to react, so the risk is somewhat lower. But time will tell if drivers are able to stay sufficiently alert to avoid mishaps even if they are lower speed.
  • It's not explicit about the issue of ensuring that there is enough time for a safety driver to intervene when something goes wrong. It's implicit in the parts about a driver being able to safe the vehicle. It's possible that this was considered a technical issue for developers rather than regulators, but in my mind it is a primary concern that can easily be overlooked in a safety plan. This topic should be more explicitly called out in the safety plan.
  • The data reporting beyond crashes is mostly just tracking drivers, vehicles, and how much testing they are doing.  I'd like to see more reporting regarding how well they are adhering to their own safety plan. It's one thing to say things look good via hand waving and "trust us, we're smart." It's another to report metrics such as how often drivers drop out during testing and what corrective actions are taken in response to such data. (The rate won't be a perfect zero; continual improvement should be the goal, as well as mishap rates no worse than conventional vehicles during testing.) I realize picking metrics can be a problem -- so just let each company decide for themselves what they want to report. The requirement should be to show evidence that safety is actually being achieved during testing. To be fair, there is a bullet in the document requiring quality controls. I'd like that bullet to have more explicit teeth to get the job done.
  • The nicely outlined PennDOT safety plan can be avoided by instead submitting something following the 2017 NHTSA AV Guidance. That guidance is a lot weaker than the 2016 NHTSA AV Guidance was. Waymo and GM have already created such public safety disclosures, and others are likely coming. However, it is difficult for a reader to know if AV vendors are just saying a lot of buzzwords or are actually doing the right things to be safe. Ultimately I'm not comfortable with "trust us, we're safe" with no supporting evidence. While some disclosure is better than no disclosure, the public deserves better than NHTSA's rather low bar in safety plan transparency, which was not intended to deal specifically with on-road testing. We'll have to see how this alternative option plays out, and what transparency the AV testers voluntarily provide. Maybe the new 2018 NHTSA AV Guidance due later this summer will raise the bar again.
Having said nice things for the most part, there are a few areas which really need improvement in a future revision. I realize they didn't have time to solve everything in three months, and it's good to see the progress they made. But I hope these areas are on the list for the next iteration:

Not A Fan:
  • Only one safety driver above 25 mph after undergoing "enhanced driver safety training." It's unclear what this training might really be, or if more training will really result in drivers that can do solo testing safely. I'd like to see something more substantive demonstrating that solo drivers will actually be safe in practice. Training only goes so far, and no amount of hiring only experienced drivers will eliminate the fact that humans have trouble staying engaged when supervising autonomy for long stretches of time. I'm concerned this will end up being a loophole that puts solo drivers in an untenable safety role.
  • No independent auditing. This is a big one, and worth discussing at length.
The biggest issue I see is no requirement for independent auditing of safety. I can understand why it might be difficult to get testers on board with such a requirement, especially a requirement for third party auditing. The AV business is shrouded in secrecy. Nobody wants PennDOT or anyone else poking around in their business. But every other safety-critical domain is based on an approach of transparent, independent safety assessment.

A key here is that independent auditing does NOT have to include public release of information.  The "secret sauce" doesn't even have to be revealed to auditors, so long as the system is safe regardless of what's in the fancy autonomy parts of the system. There are established models used in other industries to keep trade secrets secret while still providing independent oversight of safety. There's no reason AVs should be any different. After all, we're all being put at risk by AV testing when we share public roads with them, even as pedestrians. AV testing ought to have transparent, independent safety oversight.

Overall, I think this guidance is excellent progress from PennDOT that puts us ahead of most, if not all locations in the US regarding AV safety testing. I hope that AV testers take this and my points above to heart, and get ahead of the safety testing problem.
