Friday, December 29, 2023

My Automated Vehicle Safety Prediction for 2024

My 2024 AV industry prediction starts with a slide show sampling the many fails for automated vehicles in 2023 (primarily Cruise, Waymo, and Tesla). Yes, there was hopeful progress in many regards. But so very many fails.



At a higher level, the real event was a catastrophic failure of the industry's strategy of relentlessly shouting as loud as it can: "Hey, get off our case, we're busy saving lives here!" The industry's lobbyists and spin doctors can only get so much mileage out of that strategy, and it turns out that mileage is far less (by a factor of 10+) than the miles required to achieve statistical validity for a claim of reduced fatality rates.
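
To make the mileage point concrete, here is a minimal back-of-the-envelope sketch. The baseline fatality rate and the "rule of three" approximation are illustrative assumptions on my part, not figures from the slide show:

```python
# Back-of-the-envelope sketch (assumed numbers, not data from this post):
# US human-driver baseline of roughly 1.2 fatalities per 100 million miles,
# and the "rule of three": with zero observed events in N miles, the one-sided
# 95% upper confidence bound on the event rate is about 3/N.

baseline_rate = 1.2 / 100_000_000  # assumed fatalities per mile for human drivers

# Fatality-free miles needed to claim (95% confidence) "no worse than human drivers"
miles_no_worse = 3 / baseline_rate
print(f"{miles_no_worse:,.0f} fatality-free miles to support 'no worse than baseline'")
# -> about 250 million miles

# Fatality-free miles needed to claim "at least 20% better than human drivers"
miles_20pct_better = 3 / (0.8 * baseline_rate)
print(f"{miles_20pct_better:,.0f} fatality-free miles to support 'at least 20% better'")
# -> about 313 million miles; any fatality along the way raises the requirement,
# and robotaxi fleets have logged on the order of tens of millions of miles so far.
```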

My big prediction for 2024 is that the industry (if it is to succeed) will adopt a more enlightened strategy for both deployment criteria and messaging. Sure, on a technical basis it needs to be safer than comparable human driver outcomes.

But on a public-facing basis it needs to optimize for fewer embarrassments like the 30 photos-with-stories in this slide show. The whole industry needs to pivot to this priority. The Cruise debacle of the last few months proved (once again; remember Uber ATG?) that it only takes one company doing one ill-advised thing to hurt the entire industry.

I guess the previous plan was that they would be "done" faster than people could get upset about the growing pains. Fait accompli. That was predictably incorrect. Time for a new plan.

Saturday, December 23, 2023

2023: Year In Review

2023 was a busy year! Here is a list of my presentations, podcasts, and formal publications from 2023 in case you missed any and want to catch up.


Presentations:

Podcasts:
Publications:

Tuesday, December 19, 2023

Take Tesla safety claims with about a pound of salt

This morning, when giving a talk to a group of automotive safety engineers, I was once again asked what I thought of Tesla's claims that their vehicles are safer than all the others. Since I have not heard that discussed in a while, it bears repeating why that claim should be taken with many pounds of salt (i.e., it seems to be marketing puffery).

(1) Crash testing is not the only predictor of real-world outcomes. And I'd prefer my automated vehicle to not crash in the first place, thank you!  (Crash tests have been historically helpful to mature the industry, but have become outdated in the US: https://www.consumerreports.org/car-safety/federal-car-crash-testing-needs-major-overhaul-safety-advocates-say/ )

(2) All the data I've seen to date, when normalized (see Noah Goodall's paper: https://engrxiv.org/preprint/view/1973 ), suggests that any steering automation safety gains (e.g., Level 2 Autopilot) are approximately negated by driver complacency, with Autopilot on being slightly worse than Autopilot off: "analysis showed that controlling for driver age would increase reported crash rates by 11%" with Autopilot turned on, versus lower crash rates with Autopilot turned off in the same vehicle fleet.
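
To illustrate why that kind of normalization matters, here is a toy sketch of the re-weighting involved. The numbers and the road-type breakdown are entirely hypothetical on my part (Goodall's actual analysis adjusts Tesla's reported data for factors such as driver age and road type); the point is only to show how an unadjusted comparison can flatter Autopilot:

```python
# Toy illustration (hypothetical numbers, NOT Goodall's data) of why raw
# "Autopilot on vs. off" crash rates mislead without normalization: Autopilot
# miles skew toward easy freeway driving, so an unadjusted comparison flatters it.

# Hypothetical crash rates per million miles, split by road type, plus the
# fraction of each mode's mileage that is freeway driving.
modes = {
    "autopilot_on":  {"freeway": 0.5, "surface": 1.5, "freeway_share": 0.90},
    "autopilot_off": {"freeway": 0.7, "surface": 1.4, "freeway_share": 0.40},
}

def blended_rate(freeway, surface, freeway_share):
    """Overall crash rate for a given mix of freeway vs. surface-street miles."""
    return freeway * freeway_share + surface * (1.0 - freeway_share)

for name, m in modes.items():
    naive = blended_rate(m["freeway"], m["surface"], m["freeway_share"])
    # Direct standardization: re-weight both modes to the same 50/50 road mix.
    standardized = blended_rate(m["freeway"], m["surface"], 0.50)
    print(f"{name}: naive={naive:.2f}  standardized={standardized:.2f} crashes per million miles")

# With these made-up numbers, the naive comparison makes Autopilot-on look about
# twice as safe (0.60 vs. 1.12), while standardizing to a common road mix leaves
# the two modes nearly identical (1.00 vs. 1.05) -- the same direction of effect
# Goodall reports when controlling for driver age (roughly an 11% correction).
```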

(3) Any true safety improvement for Tesla is good to have, but any apparent improvement is much more likely due to comparison against an "average" vehicle (about 12 years old in the US), which is much less safe than any recent high-end vehicle regardless of manufacturer, and which is probably driven on roads that are on average less safe than those where Teslas are popular. (Also, see Noah Goodall's point that Tesla omits slow-speed crashes under 20 kph whereas the comparison data includes them. If you're not counting all the crashes, it should not be a huge surprise that your number is lower than average -- especially if AEB-type features are helping mitigate crash speeds to below 20 kph.) If there is a hero here, it is AEB, not Autopilot.
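
Here is the counting-rule issue from point (3) in miniature. The 30% low-speed fraction is a made-up number purely for illustration:

```python
# Toy illustration of the counting-rule bias (the 30% fraction is made up):
# if the comparison baseline counts ALL crashes but your own tally omits crashes
# under 20 kph, your reported rate looks better even when the true rate is identical.

true_rate = 5.0            # crashes per million miles, assumed identical for both fleets
low_speed_fraction = 0.30  # assumed share of crashes that occur below 20 kph

baseline_reported = true_rate                              # counts every crash
filtered_reported = true_rate * (1 - low_speed_fraction)   # drops sub-20 kph crashes

print(f"Baseline reported rate: {baseline_reported:.1f} crashes per million miles")
print(f"Filtered reported rate: {filtered_reported:.1f} crashes per million miles")
# The filtered number looks 30% "better" purely from what gets counted -- and AEB
# nudging crash speeds below the 20 kph cutoff would widen the gap further.
```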

(4) If you look at IIHS insurance data, Tesla does not rate in the top 10 in any category. So in practical outcomes they are nowhere near number one. When I did the comparison last year, I found that a new Tesla was about the same as my 10+-year-old Volvo based on insurance outcomes (a vehicle I have since sold to get one with newer safety features). That suggests their safety outcomes are years behind the market leaders in safety. However, it is important to realize that insurance outcomes are limited because they incorporate "blame" into the equation, so they provide only a partial picture. IIHS link: https://www.iihs.org/ratings/insurance-losses-by-make-and-model

(5) The NHTSA report claiming Autopilot was safer was thoroughly debunked: https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked

(6) Studies showing that ADAS features improve safety are valid -- but the definition of ADAS they use does not include sustained steering of the type Autopilot provides. Autopilot and FSD are not actually ADAS, so ADAS studies do not prove they have a safety benefit. Yep, AEB is great stuff. And Teslas have AEB, which likely provides a safety benefit (same as all the other new high-end cars). But Autopilot is not ADAS, and it is not AEB.
For example, https://www.iihs.org/topics/advanced-driver-assistance lists ADAS features but makes it clear that partial driving automation (e.g., Autopilot) is not a safety feature: "While partial driving automation may be convenient, we don’t know if it improves safety." That statement is followed by a lot of detail about the issues, with citations.

(7) A 2023 study by LendingTree showed that Tesla had the worst rate of crashes per 1,000 drivers of any brand. There are confounders to be sure, but there is no reason to believe this does not reflect a higher crash rate for Teslas than for other vehicles, even on a level playing field: https://www.lendingtree.com/insurance/brand-incidents-study/

(8) In December 2023, more than two million Teslas were recalled due to safety issues with Autopilot (NHTSA Recall 23V-838). The recall document specifically noted a need for improved driver monitoring and enforcement of operation only on intended roads (for practical purposes, only limited-access freeways with no cross-traffic). The fact that a Part 573 Safety Recall Report was issued means that, by definition, the vehicles had been operating with a safety defect for many years (the first date in the chronology is August 13, 2021, at which time there had been eleven incidents of concern). Initial follow-up investigations by the press indicate the problems were not substantively resolved. (Note that per NHTSA terminology, a "recall" is the administrative process of documenting a safety defect, not the actual fix; the over-the-air update is the "remedy". Delivering the remedy via OTA update does not make it any less of a "recall," even if people find the terminology less than obvious.)

(updated 1/1/2024)



Sunday, December 17, 2023

Social vs. Interpersonal Trust and AV Safety

Bruce Schneier has written a thought-provoking piece on the social-fabric vs. interpersonal-behavior aspects of trust. Just like "safety," the word "trust" means different things in different contexts, and those differences matter in a very pressing way right now to the larger topic of AI.


Exactly per Bruce's article, the car companies have guided the public discourse to be about interpersonal trust. They want us to trust their drivers as if they were super-human people driving cars, when the computer drivers are in fact not people, do not have a moral code, and do not fear jail consequences for reckless behavior. (And as news stories constantly remind us, they have a long way to go for the super-human driving skills part too.)

While not specifically about self-driving cars, his theme is how companies exploit our tendency to make category errors between interpersonal trust and social trust. Interpersonal trust is, for example, trusting that the other car will try as hard as it can to avoid hitting me because the other driver is behaving competently, or perhaps because that driver has some sort of personal connection to me as a member of my community. Social trust is, for example, trusting that the company that designed the car faces strict regulatory requirements and a duty of care for safety, both of which incentivize it to be completely sure about acceptable safety before it starts to scale up its fleet. Sadly, that social trust framework for computer drivers is weak to the point of being more apparition than reality. (For human drivers the social trust framework involves jail time and license points, neither of which currently apply to computer drivers.)

The Cruise debacle highlights, once again (see also Tesla and Uber ATG, not to mention conventional automotive scandals), that the real issue is the weak framework for creating social trust in the corporations that build the cars. That lack of a framework is a direct result of the corporations' lobbying, messaging, regulatory capture efforts, and other actions.

Interpersonal trust doesn't scale. Social trust is the tool our society uses to permit scaling up goods, services, and benefits. Despite the compelling localized incentives for corporations to game social trust for their own benefit, having the entire industry succeed spectacularly at doing so invites long-term harm to the industry itself, as well as to all those who do not actually get the promised benefits. We're seeing that process play out now for the vehicle automation industry.

There is no perfect solution here -- it is a balance. But at least right now, the trust situation is way off balance for vehicle automation technology. Historically it has taken a horrific, front-page mass-casualty event to restore balance to safety regulation. Even then, to really foster change it needs to involve someone "important" or an especially vulnerable and protection-worthy group.

Industry can still change if it wants to. We'll have to see how it plays out for this technology.

The piece you should read is here:  https://www.belfercenter.org/publication/ai-and-trust

Monday, December 4, 2023

Video: AV Safety Lessons To Be Learned from 2023 Experiences

Here is a retrospective video of robotaxi lessons learned in 2023:

  • What happened to robotaxis in 2023 in San Francisco.
  • The Cruise crash and related events.
  • Lessons the industry needs to learn to take a more expansive view of safety/acceptability:
    • Not just statistically better than a human driver
    • Avoid negligent driving behavior
    • Avoid risk transfer to vulnerable populations
    • Fine-grain regulatory risk management
    • Conform to industry safety standards
    • Address ethical & equity concerns
    • Build sustainable trust.
Preprint with more detail about these lessons here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4634179

Archive.org alternate video source: https://archive.org/details/l-141-2023-12-av-safety