Friday, December 29, 2023

My Automated Vehicle Safety Prediction for 2024

My 2024 AV industry prediction starts with a slide show sampling the many fails by automated vehicles in 2023 (primarily Cruise, Waymo, Tesla). Yes, there was some hopeful progress in many regards. But so very many fails.



At a higher level, the real event was a catastrophic failure of the industry's strategy of relentlessly shouting as loud as they can "Hey, get off our case, we're busy saving lives here!" The industry's lobbyists and spin doctors can only get so much mileage out of that strategy, and it turns out that mileage is far less (by a factor of 10+) than the miles required to establish statistical validity for a claim of reduced fatality rates.

My big prediction for 2024 is that the industry (if it is to succeed) will adopt a more enlightened strategy for both deployment criteria and messaging. Sure, on a technical basis it indeed needs to be safer than comparable human driver outcomes.

But on a public-facing basis it needs to optimize for fewer embarrassments like the 30 photos-with-stories in this slide show. The whole industry needs to pivot into this priority. The Cruise debacle of the last few months proved (once again; remember Uber ATG?) that it only takes one company doing one ill-advised thing to hurt the entire industry.

I guess the previous plan was they would be "done" faster than people could get upset about the growing pains. Fait accompli. That was predictably incorrect. Time for a new plan.

Saturday, December 23, 2023

2023: Year In Review

 2023 was a busy year!  Here is a list of my presentations, podcasts, and formal publications from 2023 in case you missed any and want to catch up.

2023 written in beach sand

Presentations:

Podcasts:
Publications:

Tuesday, December 19, 2023

Take Tesla safety claims with about a pound of salt

This morning when giving a talk to a group of automotive safety engineers I was once again asked what I thought of Tesla claims that they are safer than all the other vehicles. Since I have not heard that discussed in a while, it bears repeating why that claim should be taken with many pounds of salt (i.e., it seems to be marketing puffery).

(1) Crash testing is not the only predictor of real-world outcomes. And I'd prefer my automated vehicle to not crash in the first place, thank you!  (Crash tests have been historically helpful to mature the industry, but have become outdated in the US: https://www.consumerreports.org/car-safety/federal-car-crash-testing-needs-major-overhaul-safety-advocates-say/ )

(2) All data I've seen to date, when normalized (see Noah Goodall's paper: https://engrxiv.org/preprint/view/1973 ), suggests that any steering automation safety gains (e.g., Level 2 Autopilot) are approximately negated by driver complacency, with Autopilot on being slightly worse than Autopilot off. Goodall's "analysis showed that controlling for driver age would increase reported crash rates by 11%" with Autopilot turned on, compared to lower crash rates with Autopilot turned off in the same vehicle fleet.
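To make the normalization issue concrete, here is a minimal sketch (in Python) of direct age standardization of crash rates. Every number below is invented purely for illustration -- this is not Goodall's data and not Tesla's data -- but it shows how a crude comparison can flatter the automation when the "autopilot on" miles and "autopilot off" miles come from different driver populations.

# Illustrative sketch: why controlling for driver age matters when comparing
# crash rates. All numbers are made up; they are NOT Goodall's or Tesla's data.

def rate(crashes, million_miles):
    return crashes / million_miles  # crashes per million miles

# Hypothetical exposure, split by age group and by whether Autopilot was on.
# Autopilot miles skew toward older (lower-risk) drivers in this example.
data = {
    #             Autopilot ON          Autopilot OFF     (crashes, million miles)
    "under_30": {"on": (22, 10),   "off": (200, 100)},
    "30_plus":  {"on": (100, 100), "off": (18, 20)},
}

def crude_rate(mode):
    crashes = sum(d[mode][0] for d in data.values())
    miles = sum(d[mode][1] for d in data.values())
    return crashes / miles

def age_standardized_rate(mode, reference_mix):
    # Direct standardization: apply each age group's rate to a common
    # exposure mix so both modes are compared on equal footing.
    total = sum(reference_mix.values())
    return sum(rate(*data[age][mode]) * reference_mix[age] for age in data) / total

reference_mix = {"under_30": 50, "30_plus": 50}  # common (made-up) exposure mix

print("crude:        on =", round(crude_rate("on"), 2), " off =", round(crude_rate("off"), 2))
print("standardized: on =", round(age_standardized_rate("on", reference_mix), 2),
      " off =", round(age_standardized_rate("off", reference_mix), 2))
# The crude rates make Autopilot look better; the age-standardized rates do not.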

(3) Any true safety improvement for Tesla is good to have, but it is much more likely due to comparison against an "average" vehicle (12 years old in the US), which is much less safe than any recent high-end vehicle regardless of manufacturer, and which is probably driven on roads that are, on average, less safe than the roads where Teslas are popular. (Also, see Noah Goodall's point that Tesla omits slow-speed crashes under 20 kph whereas the comparison data includes them. If you are not counting all the crashes, it should not be a huge surprise that your number is lower than average -- especially if AEB-type features are helping mitigate crash speeds to below 20 kph.)  If there is a hero here it is AEB, not Autopilot.
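On the 20 kph point, a small back-of-envelope illustration (again, entirely made-up numbers) shows how simply excluding low-speed crashes deflates a reported rate relative to a baseline that counts everything.

# Made-up numbers: the effect of excluding sub-20 kph crashes from a crash rate.
fleet_miles_millions = 100.0
all_crashes = 500            # hypothetical total crashes of all severities
share_under_20_kph = 0.40    # hypothetical share below 20 kph (AEB mitigation
                             # pushes more impacts into this excluded bucket)

baseline_rate = all_crashes / fleet_miles_millions
reported_rate = all_crashes * (1 - share_under_20_kph) / fleet_miles_millions

print(f"baseline (all crashes counted): {baseline_rate:.1f} per million miles")
print(f"reported (>= 20 kph only):      {reported_rate:.1f} per million miles")
# The 40% gap comes entirely from what was counted, not from safer driving.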

(4) If you look at IIHS insurance data, Tesla does not rate in the top 10 in any category. So in practical outcomes they are not anywhere near number one. When I did the comparison last year I found that a new Tesla was about the same as my 10-year-old+ Volvo based on insurance outcomes (a vehicle I have since sold to get one with newer safety features). That suggests their safety outcomes are years behind the market leaders in safety. However, it is important to realize that insurance outcomes are limited because they incorporate "blame" into the equation, so they provide only a partial picture. IIHS Link: https://www.iihs.org/ratings/insurance-losses-by-make-and-model

(5) The NHTSA report claiming autopilot was safer was thoroughly debunked: https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked 

(6) Studies that show ADAS features improve safety are valid -- but the definition of ADAS they use does not include sustained-steering features of the type involved with Autopilot. Autopilot and FSD are not actually ADAS, so ADAS studies do not prove they have a safety benefit. Yep, AEB is great stuff. And Teslas have AEB, which likely provides a safety benefit (the same as all the other new high-end cars). But Autopilot is not ADAS, and is not AEB.
For example:  https://www.iihs.org/topics/advanced-driver-assistance  lists ADAS features but makes it clear that partial driving automation (e.g., Autopilot) is not a safety feature: "While partial driving automation may be convenient, we don’t know if it improves safety." Followed by a lot of detail about the issues, with citations.

(7) A 2023 study by LendingTree showed that Tesla had the worst rate of crashes per 1,000 drivers of any brand studied. There are confounders to be sure, but no reason to believe Teslas would not still show a higher crash rate than other vehicles on a level playing field: https://www.lendingtree.com/insurance/brand-incidents-study/

(8) In December 2023, more than two million Teslas were recalled due to safety issues with Autopilot (NHTSA Recall 23V-838). The recall document specifically noted a need for improved driver monitoring and enforcement of operation only on intended roads (for practical purposes, only limited-access freeways with no cross-traffic). The fact that a Part 573 Safety Recall Report was issued means by definition that the vehicles had been operating with a safety defect for years (the first date in the chronology is August 13, 2021, at which time there had been eleven incidents of concern). Initial follow-up investigations by the press indicate the problems were not substantively resolved. (Note that per NHTSA terminology, a "recall" is the administrative process of documenting a safety defect, not the actual fix. The over-the-air update is the "remedy". OTA updates do not in any way make such a remedy not a "recall", even if people find the terminology less than obvious.)

(9) In November 2024, iSeeCars released a study based on NHTSA FARS data showing that Tesla was the top brand in terms of fatalities per mile for 2018-2022 model year cars. This was largely because a significant fraction of their fleet is the Model Y, which had the sixth-highest fatality rate per mile of all models studied. (The Model S also appeared among the worst performers.) Top marks in crash safety still do not prevent high fatality rates if people are driving recklessly, distracted, or subject to automation complacency.

(updated 11/16/2024)



Sunday, December 17, 2023

Social vs. Interpersonal Trust and AV Safety

Bruce Schneier has written a thought-provoking piece covering the social fabric vs. human behaviors aspects of trust. Just like "safety," the word "trust" means different things in different contexts, and those differences matter in a very pressing way right now to the larger topic of AI.

Robot and person holding hands

Exactly per Bruce's article, the car companies have guided the public discourse to be about interpersonal trust. They want us to trust their drivers as if they were super-human people driving cars, when the computer drivers are in fact not people, do not have a moral code, and do not fear jail consequences for reckless behavior. (And as news stories constantly remind us, they have a long way to go for the super-human driving skills part too.)

While not specifically about self-driving cars, his theme is about how companies exploit our tendency to make category errors between interpersonal trust and social trust. Interpersonal trust is, for example, the other car will try as hard as it can to avoid hitting me because the other driver is behaving competently or perhaps because that driver has some sort of personal connection to me as a member of my community. Social trust is, for example, the company who designed that car has strict regulatory requirements and a duty of care for safety, both of which incentivize them to be completely sure about acceptable safety before they start to scale up their fleet. Sadly, that social trust framework for computer drivers is weak to the point of being more apparition than reality. (For human drivers the social trust framework involves jail time and license points, neither of which currently apply to computer drivers.)

The Cruise debacle highlights, once again (see also Tesla and Uber ATG, not to mention conventional automotive scandals), that the real issue is the weak framework for creating social trust in the corporations that build the cars. That lack of a framework is a direct result of the corporations' lobbying, messaging, regulatory capture efforts, and other actions.

Interpersonal trust doesn't scale. Social trust is the tool our society uses to permit scaling goods, services, and benefits. Despite the compelling localized incentives for corporations to game social trust for their own benefit, having the entire industry succeed spectacularly at doing so invites long-term harm to the industry itself, as well as all those who do not actually get the promised benefits. We're seeing that process play out now for the vehicle automation industry.

There is no perfect solution here -- it is a balance. But at least right now, the trust situation is way off balance for vehicle automation technology. Historically it has taken a horrific front-page news mass casualty event to restore balance for safety regulations. Even then, to really foster change it needs to involve someone "important" or an especially vulnerable and protection-worthy group.

Industry can still change if it wants to. We'll have to see how it plays out for this technology.

The piece you should read is here:  https://www.belfercenter.org/publication/ai-and-trust

Monday, December 4, 2023

Video: AV Safety Lessons To Be Learned from 2023 experiences

Here is a retrospective video of robotaxi lessons learned in 2023

  • What happened to robotaxis in 2023 in San Francisco.
  • The Cruise crash and related events.
  • Lessons the industry needs to learn to take a more expansive view of safety/acceptability:
    • Not just statistically better than a human driver
    • Avoid negligent driving behavior
    • Avoid risk transfer to vulnerable populations
    • Fine-grain regulatory risk management
    • Conform to industry safety standards
    • Address ethical & equity concerns
    • Build sustainable trust.
Preprint with more detail about these lessons here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4634179

Archive.org alternate video source: https://archive.org/details/l-141-2023-12-av-safety



Saturday, October 28, 2023

A Snapshot of Cruise Crash Reporting Transparency: July & August 2023

A comparison of California Cruise robotaxi crash reports between the California DMV database and the NHTSA SGO database reveals significant discrepancies in reporting. 31 crashes reported to NHTSA do not appear in the California DMV database, including seven unreported injury crashes. Of special note, the Cruise crash with a fire truck that caused serious injury to an occupant of the Cruise robotaxi does not appear as a California DMV crash report. To be sure, Cruise might not be legally required to file these reports, but the situation reveals an apparent lack of transparency.

Comparison Results:

39 crashes were identified across both databases for the date-of-crash months of July 2023 through August 2023. The comparison was performed on October 28, 2023, so there was adequate time for all such crashes to have been reported.
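For those who want to reproduce this kind of cross-check, the comparison boils down to set arithmetic on the two report lists. The sketch below is a simplified Python illustration: the file names and the "match_key" column are hypothetical, and in practice the matching likely requires manual work because the NHTSA SGO and CA DMV reports do not share an obvious common identifier (matching was based on crash date, location, and narrative details).

# Simplified sketch of the database cross-check. File names and the match_key
# column are hypothetical; the real NHTSA SGO and CA DMV exports use different
# formats and need manual matching on date/location/narrative.
import csv

def load_keys(path, key_column="match_key"):
    with open(path, newline="") as f:
        return {row[key_column].strip() for row in csv.DictReader(f) if row[key_column].strip()}

nhtsa = load_keys("nhtsa_sgo_cruise_jul_aug_2023.csv")
ca_dmv = load_keys("ca_dmv_cruise_jul_aug_2023.csv")

print("NHTSA SGO reports:     ", len(nhtsa))
print("CA DMV reports:        ", len(ca_dmv))
print("found in both:         ", len(nhtsa & ca_dmv))
print("missing from CA DMV:   ", len(nhtsa - ca_dmv))   # federally reported only
print("missing from NHTSA SGO:", len(ca_dmv - nhtsa))   # state reported only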

Each database was missing one or more crashes found in the other database:

  • 39 crashes in the NHTSA database, including 8 also found in the CA DMV database.
  • 31 crashes reported to NHTSA were not in the California DMV database
  • In particular, the California DMV database was missing SEVEN (7) crash reports indicating an injury had occurred or might have occurred:
  1. NHTSA 30412-5968: Other car ran a red light striking Cruise; passenger of other vehicle treated on scene for minor injury.
  2. NHTSA 30412-5982: Other car ran into Cruise; passenger of other vehicle transported by EMS for further evaluation. Possible injury ("unknown" injury status).
  3. NHTSA 30412-6144: Cruise crash with fire truck; serious injury reported to passenger
  4. NHTSA 30412-6145: Cruise reversing contacted cyclist; minor injury reported to cyclist
  5. NHTSA 30412-6167: Cruise rear-ended after braking; minor injury reported to other vehicle driver
  6. NHTSA 30412-6175: Cruise hit pedestrian crossing in front of it (said to be crossing against light); moderate injury to pedestrian
  7. NHTSA 30412-6270: Cruise hit from behind after stopping to yield to pedestrian in crosswalk; minor injury to passengers inside AV
Two crashes involved non-motorists:
  • 30412-6145 with a cyclist
  • 30412-6175 with a pedestrian

        Why the Disparity?

        The thing that makes this complicated is that CA DMV does not require reporting crashes for "deployment" operation -- just for "testing" operation. Apparently when the regulations were written they did not anticipate that companies would "deploy" immature technology, but that is exactly what has happened.

        It is difficult from available information to check how Cruise is determining which crashes must be reported to the California DMV (testing) and which do not have to be reported (deployment). In practice it might boil down to a management decision about which ones they want to report, although there might be some less arbitrary internal decision criterion in use.

        CA DMV should require all companies to provide them with unredacted copies of all NHTSA SGO reports to provide improved transparency. For the foreseeable future, making a distinction between "testing" and "deployment" with no driver in the vehicle serves no useful purpose, and impairs transparency. If there is no driver it is a deployment, and should be held to the standards of a production vehicle, including reporting both crashes and driving behavior that puts other road users at undue risk. This is true for all companies, not just Cruise.

        Other notes:

        • CA DMV reports have the street names, yet Cruise redacts this same information from reports filed with NHTSA claiming it is "confidential business information." It is difficult to understand how information publicly reported by California can be classified as "confidential."
        • The NHTSA database does not have the date of the crash, although the California database has that information.
        • Crashes considered were for reported incident dates of July & August 2023, considering only uncrewed (no safety driver) operation.
        • It is our understanding that Cruise is not required to report all crashes that occur during deployment to California DMV. So it is possible that these reporting inconsistencies are still in accordance with applicable regulations.
        • All crashes on this spreadsheet in the NHTSA database list the "driver/operator type" as "Remote (Commercial / Test)" so it is not possible to distinguish whether the vehicle was considered in commercial service at the time of the crash. 
        • At the time of this posting the tragic Oct. 2nd severe injury crash that involved a Cruise robotaxi dragging a pedestrian who had been trapped under the vehicle has also not been reported, while another crash on Oct 6th has. There is nothing on the Oct 6th CA DMV form to indicate that the reported crash was specific to a testing permit vs. deployment permit.

        Review status: 

        This data has not been peer reviewed. Corrections/additions/clarifications are welcome to improve accuracy. The data analysis results are included below.

        Google Spreadsheet link:  https://docs.google.com/spreadsheets/d/1o9WWzMpiuum-QHZk9goY68gnBZuRC1InxUSMa7-h4DU/edit?usp=sharing

        Data sources: 






        Updated 10/30/2023 to incorporate three more crash reports found in a wider search of the SGO database. All CA DMV crash reports have now been identified in the SGO database.






        Friday, October 27, 2023

        The Cruise Safety Stand-Down -- What Happens Next?

        Cruise has announced a fleet-wide safety stand-down. This involves suspending driverless operations in all cities, reverting to operation only with in-vehicle safety drivers.

        I'm glad to see this step taken. But it is crucial to realize that this is the first step in what is likely to prove a long journey. The question is, what should happen next?


        Loss of public trust is an issue, as they say. And perhaps there was an imminent ban in another state forcing their hand to be proactive. But the core issues almost certainly run deeper than mismanaging disclosure of the details of a tragic mishap and doing damage control to repair regulator trust.

        The real issues will be found to have their roots in the culture of the company. Earnest, smart employees with the best of intentions can be ineffective at achieving acceptable safety if the corporate culture undermines their official company slogan of "safety first, always."

        This is the time to ask the hard questions. The answers might be even harder, but they need to be understood for Cruise to survive long term. It is questionable whether they could survive a ban in another state. But escaping that via a stand-down only to implement a quick fix won't be enough. If we see business as usual restored in the next few weeks, that is almost certainly a shallow fix. It will simply be a matter of time before a future adverse situation happens from which there will be no recovery. 

        This is the moment for Cruise to decide to lean into safety.

        The details are too much for a post like this, but the topics alone indicate the scope of what has to be considered:
        • Safety engineering -- Have they effectively identified and mitigated risks?
        • Operational safety -- Safety procedures, inspections, maintenance, management of Operational Design Domain limits responsive to known issues, field data feedback, etc.  This includes ensuring their Safety Management System (SMS) is effective.
        • System engineering -- Do the different pieces work together effectively? This includes all the way from software in 3rd party components to vehicle integration to ability of remote operators to effectively manage gaps in capabilities ... and more
        • Public messaging and regulatory interface -- Building genuine trust, starting with more transparency. Stop the blame game; accept accountability. Own it.
        • Investor expectations -- Determine a scaling plan that is sustainable, and figure out how to fund it in the likely case it is longer than what was previously promised
        • Definition of acceptable safety -- More concrete than seeing how it turns out based on crash data, with continual measurement of predictive metrics
        • Safety culture -- Which underlies all of the above, and needs to start at the top.
        And I'm sure there are more; this is just a start.

        Near-term, the point of a safety stand-down is to stabilize a situation during uncertainty. The even more important part comes next: the plan to move forward. It will take weeks to take stock and create a plan, with the first days simply used to organize how that is going to happen. And months to execute that plan. Fortunately for Cruise there is an existing playbook that can be adapted from Uber ATG's experience with their testing fatality in 2018. Cruise should already have someone digging into that for initial ideas.

        An NTSB-style investigation into this mishap could be productive. I think such an investigation would be likely to bring to light new issues that will be a challenge to the whole industry involving expectations for defensive driving behaviors and post-crash safety. If NTSB is unable to take that on, Cruise should find an independent organization who can do something close. But such an investigation is not the fix, and cultural improvements at Cruise should not wait for one to conclude. However, an independent investigation can be the focal point for deeper understanding of the problems that need to be addressed.

        ------------------------------------------------

        Philip Koopman is a professor at Carnegie Mellon University in Pittsburgh Pennsylvania, USA, who has been working on self-driving car safety for more than 25 years.   https://users.ece.cmu.edu/~koopman/

        Wednesday, October 25, 2023

        My Talk at a SF Transportation Authority Board meeting

        On October 24th I had the privilege of testifying at a San Francisco CTA board meeting regarding autonomous vehicle safety. The session started with SFCTA and SFMTA describing the challenges they have had with robotaxis, followed by my talk and then a series of Q&As about additional relevant topics. The robotaxi companies were invited to participate but declined. The news of the CA DMV suspension of Cruise permits broke as the hearing was starting, adding some additional context to the hearing.

        We have all heard the hype and marketing from the robotaxi companies. This event gave air time to the challenges faced by cities that must be resolved for this technology to succeed at scale.

        • Information about the event:  (link)
        • Full hearing video:  (link)
          • Start at 33:48 (click on item 11 in the index to fast forward to that point) for the full segment including SFCTA and SFMTA.
          • My talk starts at 1:10:25
        • My talk only, live recording:  Youtube | Archive.org
        • Slides for my talk (link to acrobat)
        As noted above, the robotaxi companies were invited but declined to participate in this event. You can see what they have to say from the transcript of a CPUC status conference held on August 7, 2023: https://docs.cpuc.ca.gov/PublishedDocs/Efile/G000/M517/K407/517407815.PDF

        Direct link to YouTube talk:





        Saturday, October 21, 2023

        Safety Analysis of Two Cruise Robotaxi Pedestrian Injuries

        Cruise has now had two pedestrian injuries in San Francisco, with the more severe one being complicated because it involved a pedestrian first hit by another vehicle.  NHTSA has launched an investigation based on those injuries and at least two other public video reports of close encounters. This makes available the relevant crash reports, so we have more direct information about what happened. The question asked in this piece is what can be done to avoid similar crashes in the future.


        On a numbers basis, two pedestrian injuries in a span of fewer than six weeks for a fleet of a couple hundred vehicles in San Francisco is a concern, so this is worth some analysis based on available information.

        • First injury: Aug. 26, 2023.  A pedestrian stepped off the curb into a crosswalk right in front of a Cruise vehicle at the change of a traffic light. The Cruise swerved, then braked. Impact at 1.4 mph. Pedestrian transported by EMS.
        • Second injury: Oct 2, 2023. A pedestrian crossed on the opposite side of a cross-street in front of the Cruise vehicle and another vehicle next to it. Both vehicles proceeded through the intersection while the pedestrian was in a crosswalk across their paths. The other vehicle struck the pedestrian at an undisclosed speed; the pedestrian was then run over by the Cruise vehicle and trapped under it with severe injuries.
        In both cases the injuries were severe enough to require transport. For the second crash the pedestrian was almost completely underneath the rear of the vehicle.  (It is worth noting these descriptions are written 100% by Cruise. The reader should assume the most favorable-to-Cruise possible interpretation of events has been presented. If something obviously relevant is omitted, such as the impact speed for the second injury, one is justified in assuming it would be unfavorable to Cruise if disclosed.)

        Cruise, predictably, blames others for both crashes, although in both cases it is difficult to be sure that is really true without review of the video. However, we set blame aside and instead ask the question: what can be done to avoid the next pedestrian injury in similar circumstances?

        First Pedestrian Crash


        For the first crash, the question is whether a reasonable human driver would have had contextual clues that this pedestrian was about to enter the crosswalk even though the light had changed. For example, were they running to catch a bus pulling up to a stop across the street?  Were they "distracted walking?" Or were they at a complete stop on the curb and literally jumped out into the street? Opportunities for improvement include asking these questions:
        • Were there obvious contextual clues that the pedestrian would attempt a last second crossing? What are common cases, and are they covered by the Cruise AV design?
        • Why did the vehicle swerve before stopping instead of doing both at once?
        • Could/should the Cruise vehicle have followed a less aggressive acceleration profile given the likely risk of a pedestrian entry into the crosswalk in that type of circumstance?

        Second Pedestrian Crash


        For the second crash, things are more complicated. Let's break down the sequence, taking into account the initial setup sketched below (note that both vehicles are in the middle of an intersection, but the sketch tool I used did not make this easy to represent):

        1. There are two vehicles starting through an intersection, side by side, with two lanes in that direction of travel. From a top view the other, human driven, dark-colored vehicle is on the left (faster lane) and the lighter-colored Cruise is on the right (curb lane).
        2. A pedestrian is walking across the far side of the intersection in the crosswalk. At the same time, both vehicles accelerate into the intersection. The most likely situation is the Cruise vehicle was a bit behind the other vehicle (although this is an educated guess based on the description of the events).
        3. Cruise says the pedestrian entered the crosswalk after the light changed, crossed in front of the Cruise vehicle, then stopped in the other vehicle's lane. The other driver presumably thought the pedestrian would clear the travel lane in time, and did not slow down.
        4. The other vehicle hit the pedestrian. Cruise says the pedestrian was deflected back into the Cruise vehicle's lane.
        5. The Cruise vehicle "braked aggressively" in response to a surprise pedestrian appearing in its lane, but hit the pedestrian shortly after.
        6. The Cruise vehicle had sufficient forward speed that it ran over the pedestrian and came to a stop with the pedestrian trapped under the rear axle. Both of the pedestrian's feet protruded from under the vehicle by the left rear tire, with that tire on top of one leg. (Photo link below.)
        7. The pedestrian was severely injured by a combination of the two vehicle strikes. Information about the ultimate outcome for that pedestrian is not currently available, although we hope that recovery is quick and as complete as possible.

        California Rules of the Road have an interesting requirement for crosswalks:

        "(c) The driver of a vehicle approaching a pedestrian within any marked or unmarked crosswalk shall exercise all due care and shall reduce the speed of the vehicle or take any other action relating to the operation of the vehicle as necessary to safeguard the safety of the pedestrian."  (emphasis added)

        It is interesting to ask if the Cruise vehicle actually exhibited "all due care."  It likely did not reduce speed from its normal green light acceleration, or Cruise would have taken credit for having done so.  (If they want to provide more details I will gladly update this statement.)

        Of note is the Cruise position that their vehicle stopped as quickly as possible once the pedestrian was in their lane, in effect claiming the collision was unavoidable. But that position is not necessarily true in the larger context, especially if one learns from this crash for the next potential pedestrian crosswalk collision. The question is when the Cruise AV could have stopped. There are at least three possible decision points for stopping to avoid this collision with the pedestrian, and the Cruise vehicle appears not to have exercised the first two:

        • The light changes green, but there is a pedestrian still in the crosswalk in the Cruise vehicle's direction of travel, in front of the Cruise vehicle. Did it slow down? Or execute a normal acceleration because it predicted the pedestrian would be clear by the time it got there? A prudent human driver would have waited, or more likely crept forward slowly -- yielding to the pedestrian while signaling to cars behind that the green light had been noticed, so they would not honk.
        • The pedestrian clears the Cruise lane, but the Cruise vehicle clearly sees the pedestrian about to be hit by the adjacent vehicle. The Cruise vehicle could have (I would argue should have) stopped to avoid being close to an injury event. Expecting it to predict a pedestrian collision trajectory is asking a lot -- but it should have stopped precisely because it cannot predict what will happen after such a collision. Safety demands not going fast past a pedestrian who is about to be hit by another vehicle in an adjacent lane. But this is precisely what the Cruise vehicle did.
        • The pedestrian lands in the Cruise lane and the Cruise vehicle has not slowed down yet. By then it is too late, and it runs over the pedestrian.  This could likely have been avoided by a prudent driving strategy that addresses the previous two decision points.

        The Redacted Confidential Business Information

        (This section added October 25, 2023 based on new information.)

        California DMV issued an order suspending the driverless operating permits for Cruise robotaxis on October 24, 2023 as a response to the circumstances of this second crash.  Link to order here.

        This order brought to light that after the vehicle had stopped post-crash, it started movement again with the pedestrian still under the vehicle, dragging that victim about 20 feet at a speed up to 7 mph, which was said to contribute to severe injuries. This strongly suggests the vehicle did not account for a pedestrian being trapped underneath it when deciding to move. (It is possible a remote operator was unaware of the trapped pedestrian and remotely commanded a pull-to-side maneuver. We'll have to see what is revealed during any investigation.)

        Cruise also published a blog post with additional information that day. A straightforward update to the crash report would be to add the following at the end, as at least part of the "redacted confidential business information" (quoted from the Cruise blog post):
        "The AV detected a collision, bringing the vehicle to a stop; then attempted to pull over to avoid causing further road safety issues, pulling the individual forward approximately 20 feet."

        This certainly makes Cruise look bad, but that is not an acceptable reason for a redaction. It is difficult to understand how this can reasonably be characterized as "confidential business information" in a mandatory crash report.

        Calling Emergency Services

        Also crucial for practical safety, but barely talked about, is notification of emergency services ("call 911"). News reports indicate that a passer-by called 911, not Cruise. In fact, in neither collision report do they take credit for notifying emergency services. This is a glaring omission that needs to be addressed.

        Consider: they had a vehicle tire on top of a pedestrian's leg and did not call 911. (Again, if this is incorrect I will update this statement when I get that information.) That's a HUGE problem nobody is talking about. A human driver would have realized they just ran someone over and either called 911 or asked someone to do so. If there had been no passer-by, how many minutes would that pedestrian have been trapped under the car before help was summoned?

        The Cruise AV and its support team need to realize an injury has happened and take immediate action. It would be no surprise if the remote operators had no idea what the vehicle had run over. By the time they download and review video logs (or whatever) that pedestrian has been trapped under the vehicle for a while. That's not acceptable. They need to be able to do better.

        Cruise Safety Record

        The first pedestrian injury happened just over two weeks after the August 10th California PUC meeting that granted operating permits to Cruise. That crash was overshadowed by the apparent failure-to-yield crash with a fire truck on August 17th. That same night also saw another injury crash involving the driver of a different vehicle. So we are seeing a steady stream of injuries.

        Cruise blames crashes on other parties to the maximum degree possible, and ignores injuries where it is less than 50% at fault (there have been others; notably a very ill-advised left turn maneuver by a Cruise robotaxi that resulted in multiple injuries).  Safety is not achieved by blaming others. If Cruise vehicles are crashing and injuring people more often than other vehicles, then that is an increased rate of injury regardless of blame.

        A company with a responsible safety culture would be asking what they can do to reduce the risk of future injuries -- regardless of blame. We will have to wait to see the outcome of this NHTSA investigation, and whether Cruise proactively improves safety or waits for NHTSA to force the issue.

        As a note to likely responses to this analysis: comparisons to human driver errors are not productive. Indeed, another driver hit the pedestrian first in the second crash. But another driver being negligent does not forgive imprudent driving behavior from a robotaxi that is being relentlessly touted as safer than human drivers. They should be continuously improving, and our hope is that this analysis highlights areas that they and other robotaxi companies need to improve.

        Supporting Information


        Thursday, October 5, 2023

        Defining a Computer Driver for Automated Vehicle Accountability

        This 25-minute video talks about the concept of a Computer Driver to resolve the ambiguity and industry tapdancing we're seeing about who might be negligent for a crash when automation is driving the car. This creates a framework for both SAE Level 2 and Level 3 features that makes explicit whether the computer driver or the human driver has the duty of care for safety at any given time.

        Topics:

        • Why product liability is not enough to get the job done
        • Tort law for engineers
        • Assigning a duty of care to promote accountability
        • Implications of defining a computer driver
        • Duty of care for conventional, fully autonomous, and testing features
        • The awkward middle: transferring the duty of care for supervisory mode
        • The urgency of defining a computer driver while we wait for longer-term regulations
        Materials:

        YouTube Video




        Saturday, September 30, 2023

        Cruise publishes a baseline for their safety analysis

        Summary: a Cruise study suggests they are better than a young male ride hail driver in a leased vehicle. However, this result is an estimate, because there is not yet enough data to have a firm conclusion.


        I am glad to see Cruise release a paper describing the methodology for computing the human driver baseline, which they had not previously done. The same goes for their "meaningful risk of injury" estimation method. And it is good to see a benchmark that is specific to a deployment rather than a US average.

        Cruise has published a baseline study for their safety analysis here:
         blog post:  https://getcruise.com/news/blog/2023/human-ridehail-crash-rate-benchmark/
         baseline study: https://deepblue.lib.umich.edu/handle/2027.42/178179
        (note that the baseline study is a white paper and not a peer reviewed publication)

        The important take-aways from this in terms of their robotaxi safety analysis are:
        • The baseline is leased ride hail vehicles, not ordinary privately owned vehicles
        • The drivers of the baseline are young males (almost a third are below 30 years old)
        • A "meaningful risk of injury" threshold is defined, but somewhat arbitrary. They apparently do not have enough data to measure actual injury rates with statistical confidence. Given that we have seen two injuries to Cruise passengers so far (and at least one other injury crash), this is not a hypothetical concern.
        It should be no surprise if young males driving leased vehicles as Uber/Lyft drivers have a higher crash rate than other vehicles. That is their baseline comparison. In fairness, if their business model is to put all the Uber and Lyft drivers out of work, perhaps that is a useful baseline. But it does not scale to the general driving population.

        A conclusion that a Cruise robotaxi is safer (fewer injuries/fatalities) than an ordinary human driver is not quite supported by this study.
        • It is not an "average" human driver unless you only care about Uber/Lyft. If that is the concern, then OK, yes, that is a reasonable comparison baseline.
        • I did not see control for weather, time of day, congestion, and other conditions in the baseline. Road type and geo-fence were the aspects of ODD being used.
        • There is insufficient data to reach a conclusion about injury rates, although that data will accumulate fairly soon
        • We are a long way from insight into how fatality rates will turn out, since the study and Cruise have about 5 million miles and the San Francisco fatality rate is more like one per 100 million miles (see the back-of-envelope sketch below)
        • The Cruise emphasis on "at fault" crashes is a distraction from crash outcomes, which must necessarily include the contribution of defensive driving behavior (avoiding not-at-fault crashes)
        This study could support a Cruise statement that they are on track to being safe according to their selected criteria. But we still don't know how that will turn out. This is not the same as a claim of proven safety in terms of harm reduction.
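        To make the fatality-rate mileage gap concrete, here is a back-of-envelope Python sketch (my own illustration, not part of the Cruise study), assuming the roughly one-fatality-per-100-million-miles San Francisco baseline and the roughly 5 million Cruise miles quoted above. It shows that zero observed fatalities at that mileage tells us essentially nothing.

# Back-of-envelope: why ~5 million miles cannot support a fatality-rate claim
# when the human baseline is on the order of 1 fatality per 100 million miles.
import math

baseline_rate = 1 / 100e6   # assumed SF human-driver baseline (fatalities per mile)
fleet_miles = 5e6           # approximate Cruise mileage in the study

expected = baseline_rate * fleet_miles   # expected fatalities at human-level risk
p_zero = math.exp(-expected)             # Poisson probability of observing zero

print(f"expected fatalities at the human baseline: {expected:.2f}")
print(f"chance of zero fatalities even at exactly human-level risk: {p_zero:.0%}")

# Mileage needed before zero fatalities becomes statistically meaningful
# (P(zero | human-level risk) < 5%): roughly 300 million miles.
needed = -math.log(0.05) / baseline_rate
print(f"miles needed: {needed / 1e6:.0f} million")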

        A different report does not build a model and estimate, but rather compares actual crash reports for robotaxis with crash reports for ride hail cars. It comes to the conclusion that Cruise and Waymo crashed at 4 to 8 times the rate of average US drivers, but that their crash rate is comparable to ride hail vehicles in California.

        https://www.researchgate.net/publication/373698259_Assessing_Readiness_of_Self-Driving_Vehicles

        Thursday, September 28, 2023

        No, Mercedes-Benz will NOT take the blame for a Drive Pilot crash

        Summary: Anyone turning on Mercedes-Benz Level 3 Drive Pilot should presume they will be blamed for any crash, even though journalists are saying Mercedes-Benz will take responsibility.

        Driver playing Tetris while using Drive Pilot

        Drive Pilot usage while playing Tetris. (Source)

        There seems to be widespread confusion about who will take the blame if the shiny new Mercedes-Benz Drive Pilot feature is involved in a crash in the US. The media is awash with non-specific claims that amount to "probably Mercedes-Benz will take responsibility."  (See here, here, here, and here)

        But the short answer is: it will almost certainly be the human driver taking the initial blame, and they might well be stuck with it -- unless they can pony up serious resources to succeed at a multi-year engineering analysis effort to prove a design defect.

        This one gets complicated. So it is understandable that journalists on deadline simply repeat misleading Mercedes-Benz (MB) marketing claims without necessarily understanding the nuances. This is a classic case of "the large print giveth, and the small print taketh away" lawyer phrasing.  The large print in this case is "MERCEDES-BENZ TAKES RESPONSIBILITY" and the small print is "but we're not talking about negligent driving behavior that causes a crash." 

        The crux of the matter is that MB takes responsibility for product defect liability (which they have to anyway -- they have no choice in the matter). But they are clearly not taking responsibility for tort liability related to a crash (i.e., "blame" and related concepts), which is the question everyone is trying to ask them.

        Like I said, it is complicated.

        The Crash

        Here is a hypothetical scenario to set the stage. Our hero is the proud owner of a Drive Pilot equipped vehicle, and has activated the feature in a responsible way. Now they are merrily playing Tetris on the dashboard, reading e-mail, watching a movie, or otherwise using the MB-installed software according to the driver manual. The car is driving itself, and will notify our hero if it needs help, as befits an SAE Level 3 "self-driving" car feature.

        Another vehicle crashes ahead on the road, but that crashed vehicle ends up out of the flow of traffic. So our hero's car sees a clear road ahead and does not issue a takeover request to the driver. Our hero is currently engrossed in watching a Netflix movie on the dashboard (using the relevant MB-approved app), full of explosions in an action scene, and does not notice.

        Meanwhile, in the real world, a dazed crash victim child wanders out of their wrecked vehicle onto the roadway. But our hero's MB vehicle has a hypothetical design defect in that it can't detect children reliably. 

        Our hero's vehicle hits and kills that child. (I emphasize this is a hypothetical, but plausible, defect. The point is that we, and the owner, have no way of knowing if such a defect is present right now. Other crash scenarios will play out in a similar way.)

        Our hero then is charged by the police with negligent homicide (or the equivalent charge depending on the jurisdiction) for watching a movie while driving. Additionally a lawsuit for $100M is filed both against the driver and Mercedes-Benz for negligent driving under tort law. The judge determines that Mercedes-Benz did not have a duty of care to other road users under the relevant state law, so the tort lawsuit is changed to be just against the driver.

        What happens to our hero next? 

        Will MB step up and pay to settle the $100M lawsuit? Will they also volunteer to go to jail instead of our hero? They have not actually said they will do either of these things, because their idea of "responsibility" is talking about something entirely different.

        Tort Law

        I will say right here I am not a lawyer. So this is an engineer's understanding of how the law works. But we are doing pretty basic legal stuff here, so probably this is about right. (If you are a journalist, contact law professor William Widen for his take.)

        Tort law deals with compensation for a "harm" done to others. Put very simply, every driver has a so-called "duty of care" to other road users to avoid harming them. Failing to exercise reasonable care in a way that proximately causes harm can lead to a driver owing compensation under tort law. Unless there is a clear and unambiguous transfer of duty of care from our hero to the MB Drive Pilot feature, our hero remains on the hook under tort law.

        The problem is that the duty of care remains with our hero even after activating Drive Pilot. Pressing a button does not magically make a human driver (who is still required to remain somewhat attentive in the driver seat) somehow not an actual driver under current law.

        But, wait, you say. Mercedes says They Take Responsibility!  (This message is currently being splashed across the Internet in breathless reviews of how amazing this technology is. And the technology is truly amazing.)

        Well no, not according to tort law they don't take responsibility. Instead, the MB position is that the human driver retains the duty of care for potential harm to other road users when using their Level 3 Drive Pilot system -- even while playing Tetris. Their representative admitted this on stage a couple weeks ago in Vienna Austria. I was in the crowd to hear it. Quoting from Junko Yoshida in that article:  "Yes, you heard it right. Despite its own hype, Mercedes-Benz is saying that the driving responsibility still rests on the human driver in today’s L3 system."

        So what does MB mean when they "accept responsibility?"   The answer is -- they are not accepting the responsibility you think they are. In particular, they are not accepting liability that stems from negligent driving behavior.

        Product Defect/Liability

        A statement from Mercedes-Benz issued to the press makes it clear that MB accepts responsibility for product defects, but not tort law liability.  A key phrase is:  "In the context of Drive Pilot, this means that if a customer uses the system as intended and instructed and the system fails to perform as designed, we stand behind our product."  (Source here.)

        This means that in our hypothetical scenario, our hero can hire a lawyer, who then hires a team of engineers to look at the engineering of Drive Pilot to see if it "performs as designed." Expect someone to pay $1M+ for this effort, because this is a full-on, multi-year product defect investigation. Maybe they find a defect. Maybe they don't, even if a defect is really there. Or maybe they find a reproducible defect, and it is so complex the jury doesn't buy the story. And maybe the performance as designed is that sensors will only detect 98% of pedestrians, and the victim in this tragic scenario is just one of that unlucky 2%. 

        Perhaps the owner manual says our hero should have been alert to dazed crash victims wandering in the travel lane. Even while playing Tetris.  (That owner manual is not released yet, so we'll just have to wait and see what it says.) In which case MB will blame our hero for being negligent -- as if one is likely to notice a crash while watching the latest war movie on a high-end sound system. And what if the owner didn't bother to read the manual and instead just took the dealer and dozens of car journalists saying "MB accepts responsibility" at face value? (OK, well maybe that's a different lawsuit. But we're talking a legal mess here, not a clear acceptance of the duty of care for driving by MB.)

        Who is holding the bag here?

        It's probably our hero, and not Mercedes-Benz holding the bag. Even though our hero believed the MB marketing that said it was OK to play video games instead of looking out the front window, and believed their claims of superior engineering quality, and so on.

        Maybe a product defect can be found. Finding one might influence a jury in a tort law case to blame MB instead of our hero. (But that's not guaranteed. And what if the jury finds a reason to blame both?) Perhaps our hero has a robust umbrella insurance coverage instead of the state minimum that covers such a loss. Perhaps our hero is so broke they are "judgement proof" (no assets to collect -- but owning a Mercedes-Benz makes that less likely I'd think).

        In effect, what we'll see is that the human driver will almost certainly be presumed guilty of negligence in a crash if watching the dashboard instead of the road, even if the vehicle operational mode is telling them it is OK to do exactly that. The onus will be upon the driver to prove a design defect as a defense. This is difficult, expensive, time consuming, and overall not an experience I would wish on anyone.

        And there is the specter of criminal liability for negligent homicide (or the equivalent). Depending on the state, our hero might be charged criminally based on being the operator or even just the vehicle owner (even if someone else is driving!). It depends. Nobody really knows what will happen until we see a case go through the system. But outcomes so far in Level 2 vehicle automation cases are not encouraging for our hero's prospects.

        Driving a Level 3 vehicle is only for the adventurous

        Perhaps you believe any claim by MB that their feature will never cause a crash while engaged. Ever. Pinkie swear. But they have had recalls for software defects before, and there is no reason to believe this new, complex software is exempt from defects or malfunctions.

        We don't know how this will turn out until a few such crashes make their way through the court system. But there is little doubt that it will be a rough ride for both the drivers and the crash victims after a crash while we see how this sorts out.

        Anyone turning on Drive Pilot should presume they will be blamed for any crash, no matter what Mercedes-Benz says. As even MB admits, the human driver retains the duty of care for safety to other road users at all times, just as in a Level 2 "autopilot"-style system. Marketing statements made by MB about giving time back to the driver don't change that.

        Who knows, maybe MB will decide to stand behind their product and pay out on behalf of customers who find themselves embroiled in tort law cases. But they have not said they will do so. And maybe you believe that MB engineers are so good that this particular software will have no defects AND will never mistakenly cause a crash. But that's not a set of dice I'm eager to roll.

        This legal ambiguity is completely avoidable by establishing a statutory duty of care for computer drivers that assigns the manufacturer as the responsible party under tort and criminal law. But until that happens, our hero is going to be left holding the bag.

        (Things get even worse if you want to dig into the complexities of the handoff process... but that is a story for another post.)


        Update --  The MB Drive Pilot US manual is out, and it doubles down on these issues.  Drivers are apparently required to notice "irregularities .. in the traffic situation" -- which means paying attention to the road.  And for "obvious circumstances"

        Wednesday, September 27, 2023

        Five questions cities should ask when robotaxis come to town

        If your city is getting robotaxis, here are five questions you should be asking yourself if you are in local government or an advocate for local stakeholders. Some of the answers to these questions might not be easy if you are operating under a state-imposed municipal preemption clause, which is quite common. But you should at least think through the situations up front instead of having to react later in crisis mode.

        (There might well be benefits to your city. But the robotaxi companies will be plastering them over as much media as they can, so there is no real need to repeat them here. Instead, we're going to consider things that will matter if the rollout does not go ideally, informed by lessons San Francisco has learned the hard way.)

        Local citizens have questions for the robotaxi / Dall-E 2

        (1) How will you know that robotaxis are causing a disruption?

        In San Francisco there has been a lot of concern about disruption of fire trucks, emergency response scenes, blocked traffic, and so on. The traffic blockages show up pretty quickly in other cities as well. While companies say "stopping the robotaxi is for safety," that is only half the story. A robotaxi that stays stopped for tens of minutes causes other safety issues, such as blocking emergency responders, as well as disrupting traffic flow.

        How will you know this is happening? Do you plan to proactively collect data from emergency responders and traffic monitoring? Or wait for twitter blow-ups and irate citizens to start coning cars? What is your plan if companies cause excessive disruption?

        (2) How will you share data with companies that they should be using to limit testing?

        For example, you might wish to ask companies not to test near parades, first amendment events, active school zones, or construction areas. Is it easy for companies to access this information, preferably electronically? Are they interested in doing that? Will they follow your requested testing/operational exclusion areas? What do you plan to do if they ignore your requests to restrict testing in sensitive areas?

        (3) How will you ensure testing and operational equity?

        What if disruption, testing incidents, and other issues are concentrated in historically disadvantaged areas? (This might happen due to company policy, but might instead be an unintended emergent result due to higher-than-average population density and emergency response activity in such areas.)

        How will you know whether exposure to testing risk is being imposed in an equitable manner, especially if robotaxi companies claim that where they test is a trade secret?

        If robotaxis are being sold based on service to the disabled and for other social goods, how will you be able to measure whether companies are living up to their promises?

        (4) How will you issue traffic tickets to a robotaxi?

        Some states require a moving violation citation to be issued to a natural person, but robotaxis don't have a driver. Consider proactively moving to get state-level regulations fixed sooner rather than later to correct this. Without the ability to ticket robotaxis you might find yourself without viable enforcement tools for the worst robotaxi behaviors that might occur.

        (5) How can you productively engage with companies despite municipal preemption laws?

        Apparently in some cities there is a good working relationship between city government and robotaxi operators. In San Francisco the city and the companies are practically at war. There are no magic solutions, but trying hard up front to build bridges before things get tense is better than reacting to bad news.

        -----------------------------