The TuSimple crash raises safety culture questions

A WSJ scoop confirms the TuSimple April 6th crash video and the FMCSA investigation. TuSimple blames human error, but it sounds more like a #moralcrumplezone situation, with serious safety culture concerns if the story reflects what is actually going on there.

A left turn command was left pending from a disengagement minutes earlier. When the system was re-engaged, it started executing that sharp left turn while at 65 mph on a multi-lane highway. That resulted in crossing another traffic lane and a shoulder, then hitting a barrier. The safety driver reacted as quickly as one could realistically hope, but was not able to avoid the crash. A slightly different position of an adjacent vehicle would have meant crashing into another road user (a white pickup truck in an adjacent lane can be seen in the video).

Blame should not be a factor in AV incident reports

A proposal: any mention of blame in an autonomous vehicle incident report should immediately discredit the reporting entity's claims of caring about safety and tar it with the brush of "safety theater."

Reading mishap reports from California and NHTSA data, it is obvious that many AV companies are trying as hard as they can to blame anyone and anything other than themselves for crashes. That's not how you get safety; that's how you do damage control. Sure, in the near term people might buy the damage control, and for many who just see a mention it sticks permanently (think of the "pedestrian jumped out of the shadows" narrative for the Uber ATG fatality, which is the opposite of what actually happened). But that only buys short-term publicity at the expense of degrading long-term safety. If companies spin and distort safety narratives after each crash, they do not deserve trust for safety. If a crash report sounds like it was written by lawyers defending the company rather than engineers trying to prevent the next crash, it is damage control, not safety.

PA Autonomous Vehicle Testing Legislation Still Needs Work

PA HB 2398 would legalize autonomous vehicles (AVs) without human drivers in Pennsylvania. Having passed the PA House, it is pending in the PA Senate Transportation Committee. While the bill has improved, my 25 years of experience working on AV safety at Carnegie Mellon University leave me with significant remaining concerns:

- A municipal preemption clause would prevent Pittsburgh from restricting the testing of immature self-driving vehicle technology in active school zones and other high-risk locations.
- A loophole regarding vehicles "approved for noncommercial use" apparently exempts most AVs from certification when using conventional vehicle retrofits, potentially rendering the bill toothless.
- Test drivers are not required to conform to an established industry standard for testing safety, as is done elsewhere in the US.

Argo AI is

PA House HAV bill progress & issues

This past week the PA House Transportation Committee significantly revised and then passed a bill granting sweeping authority to operate Highly Automated Vehicles (HAVs) in the Commonwealth of Pennsylvania. That includes light vehicles, heavy trucks, and platoons of heavy trucks. This bill has evolved over time and seems a better candidate for a final law than the older, much more problematic Senate bill.

It has some good points compared to what we've seen in other states, such as an insurance minimum of $1M and placing PennDOT in regulatory control instead of Public Safety. By way of contrast, in other states the State Police are in charge of regulating (they have no real ability to do so, and realize this, but the HAV industry pushed to have it this way), and insurance minimums are as low as $25K or $50K. So we're doing better than some other states.

The PA bill establishes an advisory committee, but it is unclear whether it will have much power, and its current mandate is

Cruise robotaxi struggles with real-world emergency vehicle situation

A Cruise robotaxi failed to yield effectively to a fire truck, delaying it. Sub-headline: Garbage truck driver saves the day when Cruise autonomous vehicle proves itself to not be autonomous.

The referenced article explains the incident in detail: a garbage truck was blocking one lane, and the Cruise vehicle pulled over into a position that did not leave enough room for the fire truck to pass. But the article also argues that incidents like this should be excused because they occur in the cause of developing life-saving technology.

I have to disagree. Real harm done now to real people should not be balanced against theoretical harm potentially saved in the future, especially when there is no reason (other than business incentives) to be doing the harm today, and the deployment continues once it is obvious that near-term harm is likely. I would say that if the car can't drive in the city like a human driver, it should have a human driver to take over when the car can't. Whatever rem

Tesla emergency door releases -- what a mess!

The Tesla manual door releases, and the lack thereof in some cases, present unreasonable risk. What in the world were they thinking? This is really bad human interface design: cool design shouldn't come at the expense of life-critical peril. This article this week sums up the latest, but this has been going on for a long time.

Tesla fans seem to be saying that it is the driver's responsibility to know where the manual release latch is to escape in case of fire. Anyone who doesn't know is ridiculed on-line (and has been, in past fires) for not knowing where the manual release is hidden. Even if they died due to not successfully operating the control, or having to kick the window out, somehow they are the idiots and it is their fault, not Tesla's. (If someone you love has died or been injured in this way you have my sympathy; it is the trolls who are idiots, not your loved one.) On-line articles saying "here's how to operate the door release so you don't die in a Tesla fire" keep appearing.

A gentle introduction to autonomous vehicle safety cases

I recently ran into this readable article about AV safety cases by Thomas & Vandenberg from 2019. While things have changed a bit, it is still a reasonable introduction for anyone asking "what exactly would an AV safety case look like?"

A real industry-strength safety case is going to be complicated in many ways. In particular, there are many different approaches for breaking down the top-level goal G1, and that choice will significantly affect everything below it. On the other hand, all the pieces will need to be there somewhere, so choosing this high-level breakdown is more of an architectural choice (for the safety case, not necessarily the system). We do not yet have a consensus on an optimal strategy for building such safety cases, but this is not a bad starting place from safety folks who were previously at Uber ATG.

Thomas & Vandenberg, "Harnessing Uncertainty in Autonomous Vehicle Safety," Journal of System Safety, Vol. 55, No. 2 (2019). (Uber ATG also published a much more detailed safety case framework.)
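To make the "architectural choice" point concrete, here is a minimal sketch of a safety case as a goal tree in the GSN style. The goal IDs and claims below are hypothetical illustrations I made up for this post, not the actual Thomas & Vandenberg structure; the point is simply that G1 can be decomposed in several equally plausible ways, and whichever breakdown is chosen, the leaf goals are where supporting evidence must eventually attach.

```python
# Sketch of a GSN-style safety case as a nested goal tree.
# The decomposition of G1 shown here is one hypothetical example,
# not the structure from the Thomas & Vandenberg paper.

from dataclasses import dataclass, field


@dataclass
class Goal:
    gid: str
    claim: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Return the undeveloped (leaf) goals that still need evidence."""
        if not self.children:
            return [self]
        out = []
        for child in self.children:
            out.extend(child.leaves())
        return out


# One possible top-level breakdown of G1; decomposing instead by
# operational domain, by hazard, or by lifecycle phase would be
# equally valid architectural choices for the safety case.
g1 = Goal("G1", "The AV is acceptably safe to operate in its ODD", [
    Goal("G2", "Hazards are identified and adequately mitigated"),
    Goal("G3", "The system behaves safely in nominal driving"),
    Goal("G4", "The system responds safely to faults and ODD exits"),
])

print([g.gid for g in g1.leaves()])  # → ['G2', 'G3', 'G4']
```

Each leaf printed here would, in a real safety case, be further decomposed until it bottoms out in concrete evidence (test results, analyses, field data), which is where most of the actual work lives.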