Friday, May 27, 2022

Cruise robotaxi struggles with real-world emergency vehicle situation

A Cruise robotaxi failed to yield effectively to a fire truck, delaying it.

Sub-headline: Garbage truck driver saves the day when Cruise autonomous vehicle proves itself to not be autonomous.

The referenced article explains the incident in detail: a garbage truck was blocking one lane, and the Cruise vehicle pulled over into a position that did not leave enough room for the fire truck to pass. But the article also argues that things like this should be excused because they happen in the cause of developing life-saving technology. I have to disagree. Real harm done now to real people should not be balanced against theoretical harm potentially saved in the future. Especially when there is no reason (other than business incentives) to be doing the harm today, and the deployment continues once it is obvious that near-term harm is likely.


I would say that if the car can't drive in the city like a human driver, it should have a human driver ready to take over when it can't. Whatever remote system Cruise has is clearly inadequate, because we've seen three problems recently (the article mentions them; an important one is driving with headlights off at night, and Cruise's reaction to that incident). The article attributes the root cause of this incident to Cruise not having worked through all interactions with emergency vehicles, which is a reasonable analysis as far as it goes. But why are they operating in a major city with half-baked emergency vehicle interaction?


This time no major harm was done because the garbage truck driver was able to move that vehicle instead. (Realize that a garbage truck stopped in-lane with no driver in the vehicle is business as usual for that industry, as a human driver would know.) The fire did in fact cause property damage and injuries, so a longer delay could have made things a lot worse if not for a quick-acting truck driver. (Who, by the way, has my admiration for acting quickly and decisively.) What if the garbage truck had been a disabled vehicle, or had been in the middle of an operation during which it could not be moved? Then the fire truck would have been stuck. The article says the situation was complex, but driving in the real world is complex. I've personally been in situations where I needed to do something unconventional to let an emergency vehicle pass. A competent human driver understands the situation and acts. Yep, it's complex. If you can't handle complex, don't get on the road without a human backup driver.


The safety driver should not be removed until the vehicle can conform 100% to safety-relevant traffic laws and practically resolve related situations such as this one. "Testing" without a human safety driver when the vehicle isn't safe is not testing -- it's just plain irresponsible. There is no technical reason preventing Cruise from keeping a safety driver in their vehicles while they continue testing. Doing so wouldn't delay the technology development in the slightest -- if what they care about is safety. If the safety driver literally has to do nothing, you've still done your testing, and your multi-billion dollar company is out a few bucks for safety driver wages. If safety is truly #1, why would you choose to cut costs and remove that safety driver when you know your system isn't 100% safe yet? Removing the safety driver is pure theater playing to public opinion and, one assumes, investors.


Cruise says that they apply a Safety Management System (SMS) "rigorously across the company." A main point of an SMS is to recognize operational hazards and alter operations in response to discovered hazards. In this case, it is clear that interaction with emergency vehicles requires more sophistication and presents a public safety hazard as currently implemented. Safety drivers should go back into the vehicles until they fix all such issues (not just this one particular interaction scenario) and their vehicle can really drive safely. Unless they simply decide that letting fire trucks on the way to a burning building pass is low priority for them.


Cruise is lucky the delayed fire truck arrival did not contribute to a death -- this time. This incident happened at 4 AM, which shows that even in a nearly empty city you need a very sophisticated driver to avoid safety issues. At the very least they should halt no-human-driver operations until they can attest that they can handle every possible emergency vehicle interaction without causing more delay to the emergency vehicle than a proficient human driver would, including situations in which a human driver would normally get creative to allow emergency vehicle progress. City officials wrote in a filing to the California Public Utilities Commission: "This incident slowed SFFD response to a fire that resulted in property damage and personal injuries," and were concerned that frequent in-lane stops by Cruise vehicles could have a "negative impact" on fire department response times. Every safety related incident needs to be addressed. It is a golden opportunity to improve before you get unlucky. Cruise says they have a "rigorous" SMS, but I'm not seeing it. Will Cruise learn? Or will they keep rolling dice without safety drivers? Cruise shouldn't wait for something worse to happen before getting the message that they need to do better if they want to operate without a safety driver.


Wednesday, May 25, 2022

Tesla emergency door releases -- what a mess!

The Tesla manual door releases -- and their absence in some cases -- present unreasonable risk. What in the world were they thinking? This is really bad human interface design. Cool design shouldn't come at the expense of life-critical safety. An article this week sums up the latest, but this has been going on for a long time.

Tesla fans seem to be saying that it is the driver's responsibility to know where the manual release latch is in order to escape in case of fire. Anyone who doesn't know has been ridiculed online (including after past fires) for not knowing where the manual release is hidden. Even if they died because they could not operate the control, or had to kick a window out, somehow they are the idiots and it is their fault, not Tesla's. (If someone you love has died or been injured in this way, you have my sympathy, and it is the trolls who are idiots, not your loved one.)

Online articles saying "here's how to operate the door release so you don't die in a Tesla fire" tell you there is a problem. This design is unreasonably risky for real-world use. A "bet you didn't know -- so here is how to not die" article in social media means there is unacceptable risk. Example: "Tesla Model Y fire incident: remember, there's a manual door release, here's how to use it in an emergency."

For the front doors, you have to lift a not-particularly-obvious lever in front of the window switches that is easy to miss if you don't know it is there. Maybe you'd manage if you have used it a few times -- but if you never realized it is there, or you have rented or borrowed the car, good luck with that. I'd probably have trouble finding it even if I weren't suffocating from the smoke of a battery fire. (Have you ever had to consult the owner's manual to find your hood release? Imagine doing that to find out how to open the door when your car is literally on fire -- oh, but if it is an electronic manual and you've lost power, you can't pull it up on the center console, can you?)

And if you're a passenger and the driver is unconscious, you will have issues. Etc. Do you read all the safety instructions in the owner's manual when you catch a quick ride as a passenger with a friend? Does your friend brief you on escape safety features before a 5-minute ride so you can exit in an emergency? Thought not.

But wait, there's more:

  • Model S rear door: "fold back the edge of the carpet" to find a pull cable
  • Model X falcon wing doors: "carefully remove the speaker grille from the door and pull the mechanical release cable..."
  • Model 3 rear door -- NOT EQUIPPED WITH MANUAL RELEASE (from manual: "Only the front doors are equipped with a manual door release")
So I guess the passengers in the back are kind of expendable. For many that will be the kids.

This is stunningly bad human interface design. It is entirely unreasonable to expect an ordinary car owner to know where a hidden, non-obvious emergency control is and to activate it while trapped inside a burning car. Let alone passengers. All apparently without mandatory training and mandatory periodic refresher training.


Anyone who thinks it is reasonable to expect someone not trained in military/aviation/etc. to get this right probably has not served or been through that type of training. I have been through tons of training. Emergency drills that might give some nightmares (sealed inside a tank with broken pipes and told to plug the flooding is extra-special). And a few times the real thing. Not always with perfect execution, because there is compelling data showing humans suck at performing complicated, non-reflex-trained tasks under stress (and thus, more practice, more drills). After all that, I wouldn't want to risk my life on this hot mess of an egress system. 

Education and shaming won't prevent the next death from this unreasonable risk. 

I can't imagine why NHTSA wouldn't want to do a recall on this.

(To the extent this is true of other brands that is equally problematic. I don't have info on them.)

EDIT: a LinkedIn commenter pointed me to this story about a Corvette fatality related to a similar issue. From what I can tell from Corvette repair parts listings, there is a clearly marked egress pull on the floorboards. So not ideal, and possibly difficult to see if you are already in the seat. Worth reconsidering. But it is not literally hidden (or missing) as in Teslas, and Corvettes are certainly not being sold as family cars. Perhaps now that Tesla has pushed the envelope past any reasonable limits, it's time for standards on egress actuator visibility and accessibility.

This has been a known issue at least since a 2019 crash, summarized here: https://www.autoblog.com/2019/02/28/tesla-fiery-crash-closer-look-door-locks/     That fatality also had to do with door handles not popping up after a crash, so a rescuer was unable to open doors from the outside. It's time to pay attention before more people get trapped inside burning cars.

Friday, May 20, 2022

A gentle introduction to autonomous vehicle safety cases

I recently ran into this readable article about AV safety cases by Thomas & Vandenberg from 2019. While things have changed a bit, it still is a reasonable introduction for anyone asking "what exactly would an AV safety case look like."

A real industry-strength safety case is going to be complicated in many ways. In particular, there are many different approaches to breaking down the top-level goal G1, and that choice will significantly shape the rest of the case. On the other hand, all the pieces will need to be there somewhere, so choosing this high-level breakdown is more of an architectural choice (for the safety case, not necessarily the system). We do not yet have a consensus on an optimal strategy for building such safety cases, but this is not a bad starting place from safety folks who were previously at Uber ATG.
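To make the idea of a G1 breakdown concrete, here is a toy sketch of a goal-structuring-notation (GSN) style claim tree in Python. The sub-goal wording and evidence names are my own illustrative assumptions, not the structure from the Thomas & Vandenberg paper; the point is that a safety case is a tree of claims in which every leaf must ultimately be supported by evidence.

```python
# Illustrative sketch only: a GSN-style goal tree where leaf goals need
# evidence. Goal wording and evidence labels are hypothetical examples,
# not taken from any published safety case.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    claim: str
    evidence: List[str] = field(default_factory=list)
    subgoals: List["Goal"] = field(default_factory=list)


def unsupported(goal: Goal) -> List[str]:
    """Return claims of leaf goals that lack supporting evidence."""
    if not goal.subgoals:
        return [] if goal.evidence else [goal.claim]
    gaps: List[str] = []
    for sub in goal.subgoals:
        gaps += unsupported(sub)
    return gaps


# Hypothetical top-level breakdown of G1 for an AV:
g1 = Goal(
    "G1: Vehicle is acceptably safe within its ODD",
    subgoals=[
        Goal("G2: Known hazards are mitigated",
             evidence=["hazard log HL-1"]),
        Goal("G3: Risk from unknown hazards is managed",
             subgoals=[Goal("G3.1: SPIs monitor field operation")]),
    ],
)

print(unsupported(g1))  # claims still needing evidence
```

An auditor (or a tool) walking the tree this way immediately surfaces which branches of the chosen G1 breakdown are still argument-only, which is one reason the high-level decomposition is such a consequential architectural choice.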

Thomas & Vandenberg, Harnessing Uncertainty in Autonomous Vehicle Safety, Journal of System Safety, Vol. 55, No. 2 (2019)

https://doi.org/10.56094/jss.v55i2.46


(Uber ATG also published a much more complex safety case. However, I recommend this overview paper rather than that more complex safety case to get insight if you are just getting started.)

Wednesday, May 18, 2022

SEAMS Keynote talk: Safety Performance Indicators and Continuous Improvement Feedback

Abstract: Successful autonomous ground vehicles will require a continuous improvement strategy after deployment. Feedback from road testing and deployed operation will be required to ensure enduring safety in the face of newly discovered rare events. Additionally, the operational environment will change over time, requiring the system design to adapt to new conditions. The need for ensuring life critical safety is likely to limit the amount of real time adaptation that can be relied upon. Beyond runtime responses, lifecycle safety approaches will need to incorporate significant field engineering feedback based on safety performance indicator monitoring.

A continuous monitoring and improvement approach will require a fundamental shift in the safety world-view for automotive applications. Previously, a useful fiction was maintained that vehicles were safe for their entire lifecycle when deployed, and any safety defect was an unwelcome surprise. This approach too often provoked denial and minimization of the risk presented by evidence of operational safety issues so as to avoid expensive recalls and blame. In the future, the industry will need to embrace a model in which issues are proactively detected and corrected in a way that avoids most loss events, and that uses field incident data as a primary driver of improvement. Responding to automatically generated field incident reports to avoid later losses should be a daily practice in the normal course of business rather than evidence of an engineering mistake for which blame is assigned. This type of engineering feedback approach should complement any on-board runtime adaptation and fault mitigation.
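The field-feedback loop described above can be sketched in a few lines: compare an observed safety performance indicator (SPI) against a target rate, and open an engineering investigation when the target is breached, before a loss event occurs. This is a minimal illustration under assumed names, thresholds, and incident formats, not a description of any particular company's pipeline.

```python
# Hypothetical sketch of SPI-driven continuous improvement: field incident
# reports are tallied against operational exposure, and a breach of the
# SPI target triggers engineering follow-up rather than blame.
# All names, rates, and report formats here are illustrative assumptions.
from typing import List


def spi_breached(incidents: List[str], exposure_km: float,
                 target_rate_per_km: float) -> bool:
    """True if the observed incident rate exceeds the SPI target rate."""
    observed_rate = len(incidents) / exposure_km
    return observed_rate > target_rate_per_km


# e.g. emergency-vehicle yield failures seen in automatic field reports
reports = ["EV-yield-fail #101", "EV-yield-fail #102"]

if spi_breached(reports, exposure_km=50_000, target_rate_per_km=1e-5):
    print("SPI breached: open investigation, adjust design/ODD/operations")
```

The key shift is cultural as much as technical: in this model a breached SPI is routine input to daily engineering work, not an admission of fault, which is exactly the world-view change the abstract argues for.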





Thursday, May 12, 2022

ICSE keynote: Autonomous Vehicles and Software Safety Engineering

Abstract: Safety assurance remains a significant hurdle for widespread deployment of autonomous vehicle technology. The emphasis for decades has been on getting the technology to work well enough on everyday situations. However, achieving safety for these life-critical systems requires more. While safety encompasses correct operation for the mundane, it also requires special attention to mitigating the risk presented by rare but high consequence potential loss events. In this talk I'll cover some history of autonomous vehicle development and safety at the Carnegie Mellon National Robotics Engineering Center that led over the years to the development of the ANSI/UL 4600 standard for autonomous vehicle safety. I'll also touch upon activities specific to safety engineering, why a heavy tail distribution of rare events makes ensuring safety so difficult, why brute force road testing won't ensure safety, and the emergence of safety assurance cases as the approach of choice for autonomous vehicle safety.