
Correct use of terms: regression, bug, glitch, testing, and beta

I just saw another misuse of critical system terminology today by someone working on scholarly publications, who picked up that habit while working with an autonomous vehicle company. The misuse and abuse of terminology to desensitize people to life-critical system defects gets worse all the time. Perhaps it is time to refresh vocabulary for those who have only heard the misuses and don't realize they sound dumb as a box of rocks when they simply repeat what they hear others saying at work. (Caution -- pet peeve meets yelling at clouds here, because the tech industry has invested a decade degrading the meaning of these terms for PR value. If that bothers you, just move on to the next posting...) Here are some key terms in play. (And it's not just me -- I point to Wikipedia entries for each.) Regression: This is not the general term for "bug". It is a very specific defect in which a previously operational feature stops working now. Even further back, it wa…

The case for AVs being 10 to 100 times safer than human drivers

There is a case to be made that at-scale AV deployments should be at least ten times safer than human drivers, and perhaps even safer than that. The rationale for this large margin is leaving room for the effects of uncertainty by incorporating a safety factor of some sort. [1] Consider all the variables and uncertainty discussed in this chapter. We have seen significant variability in fatality and injury rates for baseline human drivers depending on geographic area, road type, vehicle type, road user types, driver experience, and even passenger age. All those statistics can change year by year as well. Additionally, even if one were to create a precise model for acceptable risk for a particular AV's operational profile within its ODD, there are additional factors that might require an increase:
- Human biases to both want an AV safer than their own driving and to over-estimate their own driving ability, as discussed in a previous section. In short, drivers want an AV dri…
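As a rough illustration of what such a safety factor means in practice, here is a minimal back-of-the-envelope sketch (not from the post itself). It assumes a human-driver baseline on the order of 1.3 fatalities per 100 million vehicle miles, roughly the recent US average; the baseline and the 10x/100x factors are illustrative assumptions.

```python
# Back-of-the-envelope sketch: what "10x to 100x safer than human drivers"
# implies for an acceptable AV fatality rate. Numbers are illustrative
# assumptions, not figures from the post.

HUMAN_FATALITIES_PER_100M_MILES = 1.3  # assumed approximate US baseline

def acceptable_av_rate(human_rate_per_100m: float, safety_factor: float) -> float:
    """Acceptable AV fatality rate (per 100M miles) given a safety factor."""
    return human_rate_per_100m / safety_factor

for factor in (10, 100):
    rate = acceptable_av_rate(HUMAN_FATALITIES_PER_100M_MILES, factor)
    miles_per_fatality = 100e6 / rate
    print(f"{factor:>3}x safer: <= {rate:.3f} fatalities per 100M miles "
          f"(about one per {miles_per_fatality / 1e9:.1f} billion miles)")
```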

The Tesla Autopilot Crashes Just Keep Coming

Picture from a Tesla AP-related crash just before impact into a disabled vehicle (video here on Twitter). Tesla Autopilot crashes are still happening when drivers (apparently) succumb to automation complacency. It seems they've just stopped being news. The above picture is from a Tesla camera a fraction of a second before impact. (Somehow it seems there was no injury.) The Tesla was said to have initiated AEB (and disabled AP) about two seconds before impact. The video shows a clear sightline to the disabled vehicle for at least 5 seconds, but the driver apparently did not react. Tesla fans can blame the driver all they want -- but that won't stop the next similar crash from happening. Pontificating about personal responsibility and how the driver should have known better won't change things either. And we're far, far past the point where "education" is going to move the needle on this issue. It's time to get serious about:
- Requiring effective driver mo…

Computer-Based System Safety Essential Reading List

Don't miss my permanent page on this blog! Here is the link: Computer-Based System Safety Essential Reading List

Holiday AV Safety Video Viewing

Daily video & reading suggestions for the holiday season for those into autonomous vehicle safety. Primarily based on new materials from 2022 that you might have missed.

Dec. 22: PBS Frontline: Boeing 737 Max: Boeing's Fatal Flaw. If there is a loss of independence in safety oversight, bad things will happen. Sometimes they take a while to show up, but they will happen. A cautionary tale about the erosion of Boeing's engineering safety culture. YouTube video: https://youtu.be/wXMO0bhPhCw

Dec. 23: Phil Koopman, Trust & Governance for Autonomous Vehicle Deployment. Reports of degraded trust in tech companies in general, and especially those deploying autonomous vehicles, are widespread. Here's a look at why, and how we might fix that. YouTube: https://youtu.be/hZQyFc9ETCE  Blog info: https://safeautonomy.blogspot.com/2022/01/trust-governance-for-autonomous-vehicle.html

Dec. 24: Peter Norton, Fighting Traffic. How the automotive industry repurposed streets, people'…

I have to get out NOW from my autonomous vehicle: urgent egress and passenger overrides

What if you need to get out of the vehicle RIGHT NOW in a robotaxi -- is that allowed? What are the implications? In any automated system there will be times when an occupant wants to override the automation, especially when they want to exit a moving automated vehicle. Reasons might include: wanting to re-open transit vehicle doors if a passenger was unable to exit in time at their stop; an attack of claustrophobia; wanting to get away from another passenger due to personal safety concerns; or even needing to escape a cabin fire. Some egress requests might constitute misuse or abuse, such as stopping a vehicle to intentionally block traffic, or intentionally accessing an off-limits area such as a bridge with no pedestrian infrastructure. Creating a complete list of all possible motivations is difficult, and weighing the merits of all such egress attempts in advance seems intractable. Nonetheless, there are times when a passenger's desire to exit a moving vehicle should be honored…

SCSC Talk: Bootstrapping Safety Assurance

Bootstrapping Safety Assurance

Abstract: The expense and general impracticability of doing enough real-world testing to demonstrate safety for autonomous systems motivates finding some sort of shortcut. A bootstrapped testing approach is often proposed, using evidence from initial mishap-free testing to argue that continued testing is safe enough. In this talk I'll explain why pure bootstrapping based on testing exposure, as well as arguments involving "probably perfect" bootstrapping, exposes public road users to undue risk. Moreover, phased deployments often used to argue safe update release have the same problem. An approach that bootstraps on the safety case rather than on vehicle testing is proposed as a potentially better alternative. While the examples given involve autonomous ground vehicles, the principles involved apply to any argument that safety will be demonstrated via a bootstrap testing process. This talk was recorded as part of the SCSC Future o…
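For context on why testing-only bootstrapping is impractical, here is a minimal sketch of the standard zero-mishap exposure calculation; the exponential model and the example target rate are assumptions for illustration, not material from the talk.

```python
# Sketch of the standard zero-mishap testing argument (not taken from the talk):
# to claim a mishap rate no worse than lambda with confidence C using only
# mishap-free exposure, an exponential model requires roughly
#   T >= -ln(1 - C) / lambda
# miles of failure-free operation (about 3/lambda at 95% confidence).

import math

def required_exposure_miles(target_rate_per_mile: float, confidence: float) -> float:
    """Mishap-free miles needed to support the target rate at the given confidence."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

# Illustrative target: one mishap per 1 billion miles (assumed, not from the talk)
target = 1.0 / 1e9
for conf in (0.90, 0.95, 0.99):
    miles = required_exposure_miles(target, conf)
    print(f"{conf:.0%} confidence: ~{miles / 1e9:.1f} billion mishap-free miles")
```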