
Friday, November 18, 2022

The effect of AV company business models on safety

The business model and exit plan for an AV company can powerfully incentivize behavior that is at odds with public safety and transparency. This is probably not news regarding any private company, but it is especially a problem for AV safety.


An AV developer with a plan to develop, deploy, and sustain their technology over the long term should be incentivized to reach at least some level of safety, subject to all the ethical issues discussed already in this chapter. If they do not, they will probably not have a viable long-term business. Arguments for a light regulatory touch often rest on the claim that companies will act in their own long-term best interest. But what if the business incentive model is optimized for something shorter than “long-term” outcomes?

Short-term aspects of the business objectives and the business structure itself can add pressure that erodes any commitment to acceptable safety. Factors include at least the following, several of which can interact with each other:

• Accepting money from traditional venture capital sources can commit a company to a five-year timeline to produce products. Thus far we have seen that five-year timelines are far too aggressive for developing and deploying an AV at scale. Re-planning and raising more funding can lengthen the timeline, but there remains a risk that funding incentivizes aggressive milestones that demonstrate increased functionality (in particular, removing safety drivers) rather than rewarding core work on safety. Some companies will likely be better at resisting this pressure than others.

• A business exit plan of an Initial Public Offering (IPO), going public via a Special Purpose Acquisition Company (SPAC), or being bought out by a competitor has historically emphasized perceived progress on functionality rather than safety. If the exit plan is to make safety someone else’s problem post-exit, it is more difficult to justify spending resources on safety rather than functionality until the company goes public.[1]

• The AV industry as a whole takes an aggressively non-regulatory posture, with that policy approach historically enabled by US DOT.[2] This situation imposes little, if any, accountability for safety until crashes happen on public roads. At least some companies seem to treat safety more as a public relations and risk management function than as a substantive safety engineering function. Short-term incentives can align with such a dysfunctional approach.

• Founders of AV companies with a primarily research, consumer software, or other non-automotive background might not appreciate what is involved in safety at scale for such systems. They might earnestly – but incorrectly – believe that removing bugs automatically bestows safety, or otherwise have a limited view of the different aspects of safety discussed in chapter 4. They might also earnestly believe some of the incorrect talking points discussed in section 4.10 regarding safety myths promoted by the AV industry.

• The mind-boggling amount of money at stake and the potential winnings for participants in this industry would make it difficult for anyone to stay the course on ensuring safety in the face of rich rewards for expediency and ethical compromise, no matter how pure of spirit and well intentioned they might be.

It is impossible to know the motivations, ethical framework, and sincerity of every important player in the AV industry. Many participants, especially rank and file engineers, are sincere in their desire to build AVs and believe they are helping to build a better, safer world. Regardless of that sincerity, it is important to have checks and balances in place to ensure that those good intentions translate into good outcomes for society.

One has to assume that outcomes will align with incentives. Without checks and balances, dangerous incentives can be expected to lead to dangerous outcomes. Checks and balances need to be a combination of internal corporate controls and government regulatory oversight. A profit incentive is insufficient to ensure acceptable safety, especially if it is associated with a relatively short-term business plan.


[1] Money spent on safety theater to impart an aura of safety is a different matter, and spending in that area can bring a good return on investment. But we are talking about real safety here. In the absence of a commitment to conform to industry safety standards, it can be difficult to tell the difference without a deep dive into company practices and culture.

[2] The official US Department of Transportation policy still in effect at the time of this writing states: “In this document, NHTSA offers a nonregulatory approach to automated vehicle technology safety.” See page ii of: https://www.nhtsa.gov/document/automated-driving-systems-20-voluntary-guidance

Sunday, November 13, 2022

Book: How Safe is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety

How Safe Is Safe Enough for Autonomous Vehicles? 
The Book


The most pressing question regarding autonomous vehicles is: will they be safe enough? The usual metric of "at least as safe as a human driver" is more complex than it might seem. Which human driver, under what conditions? And are fewer total fatalities OK even if it means more pedestrians die? Who gets to decide what safe enough really means when billions of dollars are on the line? And how will anyone really know the outcome will be as safe as it needs to be when the technology initially deploys without a safety driver?

This book is written by an internationally known expert with more than 25 years of experience in self-driving car safety. It covers terminology, autonomous vehicle (AV) safety challenges, risk acceptance frameworks, what people mean by "safe," setting an acceptable safety goal, measuring safety, safety cases, safety performance indicators, deciding when to deploy, and ethical AV deployment. The emphasis is not on how to build machine learning based systems, but rather on how to measure whether the result will be acceptably safe for real-world deployment. Written for engineers, policy stakeholders, and technology enthusiasts, this book tells you how to figure out what "safe enough" really means, and provides a framework for knowing that an autonomous vehicle is ready to deploy safely.

Currently available for purchase from Amazon, with international distribution via their print-on-demand network. (See country-specific distribution list below.)

See the bottom of this post for e-book information from sources other than Amazon, as well as other distributors for the printed book.

Media coverage and bonus content:

Chapters:

  1. Introduction
  2. Terminology and challenges
  3. Risk Acceptance Frameworks
  4. What people mean by "safe"
  5. Setting an acceptable safety goal
  6. Measuring safety
  7. Safety cases
  8. Applying SPIs in practice
  9. Deciding when to deploy
  10. Ethical AV deployment
  11. Conclusions
368 pages.
635 footnotes.
On-line clickable link list for the footnotes here: https://users.ece.cmu.edu/~koopman/SafeEnough/

Koopman, P., How Safe Is Safe Enough? Measuring and Predicting Autonomous Vehicle Safety, September 2022.
ISBN: 9798846251243 Trade Paperback
ISBN: 9798848273397 Hardcover   (available only in marketplaces supported by Amazon)

Also see my other recent book: The UL 4600 Guidebook

For those asking about distribution -- it is served by the Amazon publishing network. Expanded distribution is selected, so other distributors might pick it up in 6-8 weeks to serve additional countries (e.g., India) or non-Amazon booksellers, especially in the US and UK. How that goes is beyond my control, but in principle a bookstore anywhere should be able to order it by about mid-November 2022. Alternatively, you can order it directly from Amazon in the closest of these countries for international delivery: US, UK, DE, FR, ES, IT, NL, PL, SE, JP, CA, AU.


You can also buy it from some Amazon country web sites via distributors. A notable example is:

Your local bookstore should also be able to order it through their US or UK distributor.

E-book available from distributors as they pick it up over time: 

Tuesday, May 28, 2019

Ethical Problems That Matter for Self Driving Cars

It's time to get past the irrelevant Trolley Problem and talk about ethical issues that actually matter in the real world of self driving cars.  Here's a starter list involving public road testing, human driver responsibilities, safety confidence, and grappling with how safe is safe enough.


  • Public Road Testing. Public road testing clearly puts non-participants such as pedestrians at risk. Is it OK to test on unconsenting human subjects? If the government hasn't given explicit permission to road test in a particular location, arguably that is exactly what is (or has been) happening. An argument that simply having a "safety driver" mitigates risk is clearly insufficient based on the tragic fatality in Tempe AZ last year.
  • Expecting Human Drivers to be Super-Human. High-end driver assistance systems might be asking the impossible of human drivers. Simply warning the driver that (s)he is responsible for vehicle safety doesn't change the well-known fact that humans struggle to supervise high-end autonomy effectively, and that humans are prone to abusing highly automated systems. This gives rise to questions such as:
    • At what point is it unethical to hold drivers accountable for tasks that require what amount to super-human abilities and performance?
    • Are there viable ethical approaches to solving this problem? For example, if a human unconsciously learns how to game a driver monitoring system (e.g., via falling asleep with eyes open -- yes, that is a thing) should that still be the human driver's fault if a crash occurs?
    • Is it OK to deploy technology that will result in drivers being punished for not being super-human if the result is that the total death rate declines?
  • Confidence in Safety Before Deployment.  There is work arguing that even slightly better than a human driver is acceptable (https://www.rand.org/blog/articles/2017/11/why-waiting-for-perfect-autonomous-vehicles-may-cost-lives.html). But there isn't much discussion of what that really means at the next level of detail. Important ethical sub-topics include:
    • Who decides when a vehicle is safe enough to deploy? Should that decision be made by a company on its own, or subject to external checks and balances? Is it OK for a company to deploy a vehicle they think is safe based on subjective criteria alone: "we're smart, we worked hard, and we're convinced this will save lives"?
    • What confidence is required for the actual prediction of casualties from the technology? If you are only statistically 20% confident that your self-driving car will be no more dangerous than a human driver, is that enough? (See the back-of-envelope sketch after this list.)
    • Should limited government resources that could be used for addressing known road safety issues (drunk driving, driving too fast for conditions, lack of seat belt use, distracted driving) be diverted to support self-driving vehicle initiatives using an argument of potential public safety improvement?
  • How Safe is Safe Enough? Even if we understand the relationship between an aggregate safety goal and self-driving car technology, where do we set the safety knob?  How will the following issues affect this?
    • Will risk homeostasis apply? There is an argument that there will be pressure to turn up the speed/traffic volume dials on self-driving cars to increase permissiveness and traffic flow until the same risk as manual driving is reached. (Think more capable cars resulting in crazier roads with the same net injury and fatality rates.)
    • Is it OK to deploy initially with a higher expected death rate than human drivers under an assumption that systems will improve over time, eventually reducing the total number of deaths?  (And is it OK for this improvement to be assumed rather than proven to be likely?)
    • What redistribution of victim demographics is OK? If fewer passengers die but more pedestrians die, is that OK if the net death rate is the same? Is it OK if deaths disproportionately occur to specific sub-populations? Did any evaluation of safety before deployment account for these possibilities?
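To put the statistical confidence question into perspective, here is a rough back-of-envelope sketch. It assumes fatalities follow a simple Poisson process and uses roughly one fatality per 100 million miles as a ballpark human-driver benchmark (both simplifications), and asks how many fatality-free test miles would be needed to support a given confidence level:

```python
import math

# With zero fatalities observed in N test miles, a one-sided upper confidence
# bound on the fatality rate (Poisson assumption) is:
#     rate <= -ln(1 - confidence) / N
# Rearranging gives the fatality-free miles needed to claim "no worse than the
# benchmark" at a chosen confidence level.
def miles_needed(confidence, benchmark_rate_per_mile):
    return -math.log(1.0 - confidence) / benchmark_rate_per_mile

HUMAN_RATE = 1.0 / 100_000_000  # ballpark: ~1 fatality per 100 million miles

for c in (0.20, 0.50, 0.95):
    print(f"{c:.0%} confidence: about "
          f"{miles_needed(c, HUMAN_RATE) / 1e6:,.0f} million fatality-free miles")
```

Even 20% confidence works out to roughly 20 million fatality-free miles, and 95% confidence to roughly 300 million, which is one reason claims of being merely "slightly better than a human driver" are so hard to substantiate before deployment.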
I don't purport to have the definitive answers to any of these problems (except a proposal for road testing safety, cited above). And it might be that some of these problems are more or less answered. The point is that there is so much important, relevant ethical work to be done that people shouldn't be wasting their time on trying to apply the Trolley Problem to AVs. I encourage follow-ups with pointers to relevant work.

If you're still wondering about Trolley-esque situations, see this podcast and the corresponding paper. The short version from the abstract of that paper: Trolley problems are "too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy." In general, it should be incredibly rare for a safely designed self-driving car to get into a no-win situation, and if one does occur, the vehicle isn't going to have information about the victims and/or the control authority to actually behave as suggested in the experiments any time soon, if ever.

Here are some links to more about applying ethics to technical systems in general (@IEEESSIT) and autonomy in particular (https://ethicsinaction.ieee.org/), as well as the IEEE P7000 standard series (https://www.standardsuniversity.org/e-magazine/march-2017/ethically-aligned-standards-a-model-for-the-future/).