Tuesday, May 28, 2019

Ethical Problems That Matter for Self Driving Cars

It's time to get past the irrelevant Trolley Problem and talk about ethical issues that actually matter in the real world of self-driving cars.  Here's a starter list involving public road testing, human driver responsibilities, safety confidence, and grappling with how safe is safe enough.


  • Public Road Testing. Public road testing clearly puts non-participants such as pedestrians at risk. Is it OK to test on unconsenting human subjects? If the government hasn't given explicit permission to road test in a particular location, arguably that is what is (or has been) happening. An argument that simply having a "safety driver" mitigates risk is clearly insufficient based on the tragic fatality in Tempe, AZ last year. 
  • Expecting Human Drivers to be Super-Human. High-end driver assistance systems might be asking the impossible of human drivers. Simply warning the driver that (s)he is responsible for vehicle safety doesn't change the well-known fact that humans struggle to supervise high-end autonomy effectively, and that humans are prone to abusing highly automated systems. This gives rise to questions such as:
    • At what point is it unethical to hold drivers accountable for tasks that require what amount to super-human abilities and performance?
    • Are there viable ethical approaches to solving this problem? For example, if a human unconsciously learns how to game a driver monitoring system (e.g., by falling asleep with eyes open -- yes, that is a thing), should that still be the human driver's fault if a crash occurs?
    • Is it OK to deploy technology that will result in drivers being punished for not being super-human if the result is that the total death rate declines?
  • Confidence in Safety Before Deployment.  There is work arguing that even slightly better than a human driver is acceptable for deployment (https://www.rand.org/blog/articles/2017/11/why-waiting-for-perfect-autonomous-vehicles-may-cost-lives.html). But there isn't much discussion of the next level of detail: what that really means in practice. Important ethical sub-topics include:
    • Who decides when a vehicle is safe enough to deploy? Should that decision be made by a company on its own, or subject to external checks and balances? Is it OK for a company to deploy a vehicle it thinks is safe based on subjective criteria alone: "we're smart, we worked hard, and we're convinced this will save lives"?
    • What confidence is required for the actual prediction of casualties from the technology? If you are only statistically 20% confident that your self-driving car will be no more dangerous than a human driver, is that enough? (See the worked sketch after this list.)
    • Should limited government resources that could be used for addressing known road safety issues (drunk driving, driving too fast for conditions, lack of seat belt use, distracted driving) be diverted to support self-driving vehicle initiatives on the argument that they might eventually improve public safety?
  • How Safe is Safe Enough? Even if we understand the relationship between an aggregate safety goal and self-driving car technology, where do we set the safety knob?  How will the following issues affect this?
    • Will risk homeostasis apply? There is an argument that there will be pressure to turn up the speed/traffic volume dials on self-driving cars to increase permissiveness and traffic flow until the same risk as manual driving is reached. (Think more capable cars resulting in crazier roads with the same net injury and fatality rates.)
    • Is it OK to deploy initially with a higher expected death rate than human drivers under an assumption that systems will improve over time, reducing the total number of deaths in the long term?  (And is it OK for this improvement to be assumed rather than proven to be likely?)
    • What redistribution of victim demographics is OK? If fewer passengers die but more pedestrians die, is that OK if the net death rate is the same? Is it OK if deaths disproportionately fall on specific sub-populations? Did any evaluation of safety before deployment account for these possibilities?
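
To make the confidence question above more concrete, here is a minimal worked sketch (mine, not from the original post) of the standard zero-failure statistical argument. It assumes fatalities follow a Poisson process per mile and takes a human baseline of roughly one fatality per 100 million miles (a commonly cited U.S. ballpark); the baseline number, the model, and the function names are illustrative assumptions. Under those assumptions it computes how many fatality-free miles a fleet must log before one can claim, at a given confidence level, that its fatality rate is no worse than the human baseline.

# Minimal sketch of a zero-failure confidence bound for fleet safety claims.
# Assumptions (not from the post): fatalities are a Poisson process per mile,
# and the human baseline is roughly 1 fatality per 100 million miles.
import math

HUMAN_FATALITY_RATE = 1e-8  # assumed baseline: ~1 fatality per 100 million miles

def miles_needed(confidence, baseline_rate=HUMAN_FATALITY_RATE):
    """Fatality-free miles needed to claim, at the given confidence,
    that the fleet's fatality rate is no worse than the baseline."""
    return -math.log(1.0 - confidence) / baseline_rate

def confidence_from_miles(miles, baseline_rate=HUMAN_FATALITY_RATE):
    """Confidence that the fleet's fatality rate is no worse than the
    baseline, given this many fatality-free miles driven."""
    return 1.0 - math.exp(-baseline_rate * miles)

if __name__ == "__main__":
    for c in (0.20, 0.50, 0.95):
        print(f"{c:.0%} confidence needs about "
              f"{miles_needed(c) / 1e6:,.0f} million fatality-free miles")

Run as written, this says that roughly 22 million fatality-free miles supports only about 20% confidence, while 95% confidence requires on the order of 300 million fatality-free miles, which is why "just drive until we're sure" is much harder than it sounds.
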
I don't purport to have definitive answers to any of these problems (except a proposal for road testing safety, cited above). It might be that some of these problems have already been more or less answered. The point is that there is so much important, relevant ethical work to be done that people shouldn't waste their time trying to apply the Trolley Problem to AVs. I encourage follow-ups with pointers to relevant work.

If you're still wondering about Trolley-esque situations, see this podcast and the corresponding paper. The short version from the abstract of that paper: Trolley problems are "too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy." In general, it should be incredibly rare for a safely designed self-driving car to get into a no-win situation, and if one does happen, the vehicle isn't going to have information about the victims and/or the control authority to actually behave as suggested in those experiments any time soon, if ever.

Here are some links to more about applying ethics to technical systems in general (@IEEESSIT) and autonomy in particular (https://ethicsinaction.ieee.org/), as well as the IEEE P7000 standard series (https://www.standardsuniversity.org/e-magazine/march-2017/ethically-aligned-standards-a-model-for-the-future/).


2 comments:

  1. Nice article. I believe a lot of the answers will come from the litigation that is sure to be on the horizon as this technology matures. Who is ultimately legally responsible for a car and its actions today? (The driver and/or owner.) Going forward, who will be? The company that manufactured the car/software that made the decision that led to the litigation?

    Other good questions/observations:

    As humans, would we feel okay taking legal responsibility for a many-ton machine we no longer have control over?

    Do we really expect people to pay any attention to the "driving" when the car's pilot software does 99% of the driving for them? (We can't pay attention now, when we control 100% of it.)

  2. With respect to this topic, I believe that humans work under a confusing paradigm:
    1. Most people are overly confident in their own driving skills (hubris at work).
    2. Those who are not overly confident dismiss their lack of driving skill as insignificant.
    3. People have a more realistic level of confidence in other drivers.

    Therefore, we as a society have an implicit understanding that driving on the road with other drivers carries risk, and we have knit together a framework for dealing with the many imperfections therein.

    However, go back to point #1 and our hubris: it comes from the illusion of control. "I am in control, therefore I can control the events on the road and am therefore a great driver." Not very true, but that's a very common mindset.

    So if self-driving takes away the illusion of control, then I think people won't accept it unless the bar for success is set far above that of a normal driver.

    Think aviation: it is now enormously safe compared to roads, and it required decades of intensive engineering to make it that way, because passengers must cede their illusion of control at the aircraft door. "If I am going to get on this airplane over which I have no control, it must be orders of magnitude safer than what I am accustomed to on the road" is a common thought.

    However, my pragmatic side says that the issue of self-driving cars will largely be hashed out in court cases, or the risk thereof. A company is going to look to its legal advisors before releasing self-driving technology and use their judgment as the market-entry test.

