Tuesday, July 7, 2020

Safe Autonomy Podcast

I've recorded a podcast series on safe autonomy here: https://edge-case-research.com/podcasts/

Each episode has both an audio track and a written transcript.

Tuesday, February 11, 2020

Positive Trust Balance for Self Driving Car Deployment

By Philip Koopman and Michael Wagner, Edge Case Research

Self-driving cars promise improved road safety. But every publicized incident chips away at confidence in the industry’s ability to deliver on this promise, with zero-crash nirvana nowhere in sight. We need a way to balance long term promise vs. near term risk when deciding that this technology is ready for deployment. A “positive trust balance” approach provides a framework for making a responsible deployment decision by combining testing, engineering rigor, operational feedback, and transparent safety culture.


Too often, discussions about why the public should believe a particular self-driving car platform is well designed center on the number of miles driven. Simply measuring miles driven has a host of problems, such as distinguishing “easy” from “hard” miles and ensuring that the miles driven are representative of real world operations. Even setting that aside, accumulating billions of road miles to demonstrate approximate parity with human drivers is an infeasible testing goal. Simulation helps, but still leaves unresolved questions about including enough of the edge cases that pose problems for deployment at scale.

By the time a self-driving car design is ready to deploy, the rate of potentially dangerous disengagements and incidents seen in on-road testing should approach zero. But that isn’t enough to prove safety. For example, a hypothetical ten million on-road test miles with no substantive incidents would still be a hundred times too little to prove that a vehicle is as safe as a typical human driver. So getting to a point that dangerous events are too rare to measure is only a first step.
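The scale mismatch behind this argument can be sketched with the statistical "rule of three" (a standard approximation: after N event-free trials, the ~95% upper confidence bound on the event rate is about 3/N). The benchmark of roughly one fatality per 100 million human-driven miles is an illustrative assumption here; demanding higher confidence, or a better-than-human target, pushes the shortfall factor from tens toward the "hundred times" the text describes.

```python
# Back-of-envelope check of the mileage argument using the statistical
# "rule of three": after N event-free miles, the ~95% upper confidence
# bound on the fatality rate is about 3/N events per mile.
# Assumed benchmark (illustrative): one fatality per 100 million miles.

human_fatality_interval = 100_000_000  # miles per fatality (assumed benchmark)

# Incident-free miles needed to bound the rate at the human level with
# ~95% confidence: 3 / miles <= 1 / interval  =>  miles >= 3 * interval.
miles_needed = 3 * human_fatality_interval  # 300 million miles

test_miles = 10_000_000  # the hypothetical ten million on-road test miles
shortfall = miles_needed / test_miles

print(f"Miles needed for ~95% confidence: {miles_needed:,}")
print(f"Ten million incident-free miles falls short by a factor of {shortfall:.0f}")
```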

In fact, competent human drivers are so good that there is no practical way to measure that a newly developed self-driving car has a suitably low fatality rate. This should not be news. We don’t fly new aircraft designs for billions of hours before deployment to measure the crash rate. Instead, we count on a combination of thorough testing, good engineering, and safety culture. Self-driving cars typically rely on machine learning to sense the world around them, so we will also need to add significant feedback from vehicles operating in the field to plug inevitable gaps in training data.


The self-driving car industry is invested in achieving a “positive risk balance” of being safer than a human driver. And years from now actuarial data will tell us if we succeeded. But there will be significant uncertainty about risk when it’s time to deploy. So we’ll need to trust development and deployment organizations to be doing the right things to minimize and manage that risk.

To be sure, developers already do better than brute force mileage accumulation. Simulations backed up by comprehensive scenario catalogs ensure that common cases are covered. Human copilots and data triage pipelines flag questionable self-driving behavior, providing additional feedback. But those approaches have their limits.

Rather than relying solely on testing, other industries use safety standards to ensure appropriate engineering rigor. While traditional safety standards were never intended to address self-driving aspects of these vehicles, new standards such as Underwriters Laboratories 4600 and ISO/PAS 21448 are emerging to set the bar on engineering rigor and best practices for self-driving car technology.

The bad news is that nobody knows how to prove that machine learning based technology will actually be safe. Although we are developing best practices, when deploying a self-driving car we’ll only know whether it is apparently safe, and not whether it is actually as safe as a human driver. Going past that requires real world experience at scale.

Deploying novel self-driving car technology without undue public risk will involve being able to explain why it is socially responsible to operate these systems in specific operational design domains. This requires addressing all of the following points:

Is the technology as safe as we can measure? This doesn’t mean it will be perfect when deployed. Rather, at some point we will have reached the limits of viable simulation and testing.

Has sufficient engineering rigor been applied? This doesn’t mean perfection. Nonetheless, some objective process such as establishing conformance to sufficiently rigorous engineering standards that go beyond testing is essential.

Is a robust feedback mechanism used to learn from real world experience? There must be proactive, continual risk management over the life of each vehicle based on extensive field data collection and analysis.

Is there a transparent safety culture? Transparency is required in evolving robust engineering standards, evaluating that best practices are followed, and ensuring that field feedback actually improves safety. A proactive, robust safety culture is essential. So is building trust with the public over time.

Applying these principles will potentially change how we engineer, regulate, and litigate automotive safety. Nonetheless, the industry will be in a much better place when the next adverse news event occurs if their figurative public trust account has a positive balance.

Philip Koopman is the CTO of Edge Case Research and an expert in autonomous vehicle safety. In addition to his role as a faculty member at Carnegie Mellon University, Koopman has been helping government, commercial, and academic self-driving developers improve safety for over 20 years. He is a principal contributor to the Underwriters Laboratories 4600 safety standard.

Michael Wagner is the CEO of Edge Case Research. He started working on autonomy at Carnegie Mellon over 20 years ago.

(Original post here:  https://medium.com/@pr_97195/positive-trust-balance-for-self-driving-car-deployment-ff3f04a7ef93)

Sunday, December 8, 2019

The Lesson Learned from the Tempe Arizona Autonomous Driving System Testing Fatality NTSB Report

Now that the press flurry over the NTSB's report on the Autonomous Driving System (ADS) fatality in Tempe has subsided, it's important to reflect on the lessons to be learned. Hats off to the NTSB for absolutely nailing this. Cheers to the press outlets that got the messaging right. But not everyone did. The goal of this essay is to help focus on the right lessons to learn, clarify publicly stated misconceptions, and emphasize the most important take-aways.

I encourage everyone in the AV industry to watch the first 5 and a half minutes of the NTSB board meeting video (YouTube: Link // NTSB: Link). Safety leadership should watch the whole thing. Probably twice. Then present a summary at your company's lunch & learn.

Pay particular attention to this part from Chairman Sumwalt: "If your company tests automated driving systems on public roads, this crash -- it was about you.  If you use roads where automated driving systems were being tested, this crash -- it was about you."

I live in Pittsburgh and these public road tests happen near my place of work and my home. I take the lessons from this crash personally. In principle, every time I cross a street I'm potentially placed at risk by any company that might be cutting corners on safety. (I hope that's none. All the companies testing here have voluntarily submitted compliance reports for the well-drafted PennDOT testing guidelines. But not every state has those, and those guidelines were developed largely in response to the fatality we’re discussing.)

I also have long-time friends who have invested their careers in this technology. They have brought a vibrant and promising industry to Pittsburgh and other cities. Negative publicity resulting from a major mishap can threaten the jobs of those employed by those companies.

So it is essential for all of us to get safety right.

The first step: for anyone in charge of testing who doesn't know what a Safety Management System (SMS) is: (A) Watch that NTSB hearing intro. (B) Pause testing on public roads until your company makes a good start down that path. (Again, the PennDOT guidelines are a reasonable first step accepted by a number of companies. LINK)  You’ll sleep better having dramatically improved your company’s safety culture before anyone gets hurt unnecessarily.

Clearing up some misconceptions
I’ve seen some articles and commentary that missed the point of all of this. Large segments of coverage emphasized technical shortcomings of the system. That's not the point. Other coverage highlighted test driver distraction. That's not the point either. The fatal mishap involved technical shortcomings, and the test driver was not paying adequate attention. Both contributed to the mishap, and both were bad things.

But the lesson to learn is that solid safety culture is without a doubt necessary to prevent avoidable fatalities like these. That is the Point.

To make the most of this teachable moment let's break things down further. These discussions are not really about the particular test platform that was involved. The NTSB report gave that company credit for significant improvement. Rather, the objective is to make sure everyone is focused on ensuring they have learned the most important lesson so we don’t suffer another avoidable ADS testing fatality.

A self-driving car killed someone - NOT THE POINT
This was not a self-driving car. It was a test platform for Automated Driving System (ADS) technology. The difference is night and day.  Any argument that this vehicle was safe to operate on public roads hinged on a human driver not only taking complete responsibility for operational safety, but also being able to intervene when the test vehicle inevitably made a mistake. It's not a fully automated self-driving car if a driver is required to hover with hands above the steering wheel and foot above the brake pedal the entire time the vehicle is operating.

It's a test vehicle. The correct statement is: a test vehicle for developing ADS technology killed someone.

The pedestrian was initially said to jump out of the dark in front of the car - NOT THE POINT
I still hear this sometimes based on the initial video clip that was released immediately after the mishap. The pedestrian walked across almost 4 lanes of road in view of the test vehicle before being struck. The test vehicle detected the pedestrian 5.6 seconds before the crash. That was plenty of time to avoid the crash, and plenty of time to track the pedestrian crossing the street to predict that a crash would occur. Attempting to claim that this crash was unavoidable is incorrect, and won't prevent the next ADS testing fatality.

It's the pedestrian's fault for jaywalking - NOT THE POINT
Jaywalking is what people do when it is 125 yards to the nearest intersection and there is a paved walkway on the median. Even if there is a sign saying not to cross.  Tearing up the paved walkway might help a little on this particular stretch of road, but that's not going to prevent jaywalking as a potential cause of the next ADS testing fatality.

Victim's apparent drug use - NOT THE POINT
It was unlikely that the victim was a fully functional, alert pedestrian. But much of the population isn't in this category for other reasons. Children, distracted walkers, and others with less than perfect capabilities and attention cross the street every day, and we expect drivers to do their best to avoid hitting them.

There is no indication that the victim’s medical condition substantively caused the fatality. (We're back to the fact that the pedestrian did not jump in front of the car.) It would be unreasonable to insist that the public has the responsibility to be fully alert and ready to jump out of the way of an errant ADS test platform at all times they are outside their homes.

Tracking and classification failure - NOT THE POINT
The ADS system on the test vehicle suffered some technical issues that prevented predicting where the pedestrian would be when the test vehicle got there, or even recognizing the object it was sensing was a pedestrian walking a bicycle. However, the point of operating the test vehicle was to find and fix defects.

Defects were expected, and should be expected on other ADS test vehicles. That's why there is a human safety driver. Forbidding public road testing of imperfect ADS systems basically outlaws road testing at this stage. Blaming the technology won't prevent the next ADS testing fatality, but it could hurt the industry for no reason.

It's the technology's fault for ignoring jaywalkers - NOT THE POINT
This idea has been circulating, but it isn't quite accurate. According to the information presented by the NTSB, jaywalkers aren't ignored; rather, a pedestrian isn't initially expected to cross the street. Once the pedestrian has moved for a while, a track is built up that could indicate a street crossing, but until then movement into the street is considered unexpected if the pedestrian is not at a designated crossing location. A deployment-ready ADS could potentially use a more sophisticated approach to predict when a pedestrian will enter the roadway.

Regardless of implementation, this did not contribute to the fatality because the system never actually classified the victim as a pedestrian. Again, improving this or other ADS technical features won't prevent the next ADS testing fatality. That’s because testing safety is about the safety driver, not which ADS prototype functions happen to be active on any particular test run.

ADS emergency braking behavior - NOT THE POINT
The ADS emergency braking function had behaviors that could hinder its ability to provide backup support to the safety driver. Perhaps another design could have done better for this particular mishap. However, it wasn't the job of the ADS emergency braking to avoid hitting a pedestrian. That was the safety driver's job. Improving ADS emergency braking capabilities might reduce the probability of an ADS testing fatality, but won't entirely prevent the next fatality from happening sooner than it should.

Native emergency braking disabled - NOT THE POINT
It looks bad to have disabled the built-in emergency braking system on the passenger vehicle used as the test platform. The purpose of such systems is to help out after the driver makes a mistake. In this case there is a good, but not perfect, chance it would have helped. But as with the ADS emergency braking function, this simply improves the odds. Any safety expert is going to say your odds are better with both belt and suspenders, but enabling this function alone won't entirely prevent the next ADS testing fatality from happening before it should.

Inattentive safety driver - NOT THE POINT
There is no doubt that an inattentive safety driver is dangerous when supervising an ADS test vehicle. And yet, driver complacency is the expected outcome of asking a human to supervise an automated system that works most of the time. That’s why it’s important to ensure that driver monitoring is done continually and used to provide feedback. (In this case a form of driver monitoring equipment was installed, but data was apparently not used in a way that assured effective driver alertness.)

While enhanced training and stringent driver selection can help, effective analysis and action taken upon monitoring data is required to ensure that drivers are actually paying attention in practice. Simply firing this driver without changing anything else won't prevent the next ADS testing fatality from happening to some other driver who has slipped into bad operational habits.

A fatality is regrettable, but human drivers killed about 100 people that same day with minimal news attention - NOT THE POINT
Some commentators point out the ratio of fatalities caused by test vehicles vs. general automotive fatality rates. They then generally argue that a few deaths in comparison to the ongoing carnage of regular cars is a necessary and appropriate price to pay for progress. However, this argument is not statistically valid.

Consider a reasonable goal that ADS testing (with highly qualified, alert drivers presumed) should be no more dangerous than the risk presented by normal cars. For normal US cars that's ballpark 500 million road miles per pedestrian fatality. This includes mishaps caused by drunk, distracted, and speeding drivers. Due to the far smaller number of miles being driven by current test platform fleet sizes, the "budget" for fatal accidents during the ADS road testing phase should, at this early stage, still be zero.
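The "zero budget" arithmetic can be sketched directly from the benchmark above. The 500-million-mile figure comes from the text; the fleet mileage below is a hypothetical round number for illustration, since actual fleet mileage varies by company and year.

```python
# Sketch of the "zero fatality budget" argument. Benchmark from the text:
# roughly one pedestrian fatality per 500 million miles of normal US driving.
# The fleet mileage figure is a hypothetical round number for illustration.

pedestrian_fatality_interval = 500_000_000  # miles per pedestrian fatality
fleet_test_miles = 2_000_000                # hypothetical ADS test fleet miles

# Expected pedestrian fatalities if ADS testing were exactly as safe as
# ordinary driving over that mileage:
expected_fatalities = fleet_test_miles / pedestrian_fatality_interval

print(f"Expected pedestrian fatalities at parity: {expected_fatalities:.3f}")
# With an expectation this far below one, even a single testing fatality is
# strong evidence that the testing operation is riskier than ordinary driving.
```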

The fatality somehow “proves” that self-driving car technology isn't viable - NOT THE POINT
Some have tried to draw conclusions about the viability of ADS technology from the fact that there was a testing fatality. However, the issues with ADS technical performance only prove what we already knew. The technology is still maturing, and a human needs to intervene to keep things safe. This crash wasn't about the maturity of the technology; it was about whether the ADS public road testing itself was safe.

Concentrating on technology maturity (for example, via disclosing disengagement rates) serves to focus attention on a long term future of system performance without a safety driver. But the long term isn’t what’s at issue.

The more pressing issue is ensuring that the road testing going on right now is sufficiently safe. At worst, continued use of disengagement rates as the primary metric of ADS performance could hurt safety rather than help, because a gamed disengagement metric could incentivize safety drivers to take chances by avoiding disengagements in uncertain situations to make the numbers look better. (Some companies no doubt have strategies to mitigate this risk. But those are probably the companies with an SMS, which is back to the point that matters.)

THE POINT: The safety culture was broken
Safety culture issues were the enabler for this particular crash. Given the limited number of miles that can be accumulated by any current test fleet, we should see no fatalities occur during ADS testing. (Perhaps a truly unavoidable fatality will occur. This is possible, but given the numbers it is unlikely if ADS testing is reasonably safe. So our goal should be set to zero.) Safety culture is critical to ensure this.

The NTSB rightly pushes hard for a safety management system (SMS). But be careful to note that they simply say that this is a part of safety culture, not all of it. Safety culture means, among other things, a company taking responsibility for ensuring that its safety drivers are actually safe despite the considerable difficulty of accomplishing that. Human safety drivers will make mistakes, but a strong safety culture accounts for such mistakes in ensuring overall safety.

It is important to note that the urgent point here is not regulating self-driving car safety, but rather achieving safe ADS road testing. They are (almost) two entirely different things. Testing safety is about whether the company can consistently put an alert, able-to-react safety driver on the road. On the other hand, ADS safety is about the technology. We need to get to the technology safety part over time, but ADS road testing is the main risk to manage right now.

Perhaps dealing with ADS safety would be easier if the discussions of testing safety and deployment safety were more cleanly separated.


Chairman Sumwalt summed it up nicely in the intro. (You did watch that 5 and a half minute intro, right?) But to make sure it hits home, this is my take:

One company's crash is every company's crash. You'll note I didn't name the company involved, because that's irrelevant to preventing the next fatality and the damage it could do to the industry's reputation.

The bigger point is that every company can and should institute a good safety culture before further fatalities take place, if they have not done so already. The NTSB credited the company at issue with significant change in the right direction. But it only takes one company that hasn't gotten the message to be a problem for everyone. We can reasonably expect fatalities involving ADS technology in the future even if these systems are many times safer than human drivers. But there simply aren't that many vehicles on the road yet for a truly unavoidable mishap to be likely to occur. It's far too early.

If your company is testing (or plans to test) autonomous vehicles, get a Safety Management System in place before you do public road testing. At least conform to the details in the PennDOT testing guidelines, even if you’re not testing in Pennsylvania. If you are already testing on public roads without an SMS, you should stand down until you get one in place.

Once you have an SMS, consider it a down-payment on a continuing safety culture journey.

Prof. Philip Koopman, Carnegie Mellon University

Author Note: The author and his company work with a variety of customers helping to improve safety. He has been involved with self-driving car safety since the late 1990s. These opinions are his own, and this piece was not sponsored.

Monday, July 22, 2019

Autonomous Vehicle Testing Safety Needs More Transparency


Last week there were two injuries involving human-supervised autonomous test shuttles on different continents, with no apparent connection other than random chance.  (For example: Link) As deployment of this work-in-progress technology scales up in public, we know that we can expect more high-profile accidents. Fortunately this time nobody was killed or suffered life-altering injuries. (But we still need to find out what actually happened.) And to be sure, human-driven vehicles are far from accident-free. But what about next time?

The bigger issue for the industry is: will the next autonomous vehicle testing mishap be due to rare and random problems that are within the bounds of reasonable risk? Or will they be due to safety issues that could and should have been addressed beforehand?

Public trust in autonomous vehicle technology has already eroded in the past year. Each new mishap has the unfortunate potential to make that situation worse, regardless of the technical root cause. While no mode of transportation is perfectly safe, it's important that the testing of experimental self driving car technology not expose the public to reasonably avoidable risk. And it's equally important that the public's perception matches the actual risk.

Historically the autonomous vehicle industry has operated under a cloak of secrecy. As we've seen, that can lead to boom and bust cycles of public perception, with booms of optimism followed by backlash after each publicized accident. But in fairness, if there is no information about public testing risk other than hype about an accident-free far-flung future, what is the public supposed to think? Self-driving cars won't be perfect. The goal is to make them better than the current situation. One hopes that along the way things won't actually get worse.

Some progress in public safety disclosure has been made, albeit with low participation rates. One of the two vehicles involved in injuries this past week has a public safety report available. The other does not. In fact, a significant majority of testing organizations have not taken the basic step of making a Voluntary Safety Self-Assessment report available to NHTSA. And to be clear, that disclosure process is more about explaining progress toward production maturity rather than the specific topic of public testing safety.

The industry needs to do better at providing transparent, credible safety information while testing this still-experimental technology. Long term public education and explanation are important. But the more pressing need revolves around what's happening on our roads right now during testing operations. That is what is making news headlines, and is the source of any current risk.

At some point either autonomous vehicle testers are actually doing safe, responsible public operations or they aren't. If they aren't, that is bound to catch up with them as operations scale up. From the point of view of a tester:

- It's a problem if you can't explain to yourself why you are acceptably safe in a methodical, rigorous way. (In that case, you're probably unsafe.)
- It's a problem if you expect human safety drivers to perform with superhuman ability. They can't.
- It's a problem if you aren't ready to explain to authorities why you are still acceptably safe after a mishap.
- It's a problem if you can't explain to a jury that you used reasonable care to ensure safety. 
- It's a problem if your company's testing operations get sidelined by an accident investigation.
- It's a problem for mishap victims if accidents occur that were reasonably avoidable, especially involving vulnerable road users. 
- It's a problem for the whole industry if people lose trust in the technology's ability to operate safely in public areas.
- It's a problem if you can't explain to the public -- with technical credibility -- why they should believe you are safe. Preferably before you begin testing operations on public roads.

Some companies are working on transparent safety more aggressively than others. Some are working on safety cases that contain detailed chains of reasoning and evidence to ensure that they have all the bases covered for public road testing. Others might not be.  But really, we don't know. 

And that is an essential part of the problem -- we really have no idea who is being diligent about safety. Eventually the truth will out, but bad news is all too likely to come in the form of deaths and injuries. We as an industry need to keep that from happening.

It only takes one company to have a severe mishap that, potentially, dramatically hurts the entire industry. While bad luck can happen to anyone, it's more likely to happen to a company that might be cutting corners on safety to get to market faster. 

The days of "trust us, we're smart" are over for the autonomous vehicle industry. Trust has to be earned. Transparency backed with technical credibility is a crucial first step to earning trust. The industry has been given significant latitude to operate on public roads, but that comes with great responsibility and a need for transparency regarding public safety.

Safety should truly come first. To that end, every company testing on public roads should immediately make a transparent, technically credible statement about their road testing safety practices. A simple "we have safety drivers" isn't enough. These disclosures can form a basis for uniform practices across the industry to make sure that this technology survives its adolescence and has a chance to reach maturity and the benefits that promises.

Dr. Philip Koopman is co-founder and CTO of Edge Case Research, which helps companies make their autonomous systems safer. He is also a professor at Carnegie Mellon University. He is a principal technical author of the UL 4600 draft standard for autonomous system safety, and has been working on self-driving car safety for more than 20 years. Contact: pkoopman@ecr.ai

Monday, July 1, 2019

Autonomous Vehicle Fault and Failure Management

When you build an autonomous vehicle you can't count on a human driver to notice when something's wrong and "do the right thing." Here is a list of faults, system limitations, and fault responses AVs will need to get right. Did you think of these?


System Limitations:

Sometimes the issue isn't that something is broken, but rather simply that all vehicles have limitations. You have to know your system's limitations.
  • Current capabilities of sensors and actuators, which can depend upon the operational state space.
  • Detecting and handling a vehicle excursion outside the operational state space for which it was validated, including all aspects of {ODD, OEDR, Maneuver, Fault} tuples.
  • Desired availability despite fault states, including any graceful degradation plan, and any limits placed upon the degraded operational state space.
  • Capability variation based on payload characteristics (e.g. passenger vehicle overloaded with cargo, uneven weight distribution, truck loaded with gravel, tanker half filled with liquid) and autonomous payload modification (e.g. trailer connect/disconnect).
  • Capability variation based on functional modes (e.g. pivot vs. Ackermann vs. crab steering, rear wheel steering, ABS or 4WD engaged/disengaged).
  • Capability variation based on ad-hoc teaming (e.g. V2V, V2I) and planned teaming (e.g. leader-follower or platooning vehicle pairing).
  • Incompleteness, incorrectness, corruption or unavailability of external information (V2V, V2I).
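The {ODD, OEDR, Maneuver, Fault} tuple idea in the list above can be pictured as a data structure. This is a toy sketch, not the paper's formalism; all names here are hypothetical, and in this simplified model, detecting an excursion outside the validated state space reduces to a set-membership check.

```python
# Hypothetical sketch of the {ODD, OEDR, Maneuver, Fault} tuple idea: each
# validated operating point pairs an operational design domain with the
# detection task, maneuver, and fault state under which it was validated.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen => hashable, so tuples can live in a set
class ValidatedOperatingPoint:
    odd: str       # operational design domain, e.g. "divided highway, dry, daylight"
    oedr: str      # object/event detection and response task
    maneuver: str  # e.g. "lane keeping", "unprotected left turn"
    fault: str     # fault state, e.g. "nominal", "single camera degraded"


# The (tiny, illustrative) validated envelope:
validated = {
    ValidatedOperatingPoint("divided highway, dry, daylight",
                            "lead vehicle braking", "lane keeping", "nominal"),
}


def within_validated_envelope(point: ValidatedOperatingPoint) -> bool:
    """An excursion check: is this operating point one we validated?"""
    return point in validated


print(within_validated_envelope(
    ValidatedOperatingPoint("urban street, rain, night",
                            "jaywalking pedestrian", "lane keeping", "nominal")))
# prints False -- this operating point was never validated
```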

System Faults:
  • Perception failure, including transient and permanent faults in classification and pose of objects.
  • Planning failures, including those leading to collision, unsafe trajectories (e.g., rollover risk), and dangerous paths (e.g., roadway departure).
  • Vehicle equipment operational faults (e.g., blown tire, engine stall, brake failure, steering failure, lighting system failure, transmission failure, uncommanded engine power, autonomy equipment failure, electrical system failure, vehicle diagnostic trouble codes).
  • Vehicle equipment maintenance faults (e.g., improper tire pressure, bald tires, misaligned wheels, empty sensor cleaning fluid reservoir, depleted fuel/battery).
  • Operational degradation of sensors and actuators including temporary (e.g., accumulation of mud, dirt, dust, heat, water, ice, salt spray, smashed insects) and permanent (e.g., manufacturing imperfections, scratches, scouring, aging, wear-out, blockage, impact damage).
  • Equipment damage including detecting and mitigating catastrophic loss (e.g., vehicle collisions, lightning strikes, roadway departure), minor losses (e.g., sensor knocked off, actuator failures), and temporary losses (e.g., misalignment due to bent support bracket, loss of calibration).
  • Incorrect, missing, stale, and inaccurate map data.
  • Training data incompleteness, incorrectness, known bias, or unknown bias.

Fault Responses:

Some of the faults and limitations fall within the purview of safety standards that apply to non-autonomous functions. However, a unified system-level view of fault detection and mitigation can be useful to ensure that no faults are left unaddressed. More importantly, to the degree that credit has been taken for a human driver participating in fault mitigation by safety standards, that places fault mitigation obligations upon the autonomy.
  • How the system behaves when encountering an exceptional operational state space, experiencing a fault, or reaching a system limitation.
  • Diagnostic gaps (e.g., latent faults, undetected faults, undetected faulty redundancy).
  • How the system re-integrates failed components, including recovery from transient faults and recovery from repaired permanent faults during operation and/or after maintenance.
  • Response and policies for prioritizing or otherwise determining actions in inherently risky or certain-loss situations.
  • Withstanding an attack (system security, compromised infrastructure, compromised other vehicles), and deterring inappropriate use (e.g., malicious commands, inappropriately dangerous cargo, dangerous passenger behavior).
  • How the system is updated to correct functional defects, security defects, safety defects, and addition of new or improved capabilities.
Is there anything we missed?

(This is an excerpt of Koopman, P. & Fratrik, F., "How many operational design domains, objects, and events?" SafeAI 2019, AAAI, Jan 27, 2019.)

Wednesday, June 26, 2019

Event Detection Considerations for Autonomous Vehicles (OEDR -- part 2)

Object and Event Detection and Recognition (OEDR) also involves making predictions about what might happen next. Is that pedestrian waiting for a bus, or about to walk out into the crosswalk right in front of my car? Did you think of all of these aspects?

(Image caption: The infamous Pittsburgh Left. The first vehicle at a red light will sometimes turn left when the light turns green.)

Some factors to consider when deciding what events and behaviors your system needs to predict include:

  • Determining expected behaviors of other objects, which might involve a probability distribution and is likely to be based on object classification.
  • Normal or reasonably expected movements by objects in the environment.
  • Unexpected, incorrect, or exceptional movement of other vehicles, obstacles, people, or other objects in the environment.
  • Failure to move by other objects which are reasonably expected to move.
  • Operator interactions prior to, during, and post autonomy engagement including: supervising driver alertness monitoring, informing occupants, interaction with local or remote operator locations, mode selection and enablement, operator takeover, operator cancellation or redirect, operator status feedback, operator intervention latency, single operator supervision of multiple systems (multi-tasking), operator handoff, loss of operator ability to interact with vehicle.
  • Human interactions including: human commands (civilians performing traffic direction, police pull-over, passenger distress), normal human interactions (pedestrian crossing, passenger entry/egress), common human rule-breaking (crossing mid-block when far from an intersection, speeding, rubbernecking, use of parking chairs, distracted walking), abnormal human interactions (defiant jaywalking, attacks on vehicle, attempted carjacking), and humans who are not able to follow rules (children, impaired adults).
  • Non-human interactions including: animal interaction (flocks/herds, pets, dangerous wildlife, protected wildlife) and delivery robots.

Is there anything we missed?   (Previous post had the "objects" part of OEDR.)

(This is an excerpt of Koopman, P. & Fratrik, F., "How many operational design domains, objects, and events?" SafeAI 2019, AAAI, Jan 27, 2019.)
