It's common for autonomous vehicles to use road map data, sign data, and so on for their operation. But what if that data has a problem?
While vehicle manufacturers map some of this data themselves, they may well be relying upon other sources too. For example, some companies are encouraging cities to build a database of local road signs (https://www.wired.com/story/inrix-road-rules-self-driving-cars?mbid=nl_071718_daily_list3_p4&CNDID=23351989).
It's important to understand the integrity of that data. What if a stop sign is missing from the database, and the vehicle decides to believe the database when it isn't sure whether a stop sign it sees in the real world is valid? (Perhaps sun glare makes the physical sign hard to see, and the vehicle just goes with the database.) If the vehicle blows through a stop sign because it's missing from the database, whose fault is that? And what happens next?
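One way to frame the sun-glare scenario is as a fusion policy between the perception system and the map database. Below is a minimal sketch (all names hypothetical, not any real vehicle's logic) of a conservative policy: the vehicle only proceeds without stopping when both sources agree the sign is absent, so an uncertain perception result can never be overridden by a possibly stale database entry.

```python
from enum import Enum

class SignBelief(Enum):
    """What one information source believes about a stop sign."""
    PRESENT = "present"
    ABSENT = "absent"
    UNCERTAIN = "uncertain"

def should_stop(perception: SignBelief, database: SignBelief) -> bool:
    """Decide whether to treat an intersection as stop-controlled.

    Conservative policy: if either source reports a stop sign, or the
    perception system is unsure, behave as if the sign is there. Only
    proceed when both sources independently agree it is absent.
    """
    if SignBelief.PRESENT in (perception, database):
        return True
    if perception == SignBelief.UNCERTAIN:
        # The sun-glare case: an ambiguous real-world observation is
        # not allowed to be "resolved" by trusting the database alone.
        return True
    return False
```

Note that this sketch fails toward the safe action (stopping), which trades some availability for integrity; the reverse policy, trusting the database when perception is unsure, is exactly the failure mode described above.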
Hopefully such databases will be highly accurate, but anyone who has worked with any non-trivial database knows there is always some problem somewhere. In fact, there have been numerous accidents and even deaths due to incorrect or corrupted data over the years.
Avoiding "death by road sign database" requires managing the safety-critical integrity of the road sign data (and map data in general). If your system uses the data only for guidance and assumes it is defective with comparatively high probability, then maybe you're fine. But as soon as you trust it to make a safety-relevant decision, you need to think about how much you can trust it, and what measures ensure it is not only accurately captured, but also dependably maintained, updated, and delivered to consumers.
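The "dependably delivered" part is at least partly a solved engineering problem: map data can be signed at the source and verified before use, so corruption in storage or transit is detected rather than silently acted upon. Here is a minimal sketch using an HMAC for illustration (the key and tile format are invented for this example; a real deployment would use public-key signatures plus version and freshness checks):

```python
import hashlib
import hmac

# Hypothetical shared key used by the map provider to sign tiles.
# Illustration only -- a real system would use asymmetric signatures.
SIGNING_KEY = b"example-key-not-for-production"

def sign_map_tile(payload: bytes) -> bytes:
    """Produce an integrity tag for a map data payload."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def verify_map_tile(payload: bytes, signature: bytes) -> bool:
    """Reject map data whose tag does not match before trusting it."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# A tampered or corrupted tile fails verification and should be
# treated as missing data, not as ground truth.
tile = b'{"signs": [{"type": "stop", "lat": 40.44, "lon": -79.94}]}'
tag = sign_map_tile(tile)
```

Of course, a valid signature only proves the data arrived as the provider published it; it says nothing about whether the provider captured the sign correctly in the first place, which is why the collection and maintenance process needs its own integrity measures.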
Fortunately you don't need to start from scratch. The Safety-Critical Systems Club has been working on this problem for a while, and recently issued version 3 of their guidelines for safety-critical data. You can download it for free here: https://scsc.uk/scsc-127c
The document covers a broad range of material, including practical guidance and a worked example. Appendix H also catalogs quite a number of data integrity incidents that are worth reading if you need some war stories about what happens when data integrity goes wrong. Highly recommended.