Putting image manipulations in context: robustness testing for safe perception
UPDATE 8/17 -- added presentation slides!

I'm very pleased to share a publication from our NREC autonomy validation team that explains how computationally cheap image perturbations and degradations can expose catastrophic perception brittleness. You don't need adversarial attacks to foil machine learning-based perception; straightforward image degradations such as blur or haze can cause problems too.

Our paper "Putting image manipulations in context: robustness testing for safe perception" will be presented at IEEE SSRR, August 6-8, 2018. Here's a submission preprint:
https://users.ece.cmu.edu/~koopman/pubs/pezzementi18_perception_robustness_testing.pdf

Abstract: We introduce a method to evaluate the robustness of perception systems to the wide variety of conditions that a deployed system will encounter. Using person detection as a sample safety-critical application, we evaluate the robustness of several state-of-the-art perception systems to a variety
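To make "computationally cheap" concrete, here is a minimal Python sketch (not the paper's code) of two such degradations, Gaussian blur and a simple uniform haze, applied at increasing severity. It assumes OpenCV and NumPy are available; the input file name, the parameter sweep values, and the haze model (a blend toward white airlight) are illustrative assumptions.

```python
# Minimal sketch of cheap image degradations for robustness testing.
# Assumes OpenCV (cv2) and NumPy; parameters below are illustrative only.
import cv2
import numpy as np

def apply_blur(image: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian blur; odd kernel size chosen to cover roughly 3 sigma."""
    k = max(3, int(2 * round(3 * sigma) + 1))
    return cv2.GaussianBlur(image, (k, k), sigma)

def apply_haze(image: np.ndarray, strength: float) -> np.ndarray:
    """Simple uniform haze: blend toward white airlight, strength in [0, 1]."""
    airlight = np.full_like(image, 255)
    hazed = ((1.0 - strength) * image.astype(np.float32)
             + strength * airlight.astype(np.float32))
    return hazed.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("pedestrian.jpg")  # hypothetical test image
    assert img is not None, "test image not found"
    for sigma in (1.0, 2.0, 4.0):       # sweep blur severity
        cv2.imwrite(f"blur_{sigma}.jpg", apply_blur(img, sigma))
    for strength in (0.2, 0.4, 0.6):    # sweep haze severity
        cv2.imwrite(f"haze_{strength}.jpg", apply_haze(img, strength))
```

Feeding each degraded image to the detector under test and tracking how detection performance degrades as severity increases is the basic shape of this kind of robustness sweep.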