One of the key reasons for the success of deep learning in robot control is the ability of neural networks to elegantly process rich visual inputs and distill them into signals useful for control. Unfortunately, vision-based controllers can also be brittle when faced with out-of-distribution inputs, potentially leading to catastrophic system failures. In this work, we propose an approach for stress testing these controllers using photorealistic simulators. We formulate closed-loop failure discovery as an optimal control problem, which allows us to perform a targeted and efficient search for visual inputs that might trigger system failures. Our findings reveal intriguing and unexpected situations that can compromise state-of-the-art visual controllers, such as pedestrian markings confusing an autonomous aircraft or light-colored walls misleading indoor navigation robots.
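To make the formulation concrete, here is a minimal sketch of failure discovery as an optimization over disturbance parameters. Everything in it is an illustrative assumption rather than the paper's actual method: a toy 1D closed-loop system stands in for the photorealistic simulator, the "visual" corruption is a simple additive bias on the observation, and a basic cross-entropy method stands in for the targeted search.

```python
"""Sketch: closed-loop failure discovery as a search problem.

Assumed, not from the paper: the toy dynamics, the additive
observation disturbance, and the cross-entropy search.
"""
import numpy as np

def rollout(d, horizon=20):
    """Toy closed-loop rollout of a 1D system regulated toward 0.
    The controller only sees an observation corrupted by the
    disturbance vector d. Returns the minimum safety margin (distance
    to an obstacle at x = 1) over the trajectory; negative = failure."""
    x = 0.0
    margin = np.inf
    for t in range(horizon):
        obs = x + d[t % len(d)]        # disturbance biases the "visual" observation
        u = -0.5 * obs                 # proportional controller acting on the observation
        x = x + u + 0.1                # dynamics with a drift toward the obstacle
        margin = min(margin, 1.0 - x)  # how close we got to the obstacle at x = 1
    return margin

def find_failure(n_iters=30, pop=64, elite=8, dim=4, seed=0):
    """Cross-entropy search for the disturbance minimizing the margin."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(n_iters):
        cand = rng.normal(mu, sigma, size=(pop, dim))
        margins = np.array([rollout(c) for c in cand])
        best = cand[np.argsort(margins)[:elite]]     # keep the lowest-margin candidates
        mu, sigma = best.mean(0), best.std(0) + 1e-3
    return mu, rollout(mu)

d_star, margin = find_failure()
print(margin)  # a negative margin means the search drove the system into the obstacle
```

The same skeleton carries over to the real setting: replace `rollout` with a closed-loop simulation of the vision-based controller in a photorealistic simulator, and let the decision variables parameterize the visual scene instead of an additive bias.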
[Paper] [Project Website] [Video]

Modern autonomous systems often rely on machine learning to operate intelligently in uncertain or a priori unknown environments, which makes it even harder to obtain robust safety assurances outside of the training regime. In this research thrust, we focus on understanding how a robot can safely learn in such settings, adapt its safety assurances at operation time as the system or environment evolves, and continuously improve its learning components in response to new risks and safety hazards.