Modern autonomous systems often rely on machine learning to operate intelligently in uncertain or a priori unknown environments, which makes it especially difficult to obtain robust safety assurances outside of the training regime. In this research thrust, we focus on understanding how a robot can safely learn in such settings, adapt its safety assurances at operation time as the system or environment evolves, and continuously harden its learning components against new risks and safety hazards.

Safety Guided Imitation Learning

Controlling Covariate Shift with Stable Behavior Cloning

Learning Robot Safety Representations from Natural Language Feedback

Enhancing Safety and Robustness of Vision-Based Controllers via Reachability Analysis

System-Level Safety Monitoring and Recovery for Perception Failures in Autonomous Vehicles

Discovering Closed-Loop Failures of Vision-Based Controllers via Reachability Analysis

Online Update of Safety Assurances for a Reliable Human-Robot Interaction

An Efficient Reachability-Based Framework for Provably Safe Autonomous Navigation in Unknown Environments

Combining Optimal Control and Learning for Visual Navigation in Novel Environments

Visual Navigation Among Humans With Optimal Control as a Supervisor

For a more exhaustive list of our research work, please go HERE!