Autonomous robotic systems have the potential to fundamentally transform our society. They hold the promise of a world where drones routinely inspect and repair our cities’ infrastructure, legged robots aid search-and-rescue missions in disaster-stricken areas, and flying taxis drop us off at the airport during rush hour. However, as robotic systems transition from controlled test environments to widespread deployment, assuring the safety of learning-enabled autonomy has emerged as a foundational challenge. Machine learning (ML) is increasingly central to decision-making in robotics, yet these methods can be brittle in the face of uncertainty and novel conditions. This challenge is compounded by the fact that, unlike traditional robotic systems, ML-based systems are non-static: they may adapt during deployment or be retrained over time. Robotic safety is therefore a dynamic, lifelong concern, not just a pre-deployment artifact.

The central question our research aims to answer is: “How can we enable robots to leverage the capabilities offered by modern ML methods while ensuring rigorous safety guarantees?” Toward this goal, we develop theoretical and computational frameworks for robot safety verification and apply them to ML-enabled robotic systems. Our long-term vision is a continual safety framework for ML-enabled robotic systems in which safety is provisionally established during design and training, actively monitored and adapted during operation, and refined through failure discovery after deployment, enabling autonomy that becomes safer through use.

Please click one of the links below to learn more about our research.