✨ TL;DR
This paper introduces Adaptive Conformal Filtering (ACoFi), which combines learned safety filters with adaptive conformal inference to provide soft (probabilistic rather than hard) safety guarantees for control systems. The method dynamically adjusts the switching criterion between nominal and safe policies based on prediction uncertainty, achieving better safety performance than fixed-threshold approaches.
Safety filters keep control systems safe even when the nominal policy proposes unsafe actions, but traditional synthesis methods scale poorly to high-dimensional systems. Learning-based safety filters have been proposed as alternatives, but their inevitable prediction errors compromise reliability and void formal safety guarantees. The key challenge is accounting for these errors while maintaining safety assurances in real-world deployments, where learned models may face distribution shift or mispredict whether an action is safe.
ACoFi combines Hamilton-Jacobi reachability-based safety filters with adaptive conformal inference to create a dynamic switching mechanism. Conformal prediction quantifies the uncertainty of the learned safety filter by producing a range of plausible safety values for the nominal policy's action. When this range admits unsafe values, the filter switches from the nominal policy to a learned safe policy. The switching threshold adapts over time based on observed prediction errors, so the system learns from its mistakes and adjusts its conservativeness accordingly. This adaptive mechanism is grounded in adaptive conformal inference, which provides statistical guarantees on the long-run miscoverage rate.