Jianqiang Ding


Topic
Learning for provably safe and robust robotic systems
The cornerstone of robotic autonomy in safety-critical applications is the development of general control algorithms that provide rigorous, formal guarantees of performance. Data-driven strategies, particularly learning-based approaches, have emerged as a powerful paradigm because of their remarkable ability to bypass the often-intractable process of system identification, offering excellent practical applicability. However, their "black-box" nature prevents them from offering the verifiable safety guarantees required for deployment. This creates a critical gap: the most powerful data-driven control techniques are simultaneously the least trustworthy in real applications. There is therefore an urgent need for a control paradigm that merges the adaptive power of data-driven methods, such as learning-based techniques, with the mathematical certainty of formal methods, enabling robots to be both intelligent and provably safe.
My research addresses this need by developing a framework that directly bridges raw observation data with formal safety guarantees, leveraging foundational concepts such as Koopman theory to establish the link between measurements and the underlying system state. This research focuses on two parallel objectives. First, I aim to develop methods for analyzing a system's adherence to formal properties, such as quantifying the degree of safety violation, directly from observational data without requiring explicit system identification. Second, I aim to design controller-synthesis frameworks that provide provable guarantees based on observations. Furthermore, to ensure the transition of my research into practice, I will investigate how uncertainties in the observation data affect these formal guarantees, so that the developed algorithms are robust enough to handle the dynamic environments encountered in real-world applications.
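To illustrate the kind of data-driven link between measurements and dynamics that Koopman theory enables, the following sketch uses Extended Dynamic Mode Decomposition (EDMD), a standard finite-dimensional approximation of the Koopman operator. The dictionary of observables, the dynamics, and all parameters below are hypothetical choices for demonstration only, not the framework developed in this research.

```python
import numpy as np

def dictionary(x):
    """Lift the scalar state into a small dictionary of observables."""
    return np.array([1.0, x, x**2, x**3])

# Generate snapshot pairs (x_k, x_{k+1}) from a nonlinear map that EDMD
# treats as unknown; only the data are used in the estimation step.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 200)
Y = 0.9 * X - 0.1 * X**3          # hypothetical dynamics

Psi_X = np.array([dictionary(x) for x in X])   # lifted current states
Psi_Y = np.array([dictionary(y) for y in Y])   # lifted successor states

# Least-squares estimate of the finite-dimensional Koopman matrix K,
# i.e. the best linear model Psi_Y ≈ Psi_X @ K in the lifted space.
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)

# One-step prediction through the lifted linear model: the state itself
# is the second dictionary entry, so we read it back out after lifting.
x0 = 0.5
x1_pred = (dictionary(x0) @ K)[1]
x1_true = 0.9 * x0 - 0.1 * x0**3
print(abs(x1_pred - x1_true))  # small residual
```

Because the lifted model is linear, formal tools for linear systems (reachability analysis, barrier certificates) can be applied to it, which is the conceptual route from raw data to verifiable guarantees.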
The primary outcome of this research will be provably trustworthy autonomy, fostering the confidence and interpretability required for deploying intelligent systems in safety-critical environments. The benefits of my research include enhanced reliability in unpredictable conditions and quantifiable safety margins that can be certified by regulators. Furthermore, by proving safety through intrinsic design rather than exhaustive empirical testing, my framework significantly reduces the time and cost of validating complex robotic systems, thereby accelerating the transition of advanced AI from the laboratory to the real world.
My research is strategically positioned to overcome the limitations of the two prevailing paradigms. Unlike classical model-based control, which demands precise system models and is often too rigid for complex environments, my approach aims to be robust to the inherent uncertainties of real-world systems. Conversely, compared with end-to-end machine learning techniques such as reinforcement learning, which lack interpretability and formal guarantees, my framework aims to provide rigorous mathematical proofs of safety. The key competitive advantage lies in this synthesis: it uniquely bridges the gap between performance and reliability, embedding the rigor of formal methods into the flexible, powerful core of data-driven control.

