
Talk Title
Safe Reinforcement Learning for Critical Applications
Talk Summary
Reinforcement Learning (RL) is a powerful method for controlling dynamic systems, but its learning mechanism can lead to unpredictable actions that undermine the safety of critical systems. In this talk, we discuss RL with Adaptive Control Regularization (RL-ACR), which ensures RL safety by combining the RL policy with a control regularizer that hard-codes safety constraints over forecasted system behaviors. Adaptability is achieved through a learnable "focus" weight trained to maximize the cumulative reward of the policy combination. As the RL policy improves through off-policy learning, the focus weight refines the initially sub-optimal combined strategy by gradually relying more on the RL policy. We show the effectiveness of RL-ACR in a critical medical control application and further investigate its performance in classic control environments.
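The core mechanism described in the summary, a hard-coded safety regularizer blended with the RL policy through a learnable "focus" weight, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the one-dimensional toy dynamics, and the sigmoid parameterization of the focus weight are all assumptions made purely for readability.

```python
# Minimal illustrative sketch (not the RL-ACR implementation): an RL action and
# a safety-regularizer action are blended by a learnable "focus" weight.
# All names (safe_controller, rl_policy, focus) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def safe_controller(state):
    # Hypothetical hard-coded regularizer: a conservative proportional law
    # that drives the (1-D) system toward the safe set-point 0.
    return -0.5 * state

def rl_policy(state, theta):
    # Hypothetical learned policy: a simple linear policy for illustration.
    return theta * state

def combined_action(state, theta, focus):
    # Convex combination controlled by the focus weight w in (0, 1):
    # w -> 0 relies on the safety regularizer, w -> 1 on the RL policy.
    w = 1.0 / (1.0 + np.exp(-focus))  # squash the raw parameter to (0, 1)
    return (1.0 - w) * safe_controller(state) + w * rl_policy(state, theta)

# Toy rollout: in the talk's setting, the focus parameter would be trained to
# maximize the cumulative reward of the combination, gradually shifting
# reliance toward the RL policy as off-policy learning improves it.
state, theta, focus = 1.0, -0.3, -2.0  # start by trusting the regularizer
for t in range(5):
    action = combined_action(state, theta, focus)
    state = state + 0.1 * action + 0.01 * rng.standard_normal()  # toy dynamics
    print(f"t={t} state={state:.3f} action={action:.3f}")
```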
Speaker’s Bio
Pietro is a Lecturer in Artificial Intelligence at the Dyson School of Design Engineering and at I-X. He obtained his Ph.D. in Electrical and Control Engineering from the University of Pisa in 2018, with a thesis focused on the analysis and control of microgrids. He is currently Lecturer in Deep Learning and the coordinator for year three at Design Engineering. He is also one of the founders of Goeve, an ICL spin-off focused on EV charging. His research interests include control theory applied to the sharing economy, distributed ledger technology, and machine learning.