Citation

BibTeX format

@inproceedings{Wang:2024,
author = {Wang, Y and Qian, Q and Boyle, D},
pages = {51303--51327},
publisher = {ML Research Press},
title = {Probabilistic constrained reinforcement learning with formal interpretability},
url = {http://hdl.handle.net/10044/1/114815},
year = {2024}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB  - Reinforcement learning can provide effective reasoning for sequential decision-making problems with variable dynamics. Such reasoning in practical implementation, however, poses a persistent challenge in interpreting the reward function and the corresponding optimal policy. Consequently, representing sequential decision-making problems as probabilistic inference can have considerable value, as, in principle, the inference offers diverse and powerful mathematical tools to infer the stochastic dynamics whilst suggesting a probabilistic interpretation of policy optimization. In this study, we propose a novel Adaptive Wasserstein Variational Optimization, namely AWaVO, to tackle these interpretability challenges. Our approach uses formal methods to achieve the interpretability for convergence guarantee, training transparency, and intrinsic decision-interpretation. To demonstrate its practicality, we showcase guaranteed interpretability with a global convergence rate Θ(1/√T) in simulation and in practical quadrotor tasks. In comparison with state-of-the-art benchmarks, including TRPO-IPO, PCPO, and CRPO, we empirically verify that AWaVO offers a reasonable trade-off between high performance and sufficient interpretability.
AU  - Wang, Y
AU  - Qian, Q
AU  - Boyle, D
EP  - 51327
PB  - ML Research Press
PY  - 2024///
SN  - 2640-3498
SP  - 51303
TI  - Probabilistic constrained reinforcement learning with formal interpretability
UR  - http://hdl.handle.net/10044/1/114815
ER  -
