Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience

Contact

+44 (0)20 7594 6373 | a.faisal


Assistant


Miss Teresa Ng, +44 (0)20 7594 8300


Location


4.08, Royal School of Mines, South Kensington Campus

Publications

Citation

BibTeX format

@article{Festor:2022:10.1136/bmjhci-2022-100549,
author = {Festor, P and Jia, Y and Gordon, A and Faisal, A and Habil, I and Komorowski, M},
doi = {10.1136/bmjhci-2022-100549},
journal = {BMJ Health \& Care Informatics},
title = {Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment},
url = {http://dx.doi.org/10.1136/bmjhci-2022-100549},
volume = {29},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Study objectives: Establishing confidence in the safety of AI-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis. Methods: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios and created safety constraints, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions. Results: Using a subset of the MIMIC-III database, we demonstrated that our previously published “AI Clinician” recommended fewer hazardous decisions than human clinicians in three out of our four pre-defined clinical scenarios, while the difference was not statistically significant in the fourth scenario. Then, we modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model shows enhanced safety, without negatively impacting model performance. Discussion: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data were curated to limit the impact of this confounder. Conclusion: These advances provide a use case for the systematic safety assurance of AI-based clinical systems, towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies.
AU - Festor,P
AU - Jia,Y
AU - Gordon,A
AU - Faisal,A
AU - Habil,I
AU - Komorowski,M
DO - 10.1136/bmjhci-2022-100549
PY - 2022///
SN - 2632-1009
TI - Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment
T2 - BMJ Health & Care Informatics
UR - http://dx.doi.org/10.1136/bmjhci-2022-100549
UR - https://informatics.bmj.com/content/29/1/e100549
UR - http://hdl.handle.net/10044/1/97781
VL - 29
ER -
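The abstract describes two mechanisms: safety constraints that restrict the agent's action space in unsafe scenarios, and a modified reward function that discourages hazardous decisions. A minimal illustrative sketch of both ideas, assuming a discretised state/action space; the `unsafe` set, penalty value, and function names are hypothetical and do not come from the paper:

```python
# Illustrative only: the hazardous (state, action) pairs and the penalty
# magnitude are invented placeholders, not values from the AI Clinician.
SAFETY_PENALTY = -10.0
N_ACTIONS = 5

# Hypothetical set of (state, action) pairs flagged as constraint violations.
unsafe = {(0, 4), (3, 2)}

def shaped_reward(state, action, base_reward):
    """Modified reward: add a fixed penalty when an action violates a
    safety constraint, discouraging hazardous decisions during training."""
    if (state, action) in unsafe:
        return base_reward + SAFETY_PENALTY
    return base_reward

def safe_actions(state, n_actions=N_ACTIONS):
    """Restricted action space: only actions not flagged as hazardous
    in this state are made available to the agent."""
    return [a for a in range(n_actions) if (state, a) not in unsafe]
```

In this toy form, `safe_actions` corresponds to limiting the action space, while `shaped_reward` corresponds to retraining with safety constraints folded into the reward.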