Safeguarding LLMs in health: a socio-technical approach

Abstract
Evidence shows that Large Language Models (LLMs) carry a wide range of risks in health applications, including risks to fairness, privacy, and reliability, as well as bias. These tools are becoming essential to organisations that require high reliability, such as the NHS, yet purely technical safeguards have shown serious limitations in other high-risk industries. This project will advance socio-technical approaches to safeguarding LLMs, particularly by supporting the attributes of High-Reliability Organisations (HROs), which operate nearly error-free in complex environments, even under highly error-prone conditions. It will pioneer the design of frameworks (software tools and design methods) that incorporate HRO principles.

How to Apply 
Please contact Prof. Rafael Calvo if you are interested in applying for this project. 

For more information on submitting a PhD application, please consult our application pages. When you submit your application, please cite this project title in the 'how you are planning to fund your PhD' section.

Contact us

Dyson School of Design Engineering
Imperial College London
25 Exhibition Road
South Kensington
London
SW7 2DB

design.engineering@imperial.ac.uk
Tel: +44 (0) 20 7594 8888
