A new mathematical principle has been designed to combat AI bias towards making unethical and costly commercial choices.
Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and businesses manage artificial intelligence (AI) systems’ biases towards making unethical, and potentially very costly and damaging, commercial choices.
AI is increasingly deployed in commercial situations, for example to set the prices of insurance products to be sold to specific customers. The AI will choose from many potential strategies, some of which may be discriminatory or may otherwise misuse customer data in ways that later lead to severe penalties for the company. For example, regulators may levy significant fines and customers may boycott the company.
Ideally, such unethical strategies would be removed from the pool of potential strategies beforehand, but the AI has no moral sense and so cannot distinguish ethical strategies from unethical ones.
In an environment in which decisions are increasingly made without human intervention, there is therefore a strong incentive to know under what circumstances an AI system might adopt an unethical strategy, and to reduce that risk or, where possible, eliminate it entirely.
Unethical Optimization Principle
Co-author of the paper Dr Heather Battey, from the Department of Mathematics at Imperial, said: “Our work shows that certain types of commercial artificial intelligence systems can significantly amplify the risk of choosing unethical strategies relative to a less sophisticated system that would pick a strategy arbitrarily.
"This suggests that it may be necessary to rethink the way AI operates in very large strategy spaces, so that unethical outcomes are rejected by the optimisation process.”
The research team found that even if only a small fraction of the strategies in a pool are unethical, an optimising AI system may be disproportionately likely to choose one of them, because unethical strategies tend to be among the most profitable.
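To make that amplification effect concrete, here is a minimal Monte Carlo sketch in Python. All of the numbers in it (the pool size, the 1% unethical share and the assumed profitability edge of unethical strategies) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative sketch: 1,000 candidate strategies, 1% of them unethical.
# Unethical strategies are assumed to carry a higher expected return
# (an "edge" of 1.5 standard deviations); none of these numbers come
# from the paper itself.
rng = np.random.default_rng(0)

trials = 10_000
n_ethical, n_unethical = 990, 10
edge = 1.5

ethical_best = rng.normal(0.0, 1.0, (trials, n_ethical)).max(axis=1)
unethical_best = rng.normal(edge, 1.0, (trials, n_unethical)).max(axis=1)

# A naive optimiser always takes the single most profitable strategy.
share_picked = (unethical_best > ethical_best).mean()
print(f"Unethical share of the pool:           {n_unethical / (n_ethical + n_unethical):.1%}")
print(f"Optimiser picks an unethical strategy: {share_picked:.1%} of the time")
```

A system that picked a strategy arbitrarily would land on an unethical one only about 1% of the time; under these assumptions the optimiser does so far more often, which is the amplification the authors describe.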
In a paper published in Royal Society Open Science, they therefore propose a new ‘Unethical Optimization Principle’, together with a simple formula to estimate its impact. The Principle is designed to help regulators and companies find problematic strategies hidden among a large pool of potential strategies, and it suggests how the AI search algorithm should be modified to avoid them.
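The paper’s own formula is not reproduced here, but under the toy Gaussian model sketched above the amplification can be computed exactly from order statistics: the probability that the overall maximum return comes from the unethical group is an integral over the density of an unethical return, weighted by the chance that every other strategy falls below it.

```python
from scipy import stats
from scipy.integrate import quad

# Exact calculation for the toy model above: the probability that the
# most profitable of 1,000 strategies is one of the 10 unethical ones.
n_ethical, n_unethical, edge = 990, 10, 1.5

def integrand(x):
    f_u = stats.norm.pdf(x, loc=edge)   # density of one unethical return
    F_u = stats.norm.cdf(x, loc=edge)   # CDF of an unethical return
    F_e = stats.norm.cdf(x)             # CDF of an ethical return, N(0, 1)
    # n_unethical ways for an unethical strategy to attain the maximum at x,
    # with every other strategy falling below x
    return n_unethical * f_u * F_u ** (n_unethical - 1) * F_e ** n_ethical

p_unethical_optimum, _ = quad(integrand, -10.0, 15.0)
print(f"P(optimum is unethical) = {p_unethical_optimum:.3f}")  # vs a 1% base rate
```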
Co-author Professor Robert MacKay of the Mathematics Institute of the University of Warwick said: “Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden in a large strategy space. Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.”
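One hypothetical way to act on that suggestion is sketched below: review candidate strategies in descending order of estimated return, so that compliance effort is concentrated exactly where optimisation concentrates risk, and choose the best strategy that survives review. The `compliance_review` function is a stand-in for a human or rule-based check; nothing here is prescribed by the paper.

```python
from typing import Callable, Iterable, Optional, TypeVar

S = TypeVar("S")

def pick_vetted_strategy(
    strategies: Iterable[S],
    estimated_return: Callable[[S], float],
    compliance_review: Callable[[S], bool],
) -> Optional[S]:
    """Return the most profitable strategy that passes inspection.

    Candidates are reviewed in descending order of estimated return,
    because optimisation pushes problematic strategies towards the top
    of exactly this ranking.
    """
    for s in sorted(strategies, key=estimated_return, reverse=True):
        if compliance_review(s):  # reject strategies flagged by review
            return s
    return None
```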
'An unethical optimization principle' by Nicholas Beale, Heather Battey, Anthony C. Davison and Robert S. MacKay is published in Royal Society Open Science.
Based on press releases by the University of Warwick and EPFL (Lausanne).