Which way forward?


With the advent of technology that can learn and change itself, and the integration of vast data sources tracking every detail of human lives, engineering now entails decision-making with complex moral implications and global impact. As part of daily practice, technologists face value-laden tensions concerning privacy, justice, transparency, wellbeing, and human rights, as well as questions that strike at the very nature of what it means to be human.

We recently edited a Special Issue of IEEE Transactions on Technology and Society on “After Covid-19: Crises, Ethics, and Socio-Technical Change”.

Our research works to understand the paths toward a future in which technology benefits all of humankind and the planet. We collaborate with social scientists to develop practical methods and socio-technical solutions to equip engineers and designers with the tools necessary for practising responsibly through every step of the development process.

Projects

Responsible Tech Design Library

Find out more about tools and methods for ethical practice in technology design.

Staff

Prof. Rafael Calvo

Dr Celine Mougenot

Prof. Sebastian Deterding

Dr Fangzhou You

Laura Moradbakhti

Dr Juan Pablo Bermudez

Marco Da Re

Citation

BibTeX format

@inbook{Calvo:2020:10.1007/978-3-030-50585-1_2,
author = {Calvo, RA and Peters, D and Vold, K and Ryan, RM},
booktitle = {Philosophical Studies Series},
doi = {10.1007/978-3-030-50585-1_2},
pages = {31--54},
title = {Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry},
url = {http://dx.doi.org/10.1007/978-3-030-50585-1_2},
year = {2020}
}

Abstract

Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model (“METUX”) that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: (1) There are autonomy-related consequences to algorithms representing the interests of third parties, and they are not impartial and rational extensions of the self, as is often perceived; (2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and (3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.

Contact us

Dyson School of Design Engineering
Imperial College London
25 Exhibition Road
South Kensington
London
SW7 2DB

design.engineering@imperial.ac.uk
Tel: +44 (0) 20 7594 8888

Campus Map