Which way forward?


With the advent of technology that can learn and change itself, and the integration of vast data sources tracking every detail of human lives, engineering now entails decision-making with complex moral implications and global impact. As part of daily practice, technologists face value-laden tensions concerning privacy, justice, transparency, wellbeing, and human rights, as well as questions that strike at the very nature of what it is to be human.

We recently edited a Special Issue of IEEE Transactions on Technology and Society, “After Covid-19: Crises, Ethics, and Socio-Technical Change”.

"Our research works to understand the paths toward a future in which technology benefits all of humankind and the planet. We collaborate with social scientists to develop practical methods and socio-technical solutions that equip engineers and designers with the tools necessary for practicing responsibly through every step of the development process."

Projects

Responsible Tech Design Library

Find out more about tools and methods for ethical practice in technology design.

Staff

Prof. Rafael Calvo

Dr Celine Mougenot

Prof. Sebastian Deterding

Dr Fangzhou You

Laura Moradbakhti

Dr Juan Pablo Bermudez

Marco Da Re

Publications

  • Journal article
    Peters D, Vold K, Robinson D, Calvo R, et al., 2020, Responsible AI—two frameworks for ethical design practice, IEEE Transactions on Technology and Society, Vol: 1, Pages: 34-47, ISSN: 2637-6415

    In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating ethical analysis into engineering practice to help address this challenge. We then provide the outcomes of an ethical analysis informed by these frameworks, conducted within the specific context of Internet-delivered therapy in digital mental health. We hope both the frameworks and analysis can provide tools and insights, not only for the context of digital healthcare but also for data-enabled and intelligent technology development more broadly.

  • Journal article
    Calvo RA, Peters D, Cave S, 2020, Advancing impact assessment for intelligent systems, Nature Machine Intelligence, Vol: 2, Pages: 89-91, ISSN: 2522-5839

  • Book chapter
    Calvo RA, Peters D, Vold K, Ryan RM, et al., 2020, Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry, Philosophical Studies Series, Pages: 31-54

    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model (“METUX”) that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: (1) There are autonomy-related consequences to algorithms representing the interests of third parties, and they are not impartial and rational extensions of the self, as is often perceived; (2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and (3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.


Contact us

Dyson School of Design Engineering
Imperial College London
25 Exhibition Road
South Kensington
London
SW7 2DB

design.engineering@imperial.ac.uk
Tel: +44 (0) 20 7594 8888

Campus Map