AI promises to let employees do more, to a higher standard, and in less time. Delegating tasks to intelligent systems can free employees to direct their efforts towards new business needs and opportunities.
However, as my colleagues have highlighted, there are safety risks that need to be addressed if we’re to make this positive vision a reality. The most common concerns raised relate to technology misuse, bias within systems, and questions of data copyright and ownership.
But I would like to draw your attention to a different type of safety threat, one that is familiar to employers across sectors because it affects every measure of success across an organisation: engagement.
Employees who are not engaged – that is, who do not feel meaningfully or consistently motivated in their work – are less effective, less productive and less reliable, and they experience poorer wellbeing. When AI systems aren’t integrated carefully into the workplace, motivation and engagement can take a hit.
Creative and committed employees may be drained of initiative as they watch the aspects of their jobs they value most taken over by automation. Employees who thrive on connection with people, or on seeing the outcomes of their work, can languish as those connections are severed by new, disconnected, automated processes. And we are increasingly familiar with the ways AI surveillance can breed mistrust, and with how AI-induced skill atrophy can degrade competence and self-efficacy.
In short, when AI joins a human system, it needs to work well within that system, and that means taking psychology and motivational structures into account. It requires foregrounding the experiences of all the employees who are affected.
We work on designing technologies in ways that support precisely the factors that fuel motivation, engagement and wellbeing, and we have methods for conducting design and research together with the people affected. That experience has made one thing clear: without addressing this human psychological layer, even the cleverest technology can fail. So, my takeaway message is this: systematically addressing basic psychological needs is critical to the success of AI, and to achieving the gains in engagement, productivity, safety, competence and trust we seek.
Trust in and enthusiasm for AI innovation are only possible if the needs and experiences of the people affected by these technologies are deeply understood, which means involving employees, service users and the broader public in the design process. We need to treat stakeholder expertise as equally important to technical expertise.
Design engineers work like policymakers: both must first understand stakeholders’ needs and values. Then policymakers write policy, while we write code or design documents. For AI policy, we need to work together.