With artificial intelligence (AI) having an increasing impact on our lives, David Shrier considers how we can develop profitable AI ventures that also benefit society

Artificial intelligence is changing business and society at a scale not seen since the First Industrial Revolution, with estimates suggesting that AI could displace 50 per cent of the workforce within the next five to seven years – and some believe up to 99 per cent could be displaced over the next 10 to 20 years.

How do we respond to a technology that can change how we work and live so quickly? The economic and social implications are simultaneously thrilling and frightening.

If you can run a bank 50 to 80 per cent more profitably, don’t you have an obligation to your shareholders to do so? What if it were now possible to help the 3.4 billion people in the world who are underbanked, because AI now makes it cost-effective to serve them? If you can manage transportation networks with 94 per cent fewer traffic fatalities, saving one million lives per year, do you have any other choice? Supply chain shortages in the UK are being blamed in part on the inability to find enough lorry drivers – a problem that autonomous driving systems could also address.

On the other hand, if not closely supervised, artificial intelligence can quickly learn racist and sexist behaviour and breach regulations. When we hand decision-making to a machine, how do we make sure that it reflects the values of our societies? How do we ensure that AI follows the law? And how should we think about the fact that, in the process of making a bank 80 per cent more profitable, we throw tens of thousands of people out of work?

I spend a good deal of my time educating entrepreneurs building AI-related startups and working with large-scale enterprises on strategic growth and change. Behind this lies a body of research, frameworks and policy development around how we deploy technologies responsibly; the costs of failing to do so are existential, not only for individual organisations, but for entire countries and multilateral alliances.

In teaching students how to shape new enterprises around the powerful technology of artificial intelligence, we need to consider not only the complexities of long-cycle innovation and commercial market entry, but also the societal implications. 

Engineering trusted AI

A growing body of researchers has been distilling the common ethical values and frameworks of dozens of countries, identifying universal principles that can be applied to create trusted artificial intelligence systems.

Professor Luciano Floridi and Josh Cowls at the University of Oxford have identified five common principles for ethical AI (a sketch of how a venture might track them follows the list):

  1. Beneficence: the AI does something good. This could be at the micro level, such as making money for my trading portfolio or helping me with my exercise regimen, or at the macro level, such as helping stop the spread of coronavirus (COVID-19).

  2. Non-maleficence: in doing something good, the AI should not also do something bad. If my portfolio trading AI makes money for me, it should not do so illegally or at others’ expense – for example, by manipulating the price of rice futures so that poor people starve.

  3. Explicability: the AI should be able to explain how it arrived at a decision.

  4. Justice and fairness: AI systems should embed common societal mores around justice and fairness in the decisions they make.

  5. Autonomy: humans should retain the power to decide – the AI should do what we direct it to do, and not simply what it ‘wants’ to do.
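
These principles are easier to uphold when they are built into a venture’s process rather than left as aspirations. Below is a minimal sketch, in Python, of how a team might track them as a pre-launch checklist. The principle names follow Floridi and Cowls, but the EthicsReview class, its methods and the example findings are hypothetical illustrations, not an established tool.

```python
from dataclasses import dataclass, field

# The five Floridi-Cowls principles, used here as review dimensions.
PRINCIPLES = (
    "beneficence",      # the system does demonstrable good
    "non-maleficence",  # ...without doing harm as a side effect
    "autonomy",         # humans retain the power to decide
    "justice",          # outcomes are fair across groups
    "explicability",    # decisions can be explained and audited
)

@dataclass
class EthicsReview:
    """A pre-launch checklist: every principle needs recorded evidence."""
    system_name: str
    findings: dict = field(default_factory=dict)

    def record(self, principle: str, evidence: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        self.findings[principle] = evidence

    def gaps(self) -> list:
        # Principles with no recorded evidence block the launch review.
        return [p for p in PRINCIPLES if p not in self.findings]

review = EthicsReview("portfolio-trading-ai")
review.record("beneficence", "backtested returns improve client outcomes")
review.record("explicability", "per-decision feature attributions are logged")
print("unreviewed:", review.gaps())
# -> unreviewed: ['non-maleficence', 'autonomy', 'justice']
```

The point of the sketch is that an unreviewed principle is a visible gap, not an afterthought: the review cannot be declared complete while gaps() returns anything.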

Regulatory response

The European Union (EU), already a leader in digital privacy with the General Data Protection Regulation (GDPR), has now proposed new artificial intelligence rules that seek to mitigate potential harm from AI while still creating room for innovation; I am on the advisory committee to the European Parliament supporting this effort.

The EU plans to adopt a risk-based approach: AI applications would face a very light touch where the possibility of harm is low, and more significant regulation or an outright ban where the potential for harm is high.

For example, a marketing chatbot would simply need to disclose to users that they are interacting with a machine and not a human being. On the other hand, applications such as credit scoring or employment screening would be subject to strict oversight. Social scoring, similar to the system China has adopted, would be banned outright.
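
One way to picture this tiered logic is as a simple lookup from application type to obligation. The sketch below is illustrative only: the tier names loosely mirror the proposal’s risk bands, but the RiskTier enum, the use-case mapping and the obligations function are assumptions for demonstration, not the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "no additional obligations"
    LIMITED = "transparency duty: disclose that a machine is talking"
    HIGH = "strict oversight: conformity assessment, audits, human review"
    UNACCEPTABLE = "banned outright"

# Illustrative mapping of the article's examples onto the proposal's
# risk bands; real classification follows the legal text, not this table.
USE_CASE_TIER = {
    "marketing chatbot": RiskTier.LIMITED,
    "credit scoring": RiskTier.HIGH,
    "employment screening": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def obligations(use_case: str) -> str:
    # Unknown applications default to the cautious end of the scale.
    tier = USE_CASE_TIER.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in USE_CASE_TIER:
    print(obligations(case))
```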

While some might argue that there is enough regulation around data and privacy, the excesses of Facebook (recently rebranded as Meta) and others in recent years – and the concomitant impact on liberal democracy – have demonstrated that AI cannot simply be left to the free markets. As one former Facebook executive, Chamath Palihapitiya, has said, “The tools that we have created today are starting to erode the social fabric of how society works.” 

Implementing the future

My colleagues and I continue to wrestle with how we can embed these ideas into the new enterprises that are being generated by innovators at Imperial and elsewhere.

Our AI Ventures programme embeds exercises and frameworks to sensitise our students to the risks and implications of what they are building, and we have proposed an even broader toolkit to deliver a systematic means of enforcing trusted AI. Much remains to be done.

What is inevitable is that AI innovation is scaling up, and we need to respond. We have no other choice.

Main image: Who_I_am / iStock / Getty Images Plus via Getty Images.

About David Shrier

Professor of Practice, AI & Innovation; Director, Centre for Digital Transformation
David Shrier is a Professor of Practice, AI & Innovation at Imperial College Business School. His latest book, Augmenting Your Career: How to Win at Work in the Age of AI, was recently published by Little, Brown.

David is a member of Imperial's Centre for Digital Transformation, and leads both the Translational AI Lab (TRAIL), applying trusted AI solutions to problems of business, government, and society, and the Institutional Digital Assets Project, helping organisations understand and adopt digital assets such as crypto.

You can find the author's full profile, including publications, at their Imperial Profile.
