New AI hub aims to unravel the fundamental mathematics behind AI

Imperial researchers are co-leading a £10m artificial intelligence (AI) hub that will develop frameworks for understanding machine learning models.

The AI Hub is a consortium of six universities, with 13 leading players from British and international industry and the public sector as partners. Dr Anthea Monod from the Department of Mathematics is the Imperial College London Node Lead, and will also sit on the Hub’s Board of Directors and chair its Equality, Diversity and Inclusion Committee.

Professor Tom Coates and Professor Jeroen Lamb from the Department of Mathematics will also act as co-investigators in the AI Hub.

Led by Professor Michael Bronstein from the University of Oxford’s Department of Computer Science, the Hub forms part of the UK’s drive to position itself as a global leader in AI and machine learning (ML).

Funded by the Engineering and Physical Sciences Research Council (EPSRC), the consortium will study the mathematical and computational underpinnings of AI using pure mathematics. Its research will go towards building more principled, effective, and safer models.

Dr Monod said: “AI is something that is very fast moving and is generating a lot of exciting tools but its extent is something that’s still unknown.”

“The approach of the Hub will be to develop a common framework based on foundational mathematics so that when the next big AI thing catches on, we have a basis to understand what it's doing, where it's coming from, and what we can expect from it,” she said.

An ‘Erlangen Programme’ for AI

The 19th century was an extremely productive period for mathematics. Mathematicians like Gauss and Riemann created new and exciting fields within the larger umbrella of geometry, such as the non-Euclidean geometries; Riemannian geometry later became critical to Albert Einstein’s theory of general relativity.

What was not known at the time was how each of these seemingly disparate fields related to the others. The solution came in 1872, when a young professor called Felix Klein unified these sub-disciplines of geometry – not by looking at what differed between them, but at what remained the same.

The study of ‘invariants’ in geometry revealed structures that are preserved when shapes are transformed. Seemingly complicated procedures were vastly simplified once mathematicians could identify what stays constant as one system transforms into another.
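
To make the idea of an invariant concrete: a rotation changes every coordinate of a shape, but not the distances between its points, so the distance matrix is an invariant of rotations. The minimal sketch below is a hypothetical NumPy illustration, not code from the Hub; it checks this numerically.

```python
import numpy as np

# A rigid rotation of the plane by angle theta.
theta = np.pi / 5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
rotated = triangle @ R.T  # rotate each point about the origin

def pairwise_distances(points):
    # Matrix of distances between every pair of points.
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# The coordinates change, but the distance matrix does not.
print(np.allclose(triangle, rotated))                    # False
print(np.allclose(pairwise_distances(triangle),
                  pairwise_distances(rotated)))          # True
```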

Klein’s Erlangen Programme was a breakthrough because it standardised the way mathematicians approached geometry.

AI in our present day has many similarities to geometry in the 19th century. Scientists currently have a diverse zoo of ways to construct an AI model, known as ‘architectures’, but they lack a taxonomy that can identify the fundamental differences and similarities between them.

The goal of the new AI Hub will be to explore the unifying principles between different AI architectures. With a better understanding of a common framework, researchers can then use their knowledge to develop more effective and safer models.
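
One candidate for such a unifying principle – central to the ‘geometric deep learning’ viewpoint associated with Professor Bronstein – is symmetry: architectures can be classified by which transformations of their input leave their output unchanged. The toy sketch below (illustrative NumPy only, not the Hub’s code) contrasts a permutation-invariant ‘set’ model with an order-sensitive one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy architecture 1: embed each element, then sum-pool.
# Summing makes it permutation-INVARIANT: input order cannot matter.
W = rng.normal(size=(3, 4))

def set_model(points):
    return np.tanh(points @ W).sum(axis=0)

# Toy architecture 2: flatten the input and project it.
# This one is permutation-SENSITIVE: it sees the order of the elements.
V = rng.normal(size=(5 * 3, 4))

def sequence_model(points):
    return np.tanh(points.reshape(-1)) @ V

x = rng.normal(size=(5, 3))  # five 3-dimensional feature vectors
x_reordered = x[::-1]        # the same set, listed in reverse order

print(np.allclose(set_model(x), set_model(x_reordered)))            # True
print(np.allclose(sequence_model(x), sequence_model(x_reordered)))  # False
```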

Complicated geometries

The growing availability of high-quality datasets and computational resources, like powerful and commercially available graphics processing units (GPUs), has created explosive growth in the performance of AI models.

ChatGPT, powered by GPT-3.5, became the fastest-growing consumer internet application on record in early 2023, hitting 100 million monthly active users only two months after its launch.

However, building models that can handle increasingly complex data may become a significant bottleneck for AI development in the future.

Such data includes healthcare records of a patient’s symptoms, test results and behaviour – variables that are not only determined by a large number of factors, but may also influence one another.

Dr Monod says that scientists now face the challenge of processing data that is far less structured than conventional datasets, which may simply take the form of a collection of numbers or vectors: “We need more tools and abstractions to deal with data that go beyond the norm and are more complicated, such as networks and text.”
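
As a rough illustration of the gap she describes, compare a table of fixed-length vectors with a network, where the links between variables carry information of their own. The tiny ‘clinical graph’ below is hypothetical and for illustration only.

```python
import numpy as np

# A conventional dataset: every sample is the same fixed-length vector.
# (Hypothetical values: temperature, systolic BP, diastolic BP.)
tabular = np.array([[36.6, 120.0, 80.0],
                    [38.1, 135.0, 90.0]])
print(tabular.shape)  # (2, 3): two samples, three features each

# A network: variable size, and the *connections* carry information.
# Hypothetical graph of interacting clinical variables, as an adjacency list.
graph = {
    "nodes": {"temperature": 36.6, "blood_pressure": 120.0, "symptom_score": 3.0},
    "edges": [("temperature", "symptom_score"),      # fever raises the score
              ("blood_pressure", "symptom_score")],  # so does blood pressure
}

# Forcing the graph into a flat vector silently discards the edges,
# i.e. precisely the structure that made the data informative.
flattened = np.array(list(graph["nodes"].values()))
print(flattened.shape)  # (3,): node values survive, relationships do not
```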

Developing a framework for AI would not only help scientists build models that can handle such complex datasets, but could also make AI safer. By understanding the decision-making of models that regulate their own learning, scientists can predict their limitations and design appropriate levels of human intervention.

The consortium will work with industry and public partners to apply its research. The AI Hub will act as a liaison and facilitator for connections between academia, industry, and the government.

There will be opportunities for researchers to complete secondments and placements during the five-year lifetime of the Hub.

Dr Monod said: “It’s exciting because we have the potential to reach all sectors in the UK, and to allow the EPSRC to really achieve its goal of putting the UK at the forefront of AI research.”

Reporter

Jacklin Kwan
Faculty of Natural Sciences

Tags:

Research, Artificial-intelligence, Big-data