Student Projects

2021/22

Uncovering the Neural Code of DRL Agents

Neuroscience has evolved exquisite tools to probe the behaviour of neurons in biology. Yet very few of these tools are applied to decipher the encoding of deep neural networks. This is particularly true in the field of deep reinforcement learning (DRL), where layered artificial neural networks learn mappings from observations to policies, or control signals. In this project, we build on prior work on agents trained to perform visuo-motor control, such as guiding a robot arm towards a target. The aim is to answer specific questions about parameter distributions and activation statistics, comparing these as training algorithms are altered or environments perturbed.
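
To make the kind of probing concrete, here is a minimal sketch, assuming a PyTorch policy network: the small MLP is a hypothetical stand-in for a trained agent, and the forward hooks collect per-layer activation statistics while the final loop summarises parameter distributions. It is an illustration of the analysis, not the project's tooling.

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained visuo-motor policy network.
policy = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 7),          # e.g. joint velocities for a robot arm
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Keep a detached copy so statistics can be computed after the pass.
        activations[name] = output.detach()
    return hook

for name, module in policy.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

# Probe with a batch of (synthetic) observations.
obs = torch.randn(256, 32)
_ = policy(obs)

for name, act in activations.items():
    sparsity = (act == 0).float().mean().item()   # fraction of silent units
    print(f"layer {name}: mean={act.mean():.3f} std={act.std():.3f} sparsity={sparsity:.2f}")

# Parameter (weight) distributions, layer by layer.
for name, param in policy.named_parameters():
    if "weight" in name:
        print(f"{name}: mean={param.mean():.3f} std={param.std():.3f}")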


Neural Architectures for Predicting the Behaviour of Dynamical Systems

Fuelled by artificial neural architectures and backpropagation, data-driven approaches now dominate the engineering of systems for pattern recognition. However, the predictive modelling of the behaviour of complex dynamical systems – such as those governed by systems of coupled differential equations – remains challenging in two key ways: (i) long-term prediction and (ii) out-of-distribution prediction. Recent progress in disentangled representations (Fotiadis et al., 2021) has nudged the field forward, but it is now time to return to the underlying neural architectures, seeking those that are better suited to the intrinsic dynamics implicit in a system of equations. We seek to explore different approaches to this problem, including Siamese network structures, progressive network growth or, perhaps, neurons that incorporate some form of plasticity.
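
The following toy sketch illustrates the prediction problem itself rather than any architecture the project proposes; the damped oscillator, network size and training setup are all placeholder assumptions. A small network learns the one-step map of the system, and long-term behaviour is then assessed by rolling that map forward, which is where errors typically accumulate.

import numpy as np
import torch
import torch.nn as nn

dt, gamma, omega = 0.05, 0.1, 2.0

def step(state):
    """One Euler step of a damped harmonic oscillator with state (x, v)."""
    x, v = state[..., 0], state[..., 1]
    return np.stack([x + dt * v, v + dt * (-2 * gamma * v - omega**2 * x)], axis=-1)

# Build (state, next_state) training pairs from random initial conditions.
states = np.random.uniform(-1, 1, size=(4096, 2)).astype(np.float32)
targets = step(states).astype(np.float32)

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(500):
    pred = model(torch.from_numpy(states))
    loss = nn.functional.mse_loss(pred, torch.from_numpy(targets))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Long-term prediction: roll the learned one-step map forward and compare
# against the true trajectory.
state_true = np.array([[1.0, 0.0]], dtype=np.float32)
state_pred = state_true.copy()
with torch.no_grad():
    for t in range(200):
        state_true = step(state_true)
        state_pred = model(torch.from_numpy(state_pred)).numpy()
print("max error after 200 steps:", np.abs(state_true - state_pred).max())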


Cycle-Consistent Joint Representations for Diagnostic Images and Text

Medical image analysis has taken huge steps forward due to the emergence of practical machine learning algorithms built around deep networks. However, the standard pipeline of medical image analysis involves a tedious process of human interpretation, sometimes with segmentation. Simple labelling is the norm and, while adequate, it does not scale well. Nor is it necessarily sustainable: when new imaging methods come along, annotation has to start from scratch. The aim of this project is to generalise the principle of cycle-consistent training to provide a learning signal that can aid or regularise the learning of joint image/text representations. The project is run in collaboration with Third Eye Intelligence.
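
As a rough illustration of the cycle-consistency principle in this setting – the embedding dimensions, linear mapping networks and L1 penalty below are illustrative assumptions, not the project's design – a round trip from one modality's embedding space to the other and back can be penalised for failing to return to its starting point, giving a learning signal that does not rely on exhaustive paired annotation.

import torch
import torch.nn as nn

d_img, d_txt = 512, 256                      # assumed embedding dimensions

img_to_txt = nn.Linear(d_img, d_txt)         # maps image embeddings into the text space
txt_to_img = nn.Linear(d_txt, d_img)         # and back again

def cycle_loss(z_img, z_txt):
    """Penalise failure to return to the starting embedding after a round trip."""
    img_cycle = txt_to_img(img_to_txt(z_img))     # image -> text -> image
    txt_cycle = img_to_txt(txt_to_img(z_txt))     # text -> image -> text
    return (nn.functional.l1_loss(img_cycle, z_img)
            + nn.functional.l1_loss(txt_cycle, z_txt))

# Example usage with stand-in embeddings from hypothetical pretrained encoders.
z_img = torch.randn(8, d_img)   # e.g. outputs of an image encoder
z_txt = torch.randn(8, d_txt)   # e.g. outputs of a report/text encoder
loss = cycle_loss(z_img, z_txt)
loss.backward()                 # gradients flow into both mapping networks
print(loss.item())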

Contact us

Prof Anil Anthony Bharath
a.bharath@imperial.ac.uk

Telephone
+44 (0)20 7594 5463

Address
Room 4.12
Department of Bioengineering
Royal School of Mines Building
Imperial College London