Talk Title
Using Machine Learning Tools to Study Learning and Adaptation in the Brain

Talk Summary
Animals use afferent feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that counteracts its effects. Primary motor cortex (M1) is intimately involved in both processes, integrating inputs from various sensorimotor brain regions to update the motor output. Here, we investigate whether feedback-based motor control and motor adaptation may share a common implementation in M1 circuits. We trained a recurrent neural network to control its own output through an error feedback signal, which allowed it to recover rapidly from external perturbations. Implementing a biologically plausible plasticity rule based on this same feedback signal also enabled the network to learn to counteract persistent perturbations through a trial-by-trial process, in a manner that reproduced several key aspects of human adaptation. Moreover, the resultant changes in network activity were also observed in neural population recordings from monkey M1. Online movement correction and longer-term motor adaptation may thus share a common implementation in neural circuits.
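For readers curious about the general idea, the toy sketch below illustrates the kind of architecture the summary describes: a rate-based recurrent network whose readout error is fed back as an input (supporting online correction), with a simple feedback-driven weight update applied trial by trial under a persistent output perturbation. This is not the speaker's model; the network size, dynamics, plasticity rule, and all parameter values here are illustrative assumptions.

```python
# Minimal illustrative sketch (not the authors' model): a vanilla rate RNN
# receives its own output error as a feedback input, and a simple
# feedback-driven update is applied to the recurrent weights on each trial
# while a persistent perturbation biases the motor output.
import numpy as np

rng = np.random.default_rng(0)
n, n_out, T, dt, tau = 100, 2, 200, 0.01, 0.05   # assumed sizes and time constants
lr = 1e-3                                        # assumed learning rate

W_rec = rng.normal(0, 1.0 / np.sqrt(n), (n, n))          # recurrent weights
W_fb  = rng.normal(0, 1.0 / np.sqrt(n_out), (n, n_out))  # error-feedback input weights
W_out = rng.normal(0, 1.0 / np.sqrt(n), (n_out, n))      # readout (motor output) weights

# Toy 2-D target trajectory standing in for a reach profile.
t_axis = np.linspace(0, np.pi, T)
target = np.stack([np.sin(t_axis), np.cos(t_axis)], axis=1)
perturbation = np.array([0.3, 0.0])  # persistent bias added to the output

def run_trial(W, learn=False):
    """Simulate one trial; optionally apply the feedback-driven weight update."""
    x = np.zeros(n)
    errs = []
    for t in range(T):
        r = np.tanh(x)
        y = W_out @ r + perturbation      # perturbed motor output
        err = target[t] - y               # error feedback signal
        # The error enters the network as an input, allowing online correction.
        x += dt / tau * (-x + W @ r + W_fb @ err)
        if learn:
            # Toy plasticity rule driven by the same feedback signal
            # (a stand-in, not the biologically plausible rule from the talk).
            W = W + lr * np.outer(W_fb @ err, r)
        errs.append(np.linalg.norm(err))
    return W, float(np.mean(errs))

W = W_rec
for trial in range(20):
    W, mean_err = run_trial(W, learn=True)
    print(f"trial {trial:2d}  mean |error| = {mean_err:.3f}")
```

Running the loop prints the mean output error on each trial, so one can inspect how the feedback-driven updates change behaviour across repeated exposures to the perturbation.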

Speaker Bio – Professor Claudia Clopath
Professor Claudia Clopath leads the Computational Neuroscience Laboratory in the Bioengineering Department at Imperial College London. Her research focuses on learning and memory in neuroscience, using mathematical and computational tools to model synaptic plasticity. Claudia holds an MSc in Physics from EPFL and a PhD in Computer Science. She completed postdoctoral fellowships in neuroscience at Paris Descartes and Columbia University.

Time: 14.00 – 15.00
Date: Thursday 18 July
Location: Hybrid Event | Online and in I-X Conference Room, Level 5
Translation and Innovation Hub (I-HUB)
Imperial White City Campus
84 Wood Lane
W12 0BZ

Link to join online via Teams.

Any questions, please contact Andreas Joergensen (a.joergensen@imperial.ac.uk).
