Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373 | a.faisal | Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



 

Publications


236 results found

Parkin B, Daws R, Das Neves I, Violante I, Soreq E, Faisal A, Sandrone S, Lao-Kaim N, Martin-Bastida A, Roussakis A-A, Piccini P, Hampshire A et al., 2021, Dissociable effects of age and Parkinson's disease on instruction based learning, Brain Communications, Vol: 3, ISSN: 2632-1297

The cognitive deficits associated with Parkinson’s disease vary across individuals and change across time, with implications for prognosis and treatment. Key outstanding challenges are to define the distinct behavioural characteristics of this disorder and develop diagnostic paradigms that can assess these sensitively in individuals. In a previous study, we measured different aspects of attentional control in Parkinson’s disease using an established fMRI switching paradigm. We observed no deficits for the aspects of attention the task was designed to examine; instead those with Parkinson’s disease learnt the operational requirements of the task more slowly. We hypothesized that a subset of people with early-to-mid stage Parkinson’s might be impaired when encoding rules for performing new tasks. Here, we directly test this hypothesis and investigate whether deficits in instruction-based learning represent a characteristic of Parkinson’s disease. Seventeen participants with Parkinson’s disease (8 male; mean age: 61.2 years), 18 older adults (8 male; mean age: 61.3 years) and 20 younger adults (10 male; mean age: 26.7 years) undertook a simple instruction-based learning paradigm in the MRI scanner. They sorted sequences of coloured shapes according to binary discrimination rules that were updated at two-minute intervals. Unlike common reinforcement learning tasks, the rules were unambiguous, being explicitly presented; consequently, there was no requirement to monitor feedback or estimate contingencies. Despite its simplicity, a third of the Parkinson’s group, but only one older adult, showed marked increases in errors, 4 SD greater than the worst performing young adult. The pattern of errors was consistent, reflecting a tendency to misbind discrimination rules. The misbinding behaviour was coupled with reduced frontal, parietal and anterior caudate activity when rules were being encoded, but not when attention was initially o…

Journal article

Ortega P, Faisal A, 2021, Deep Learning multimodal fNIRS & EEG signals for bimanual grip force decoding, Journal of Neural Engineering, Vol: 18, Pages: 1-21, ISSN: 1741-2560

Objective: Non-invasive BMIs offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although previously addressed in the unimanual case, controlling forces from both hands would enable BMI-users to perform a greater range of interactions. Here we investigate the decoding of hand-specific forces. Approach: We maximise cortical information by using EEG and fNIRS and developing a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles on which we trained and tested our deep-learning and linear decoders. Main results: The use of EEG and fNIRS improved the decoding of bimanual force, and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance was due to the detection of force generation. In particular, the detection of forces was hand-specific and better for the dominant right hand, and cnnatt was better at fusing EEG and fNIRS. Consequently, the study of cnnatt revealed that forces from each hand were encoded differently at the cortical level. cnnatt also revealed traces of the cortical activity being modulated by the level of force, which was not previously found using linear models. Significance: Our results can be applied to avoid hand cross-talk during hand force decoding and to increase the robustness of BMI robotic devices. In particular, we improve the fusion of EEG and fNIRS signals and offer hand-specific interpretability of the encoded forces, which is valuable during motor rehabilitation assessment.

Journal article

Festor P, Habil I, Jia Y, Gordon A, Faisal A, Komorowski M et al., 2021, Levels of Autonomy & Safety Assurance for AI-based Clinical Decision Systems, WAISE 2021: 4th International Workshop on Artificial Intelligence Safety Engineering

Conference paper

Maimon-Mor RO, Schone HR, Henderson Slater D, Faisal AA, Makin TR et al., 2021, Author response: Early life experience sets hard limits on motor learning as evidenced from artificial arm use

Journal article

Festor P, Luise G, Komorowski M, Faisal A et al., 2021, Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition, ICML2021 workshop on Interpretable Machine Learning in Healthcare

Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems. However, in high-risk environments such as healthcare, manufacturing, automotive or aerospace, it is often challenging to bridge the gap between an apparently optimal policy learned by an agent and its real-world deployment, due to the uncertainties and risk associated with it. Broadly speaking, RL agents face two kinds of uncertainty: 1. aleatoric uncertainty, which reflects randomness or noise in the dynamics of the world, and 2. epistemic uncertainty, which reflects the bounded knowledge of the agent due to model limitations and the finite amount of information/data the agent has acquired about the world. These two types of uncertainty carry fundamentally different implications for the evaluation of performance and the level of risk or trust. Yet these aleatoric and epistemic uncertainties are generally confounded, as standard and even distributional RL is agnostic to this difference. Here we propose how a distributional approach (UA-DQN) can be recast to render uncertainties by decomposing the net effects of each uncertainty. We demonstrate the operation of this method in grid world examples to build intuition and then show a proof-of-concept application for an RL agent operating as a clinical decision support system in critical care.

Conference paper
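
As an illustration of the aleatoric/epistemic split described in this abstract, the sketch below applies the law of total variance to an ensemble of distributional value predictors. This is a minimal illustration assuming an ensemble of K quantile-based return predictions per state-action pair; the array shapes and names are illustrative and not the paper's UA-DQN implementation.

import numpy as np

# Toy stand-in for an ensemble of K distributional Q-networks, each predicting
# N return quantiles for one state-action pair (values here are random).
rng = np.random.default_rng(0)
quantiles = rng.normal(loc=rng.normal(10, 1, size=(5, 1)), scale=2, size=(5, 51))

# Law of total variance: total = epistemic + aleatoric.
member_means = quantiles.mean(axis=1)     # each member's expected return
epistemic = member_means.var()            # disagreement across ensemble members
aleatoric = quantiles.var(axis=1).mean()  # average within-member spread
total = quantiles.var()                   # pooled variance over all values

print(f"epistemic={epistemic:.3f}  aleatoric={aleatoric:.3f}  total={total:.3f}")

A risk-aware policy can then, for instance, penalise actions whose aleatoric term is large, while treating a large epistemic term as a signal to gather more data.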


Stout D, Chaminade T, Apel J, Shafti A, Faisal AA et al., 2021, The measurement, evolution, and neural representation of action grammars of human behavior, Scientific Reports, Vol: 11, Pages: 1-13, ISSN: 2045-2322

Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~ 2.5 Mya Oldowan vs. ~ 0.5 Mya Acheulean tools, finding that, while using the same “alphabet” of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition.

Journal article
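
The abstract describes extracting action grammars from ethograms and quantifying structural complexity. The paper's grammar-induction method is not reproduced here; as a generic illustration of scoring the complexity of an action sequence over a fixed "alphabet" of elementary actions, the sketch below counts Lempel-Ziv phrases, a standard sequence-complexity baseline.

def lz_complexity(seq: str) -> int:
    # Parse seq left to right; each phrase is the shortest prefix extension
    # not already seen earlier in the string (LZ76-style parsing).
    i, phrases, n = 0, 0, len(seq)
    while i < n:
        l = 1
        while i + l <= n and seq[i:i + l] in seq[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

# Toy ethogram strings: a repetitive sequence scores lower than a richer one.
print(lz_complexity("ABABABABABAB"))    # low complexity
print(lz_complexity("ABACBDACDBCADB"))  # higher complexity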

Khwaja M, Pieritz S, Faisal AA, Matic A et al., 2021, Personality and engagement with digital mental health interventions, Pages: 235-239

Personalisation is key to creating successful digital health applications. Recent evidence links personality and preference for digital experience - suggesting that psychometric traits can be a promising basis for personalisation of digital mental health services. However, there is still little quantitative evidence from actual app usage. In this study, we explore how different personality types engage with different intervention content in a commercial mental health application. Specifically, we collected the Big Five personality traits alongside the app usage data of 126 participants using a mobile mental health app for seven days. We found that personality traits significantly correlate with the engagement and user ratings of different intervention content. These findings open a promising research avenue that can inform the personalised delivery of digital mental health content and the creation of recommender systems, ultimately improving the effectiveness of mental health interventions.

Conference paper

Wei X, Ortega P, Faisal A, 2021, Inter-subject deep transfer learning for motor imagery EEG decoding, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE, Pages: 1-4

Convolutional neural networks (CNNs) have become a powerful technique to decode EEG and have become the benchmark for motor imagery EEG Brain-Computer-Interface (BCI) decoding. However, it is still challenging to train CNNs on multiple subjects’ EEG without decreasing individual performance. This is known as the negative transfer problem, i.e. learning from dissimilar distributions causes CNNs to misrepresent each of them instead of learning a richer representation. As a result, CNNs cannot directly use multiple subjects’ EEG to enhance model performance. To address this problem, we extend deep transfer learning techniques to the EEG multi-subject training case. We propose a multi-branch deep transfer network, the Separate-Common-Separate Network (SCSN), based on splitting the network’s feature extractors for individual subjects. We also explore the possibility of applying Maximum Mean Discrepancy (MMD) to the SCSN (SCSN-MMD) to better align distributions of features from individual feature extractors. The proposed network is evaluated on the BCI Competition IV 2a dataset (BCICIV2a dataset) and our online recorded dataset. Results show that the proposed SCSN (81.8%, 53.2%) and SCSN-MMD (81.8%, 54.8%) outperformed the benchmark CNN (73.4%, 48.8%) on both datasets using multiple subjects. Our proposed networks show the potential to utilise larger multi-subject datasets to train an EEG decoder without being influenced by negative transfer.

Conference paper
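
The distribution alignment mentioned here (SCSN-MMD) relies on the Maximum Mean Discrepancy between feature distributions. Below is a minimal numpy sketch of the biased squared-MMD estimate with a Gaussian kernel between two subjects' feature batches; the feature dimensions, batch sizes and kernel bandwidth are illustrative assumptions, not the published network's values.

import numpy as np

def mmd2_gaussian(x, y, sigma=1.0):
    # Biased estimate of squared MMD with an RBF kernel.
    # x: (n, d) and y: (m, d) feature batches from two subjects' extractors.
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(1)
feats_a = rng.normal(0.0, 1.0, size=(64, 16))  # subject A features
feats_b = rng.normal(0.5, 1.0, size=(64, 16))  # subject B features (shifted)
print(f"MMD^2 = {mmd2_gaussian(feats_a, feats_b):.4f}")

Adding this quantity to the training loss pulls the two subjects' feature distributions together, which is the mechanism SCSN-MMD uses to mitigate negative transfer.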

Faisal AA, 2021, Putting touch into action, Science, Vol: 372, Pages: 791-792, ISSN: 0036-8075

Journal article

Li L, Faisal AA, 2021, Bayesian Distributional Policy Gradients, The Thirty-Fifth AAAI Conference on Artificial Intelligence, ISSN: 2159-5399

Conference paper

Patel BV, Haar S, Handslip R, Auepanwiriyakul C, Lee TM-L, Patel S, Harston JA, Hosking-Jervis F, Kelly D, Sanderson B, Borgatta B, Tatham K, Welters I, Camporota L, Gordon AC, Komorowski M, Antcliffe D, Prowle JR, Puthucheary Z, Faisal AA et al., 2021, Natural history, trajectory, and management of mechanically ventilated COVID-19 patients in the United Kingdom, Intensive Care Medicine, Vol: 47, Pages: 549-565, ISSN: 0342-4642

Purpose: The trajectory of mechanically ventilated patients with coronavirus disease 2019 (COVID-19) is essential for clinical decisions, yet the focus so far has been on admission characteristics without consideration of the dynamic course of the disease in the context of applied therapeutic interventions. Methods: We included adult patients undergoing invasive mechanical ventilation (IMV) within 48 h of intensive care unit (ICU) admission with complete clinical data until ICU death or discharge. We examined the importance of factors associated with disease progression over the first week, implementation and responsiveness to interventions used in acute respiratory distress syndrome (ARDS), and ICU outcome. We used machine learning (ML) and Explainable Artificial Intelligence (XAI) methods to characterise the evolution of clinical parameters, and our ICU data visualisation tool is available as a web-based widget (https://www.CovidUK.ICU). Results: Data for 633 adults with COVID-19 who underwent IMV between 01 March 2020 and 31 August 2020 were analysed. Overall mortality was 43.3% and highest with non-resolution of hypoxaemia [60.4% vs 17.6%; P < 0.001; median PaO2/FiO2 on the day of death was 12.3 (8.9–18.4) kPa] and non-response to proning (69.5% vs. 31.1%; P < 0.001). Two ML models using week-long data demonstrated an increased predictive accuracy for mortality compared to admission data (74.5% and 76.3% vs 60%, respectively). XAI models highlighted the increasing importance, over the first week, of PaO2/FiO2 in predicting mortality. Prone positioning improved oxygenation only in 45% of patients. A higher peak pressure (OR 1.42 [1.06–1.91]; P < 0.05), raised respiratory component (OR 1.71 [1.17–2.5]; P < 0.01) and cardiovascular component (OR 1.36 [1.04–1.75]; P < 0.05) of the sequential organ failure assessment (SOFA) score and raised lactate (OR 1.33 [0.99–1.79…

Journal article

Dudley WL, Faisal A, Shafti SA, 2021, Real-world to virtual - flexible and scalable investigations of human-agent collaboration, CHI Workshop 2021

Conference paper

Pieritz S, Khwaja M, Faisal A, Matic A et al., 2021, Personalised recommendations in mental health Apps: the impact of autonomy and data sharing, ACM Conference on Human Factors in Computing Systems (CHI), Publisher: ACM, Pages: 1-12

The recent growth of digital interventions for mental well-being prompts a call-to-arms to explore the delivery of personalised recommendations from a user's perspective. In a randomised placebo study with a two-way factorial design, we analysed the difference between an autonomous user experience as opposed to personalised guidance, with respect to both users’ preference and their actual usage of a mental well-being app. Furthermore, we explored users’ preferences for sharing their data to receive personalised recommendations, by juxtaposing questionnaires and mobile sensor data. Interestingly, self-reported results indicate a preference for personalised guidance, whereas behavioural data suggest that a blend of autonomous choice and recommended activities results in higher engagement. Additionally, although users reported a strong preference for filling out questionnaires instead of sharing their mobile data, the data source did not have any impact on actual app use. We discuss the implications of our findings and provide takeaways for designers of mental well-being applications.

Conference paper

Kadirvelu B, Burcea G, Quint JK, Costelloe CE, Faisal AA et al., 2021, Covid-19 does not look like what you are looking for: Clustering symptoms by nation and multi-morbidities reveal substantial differences to the classical symptom triad

COVID-19 is by convention characterised by a triad of symptoms: cough, fever and loss of taste/smell. The aim of this study was to examine clustering of COVID-19 symptoms based on underlying chronic disease and geographical location. Using a large global symptom survey of 78,299 responders in 190 different countries, we examined symptom profiles in relation to geolocation (grouped by country) and underlying chronic disease (single, co- or multi-morbidities) associated with a positive COVID-19 test result, using statistical and machine learning methods to group populations by underlying disease, countries, and symptoms. Taking the responses of 7980 responders with a COVID-19 positive test in the top 5 contributing countries, we find that the most frequently reported symptoms differ across the globe: for example, fatigue 4108 (51.5%), headache 3640 (45.6%) and loss of smell and taste 3563 (44.6%) are the most reported symptoms globally. However, symptom patterns differ by continent; India reported a significantly lower proportion of headache (22.8% vs 45.6%, p<0.05) and itchy eyes (7.0% vs. 15.3%, p<0.05) than other countries, as does Pakistan (33.6% vs 45.6%, p<0.05 and 8.6% vs 15.3%, p<0.05). Mexico and Brazil report significantly less of these symptoms. As with geographic location, we find people differed in their reported symptoms if they suffered from specific underlying diseases. For example, COVID-19 positive responders with asthma or other lung disease were more likely to report shortness of breath as a symptom, compared with COVID-19 positive responders who had no underlying disease (25.3% vs. 13.7%, p<0.05, and 24.2% vs. 13.7%, p<0.05). Responders with no underlying chronic diseases were more likely to report loss of smell and taste as a symptom (46%), compared with responders with type 1 diabetes (21.3%), type 2 diabetes (33.5%), lung disease (29.3%), or hype…

Journal article
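
The country-level comparisons quoted in the abstract (e.g., headache 22.8% vs 45.6%, p<0.05) are differences between two proportions. A two-proportion z-test of the kind sketched below would support such a claim; the counts used here are illustrative only, not the study's data.

import math
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    # Two-sided z-test for equality of two proportions, pooled standard error.
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Illustrative counts: headache in one country vs the rest of the sample.
z, p = two_proportion_z(228, 1000, 3640, 7980)
print(f"z = {z:.2f}, p = {p:.2g}")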

Shafti SA, Tjomsland J, Dudley W, Faisal A et al., 2021, Real-world human-robot collaborative reinforcement learning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 11161-11166, ISSN: 2153-0866

The intuitive collaboration of humans and intelligent robots (embodied AI) in the real world is an essential objective for many desirable applications of robotics. Whilst there is much research regarding explicit communication, we focus on how humans and robots interact implicitly, at the motor adaptation level. We present a real-world setup of a human-robot collaborative maze game, designed to be non-trivial and only solvable through collaboration, by limiting the actions to rotations of two orthogonal axes, and assigning each axis to one player. This results in neither the human nor the agent being able to solve the game on their own. We use deep reinforcement learning for the control of the robotic agent, and achieve results within 30 minutes of real-world play, without any type of pre-training. We then use this setup to perform systematic experiments on human/agent behaviour and adaptation when co-learning a policy for the collaborative game. We present results on how co-policy learning occurs over time between the human and the robotic agent, resulting in each participant’s agent serving as a representation of how they would play the game. This allows us to relate a person’s success when playing with different agents than their own, by comparing the policy of the agent with that of their own agent.

Conference paper

Ortega San Miguel P, Faisal AA, 2021, HemCNN: Deep Learning enables decoding of fNIRS cortical signals in hand grip motor tasks, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

We solve the fNIRS left/right hand force decoding problem with a data-driven approach, using a convolutional neural network architecture, the HemCNN. We test HemCNN’s capability to decode, in a streaming fashion, which hand, left or right, executed a grasp from fNIRS data. HemCNN learned to detect which hand executed a grasp at a naturalistic hand action speed of 1 Hz, outperforming standard methods. Since HemCNN does not require baseline correction and the convolution operation is invariant to time translations, our method can help to unlock fNIRS for a variety of real-time tasks. Mobile brain imaging and mobile brain machine interfacing can benefit from this to develop real-world neuroscience and practical human neural interfacing based on BOLD-like signals for the evaluation, assistance and rehabilitation of force generation, such as the fusion of fNIRS with EEG signals.

Conference paper
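
HemCNN's key property stated above, that convolution is invariant to time translations, is what lets a trained network score a sliding window of the fNIRS stream without trial-locked baseline correction. The PyTorch sketch below shows a minimal 1D CNN with that property; the channel counts, kernel sizes and layer depths are assumptions for illustration, not the published HemCNN architecture.

import torch
import torch.nn as nn

class TinyFNIRSNet(nn.Module):
    # Minimal shift-invariant 1D CNN over fNIRS channels for a
    # left-vs-right hand grip classification.
    def __init__(self, n_channels=16, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time: window-length agnostic
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classify(self.features(x).squeeze(-1))

net = TinyFNIRSNet()
window = torch.randn(8, 16, 100)  # 8 sliding windows, 16 channels, 100 samples
print(net(window).shape)          # torch.Size([8, 2])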

Wannawas N, Subramanian M, Faisal A, 2021, Neuromechanics-based deep reinforcement learning of neurostimulation control in FES cycling, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

Functional Electrical Stimulation (FES) can restore motion to a paralysed person’s muscles. Yet, controlling the stimulation of many muscles to restore the practical function of entire limbs is an unsolved problem. Current neurostimulation engineering still relies on 20th-century control approaches and correspondingly shows only modest results that require daily tinkering to operate at all. Here, we present our state-of-the-art deep reinforcement learning system developed for real-time adaptive neurostimulation of paralysed legs for FES cycling. Core to our approach is the integration of a personalised neuromechanical component into our reinforcement learning (RL) framework that allows us to train the model efficiently, without demanding extended training sessions with the patient, and to work out-of-the-box. Our neuromechanical component merges musculoskeletal models of muscle/tendon function with a multi-state model of muscle fatigue, to render the neurostimulation responsive to a paraplegic cyclist’s instantaneous muscle capacity. Our RL approach outperforms PID and Fuzzy Logic controllers in accuracy and performance. Crucially, our system learned to stimulate a cyclist’s legs from ramping up speed at the start to maintaining a high cadence in steady-state racing as the muscles fatigue. Part of our RL neurostimulation system was successfully deployed at the Cybathlon 2020 bionic Olympics in the FES discipline, with our paraplegic cyclist winning the Silver medal among 9 competing teams.

Conference paper
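
The abstract's personalised neuromechanical component includes a multi-state model of muscle fatigue. As a rough illustration of how such a state evolves inside an RL environment, here is a generic three-compartment (rested/active/fatigued) fatigue model with Euler integration; the rate constants and recruitment rule are illustrative assumptions, not the authors' personalised model.

def step_fatigue(m_rest, m_active, m_fatigued, u, dt=0.01, F=0.1, R=0.02):
    # One Euler step of a three-compartment fatigue model.
    # u in [0, 1] is the commanded activation (here, the FES intensity);
    # active fibres fatigue at rate F, fatigued fibres recover at rate R.
    recruit = u * m_rest  # rested fibres recruited by stimulation
    d_active = recruit - F * m_active
    d_fatigued = F * m_active - R * m_fatigued
    d_rest = R * m_fatigued - recruit
    return (m_rest + dt * d_rest,
            m_active + dt * d_active,
            m_fatigued + dt * d_fatigued)

# Constant stimulation: available force (m_active) decays as fatigue builds,
# which is exactly what a fatigue-aware controller must compensate for.
state = (1.0, 0.0, 0.0)
for _ in range(60_000):  # 10 simulated minutes at dt = 0.01 s
    state = step_fatigue(*state, u=0.6)
print("rested=%.2f active=%.2f fatigued=%.2f" % state)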

Ortega San Miguel P, Zhao T, Faisal AA, 2021, Deep real-time decoding of bimanual grip force from EEG & fNIRS, 10th International IEEE EMBS Conference on Neural Engineering (NER 2021), Publisher: IEEE

Non-invasive cortical neural interfaces have only achieved modest performance in cortical decoding of limb movements and their forces, compared to invasive brain-computer interfaces (BCIs). While non-invasive methodologies are safer, cheaper and vastly more accessible technologies, signals suffer from either poor resolution in the space domain (EEG) or the temporal domain (BOLD signal of functional Near Infrared Spectroscopy, fNIRS). The non-invasive BCI decoding of bimanual force generation and of the continuous force signal has not been realised before, and so we introduce an isometric grip force tracking task to evaluate the decoding. We find that combining EEG and fNIRS using deep neural networks works better than linear models to decode continuous grip force modulations produced by the left and the right hand. Our multi-modal deep learning decoder achieves 55.2% FVAF in force reconstruction and improves the decoding performance by at least 15% over each individual modality. Our results show a way to achieve continuous hand force decoding using cortical signals obtained with non-invasive mobile brain imaging, with immediate impact for rehabilitation, restoration and consumer applications.

Conference paper
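
The 55.2% FVAF figure refers to the fraction of variance accounted for, a standard goodness-of-fit measure in continuous force decoding. A minimal sketch of one common definition follows (normalising squared error by the mean-centred signal variance); conventions vary between papers, so the exact formula used above may differ.

import numpy as np

def fvaf(y_true, y_pred):
    # Fraction of variance accounted for: 1 - SSE / total variance.
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - sse / sst

t = np.linspace(0, 10, 500)
force = np.sin(t) ** 2  # toy grip-force profile
decoded = force + np.random.default_rng(2).normal(0, 0.2, t.size)
print(f"FVAF = {100 * fvaf(force, decoded):.1f}%")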

Subramanian M, Park S, Orlov P, Shafti A, Faisal A et al., 2021, Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform, 10th International IEEE EMBS Conference on Neural Engineering, Publisher: IEEE

We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms by decoding how the user looks at the environment to understand where they want to navigate their mobility device. However, many natural eye-movements are not relevant for action intention decoding, only some are, which places a challenge on decoding: the so-called Midas Touch Problem. Here, we present a new solution, consisting of 1. deep computer vision to understand what object a user is looking at in their field of view, 2. an analysis of where on the object’s bounding box the user is looking, and 3. a simple machine learning classifier to determine whether the overt visual attention on the object is predictive of a navigation intention to that object. Our decoding system ultimately determines whether the user wants to drive to, e.g., a door or just looks at it. Crucially, we find that when users look at an object and imagine they were moving towards it, the resulting eye-movements from this motor imagery (akin to neural interfaces) remain decodable. Once a driving intention, and thus also the location, is detected, our system instructs our autonomous wheelchair platform, the A.Eye-Drive, to navigate to the desired object while avoiding static and moving obstacles. Thus, for navigation purposes, we have realised a cognitive-level human interface, as it requires the user only to cognitively interact with the desired goal, not to continuously steer their wheelchair to the target (low-level human interfacing).

Conference paper
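
Steps 2 and 3 of the pipeline described above turn gaze-on-bounding-box geometry into an intention classification. The sketch below shows one plausible featurisation and a scikit-learn classifier; the feature set, the logistic regression model and the training data are all illustrative assumptions, not the published decoder.

import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_features(gaze_xy, box):
    # Normalised gaze position within an object's bounding box, plus spread
    # and dwell; gaze_xy: (n, 2) fixation samples, box: (x0, y0, x1, y1).
    x0, y0, x1, y1 = box
    u = (gaze_xy[:, 0] - x0) / (x1 - x0)
    v = (gaze_xy[:, 1] - y0) / (y1 - y0)
    return np.array([u.mean(), v.mean(), u.std(), v.std(), len(gaze_xy)])

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))     # hypothetical labelled feature vectors
y = rng.integers(0, 2, size=200)  # 1 = "drive to this object"
clf = LogisticRegression().fit(X, y)
fix = rng.normal(0.5, 0.1, size=(30, 2))  # 30 fixation samples on one object
print(clf.predict(gaze_features(fix, (0, 0, 1, 1)).reshape(1, -1)))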

Denghao L, Ortega San Miguel P, Wei X, Faisal AA et al., 2021, Model-agnostic meta-learning for EEG motor imagery decoding in brain-computer-interfacing, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

We introduce here the idea of meta learning for training EEG BCI decoders. Meta learning is a way of training machine learning systems so they learn to learn. We apply meta learning to a simple deep learning BCI architecture and compare it to transfer learning on the same architecture. Our meta learning strategy operates by finding optimal parameters for the BCI decoder so that it can quickly generalise between different users and recording sessions, thereby also generalising to new users or new sessions quickly. We tested our algorithm on the Physionet EEG motor imagery dataset. Our approach increased motor imagery classification accuracy from 60% to 80%, outperforming other algorithms under the little-data condition. We believe that establishing the meta learning or learning-to-learn approach will help neural engineering and human interfacing with the challenge of quickly setting up decoders of neural signals, making them more suitable for daily life.

Conference paper
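
Model-agnostic meta-learning optimises for fast adaptation: an inner gradient step per task (here, per subject or session) and an outer step through that adaptation. Below is a compact second-order MAML loop on a toy regression problem in PyTorch; the linear model and task distribution are stand-ins for illustration, not the paper's EEG decoder.

import torch

# Model y = x @ w + b with parameters as plain tensors, so the inner-loop
# gradient step stays differentiable for the outer (meta) update.
w = torch.zeros(1, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
meta_opt = torch.optim.SGD([w, b], lr=0.01)
inner_lr = 0.1

def task_batch(slope, n=16):
    x = torch.randn(n, 1)
    return x, slope * x

for step in range(1000):
    meta_opt.zero_grad()
    for slope in (torch.rand(4) * 4 - 2):      # 4 tasks ("subjects") per step
        xs, ys = task_batch(slope.item())      # support set
        loss = ((xs @ w + b - ys) ** 2).mean()
        gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
        w2, b2 = w - inner_lr * gw, b - inner_lr * gb  # adapted parameters
        xq, yq = task_batch(slope.item())      # query set
        ((xq @ w2 + b2 - yq) ** 2).mean().backward()   # meta-gradient
    meta_opt.step()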

Shafti SA, Faisal A, 2021, Non-invasive cognitive-level human interfacing for the robotic restoration of reaching & grasping, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

Assistive and wearable robotics have the potential to support humans with different types of motor impairments to become independent and fulfil their activities of daily living successfully. The success of these robot systems, however, relies on the ability to meaningfully decode human action intentions and carry them out appropriately. Neural interfaces have been explored for use in such systems with several successes; however, they tend to be invasive and require training periods in the order of months. We present a robotic system for human augmentation, capable of actuating the user’s arm and fingers for them, effectively restoring the capability of reaching, grasping and manipulating objects, controlled solely through the user’s eye movements. We combine wearable eye tracking, the visual context of the environment and the structural grammar of human actions to create a cognitive-level assistive robotic setup that enables users to fulfil activities of daily living, while conserving interpretability and the agency of the user. The interface is worn, calibrated and ready to use within 5 minutes. Users learn to control and make successful use of the system with an additional 5 minutes of interaction. The system is tested with 5 healthy participants, showing an average success rate of 96.6% on first attempt across 6 tasks.

Conference paper

Haar Millo S, Sundar G, Faisal A, 2021, Embodied virtual reality for the study of real-world motor learning, PLoS One, Vol: 16, ISSN: 1932-6203

Motor-learning literature focuses on simple laboratory tasks due to their controlled manner and the ease of applying manipulations to induce learning and adaptation. Recently, we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion-tracking and mobile brain imaging. Here we developed an embodied virtual-reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task, while maintaining the sense of embodiment. The setup was validated by comparing real-world ball trajectories with the trajectories of the virtual balls, calculated by the physics engine. We then ran our short-term motor learning protocol in the embodied VR. Subjects played billiard shots while holding the physical cue and hitting a physical ball on the table, while seeing it all in VR. We found short-term motor learning trends in the embodied VR comparable to those we previously reported in the physical real-world task. Embodied VR can be used for learning real-world tasks in a highly controlled environment, which enables applying visual manipulations, common in laboratory tasks and rehabilitation, to a real-world full-body task. Embodied VR enables manipulating feedback and applying perturbations to isolate and assess interactions between specific motor-learning components, thus enabling us to address the current questions of motor learning in real-world tasks. Such a setup can potentially be used for rehabilitation, where VR is gaining popularity but transfer to the real world is currently limited, presumably due to the lack of embodiment.

Journal article

Maimon-Mor RO, Schone HR, Slater DH, Faisal AA, Makin TR et al., 2021, Early life experience sets hard limits on motor learning as evidenced from artificial arm use

The study of artificial arms provides a unique opportunity to address long-standing questions on sensorimotor plasticity and development. Learning to use an artificial arm arguably depends on fundamental building blocks of body representation and would therefore be impacted by early-life experience. We tested artificial arm motor-control in two adult populations with upper-limb deficiency: congenital one-handers, who were born with a partial arm, and amputees, who lost their biological arm in adulthood. Brain plasticity research teaches us that the earlier we train to acquire new skills (or use a new technology), the more we benefit from this practice as adults. Instead, we found that although one-handers started using an artificial arm as toddlers, they produced increased error noise and directional errors when reaching to visual targets, relative to amputees who performed similarly to controls. However, the earlier a one-hander was fitted with an artificial arm, the better their motor control was. We suggest that visuomotor integration, underlying the observed deficits, is highly dependent on either biological or artificial arm experience at a very young age. Subsequently, opportunities for sensorimotor plasticity become more limited.

Journal article

Auepanwiriyakul C, Waibel S, Songa J, Bentley P, Faisal AA et al., 2020, Accuracy and acceptability of wearable motion tracking for inpatient monitoring using smartwatches, Sensors, Vol: 20, ISSN: 1424-8220

Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and clinical acceptability, have not yet been demonstrated. To this end we conducted a two-stage study: First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda, and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), and optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated this sensor and surveyed the experiences and attitudes of hospital patients (N = 44) and staff (N = 15) following a clinical test in which patients wore smartwatches for 1.5–24 h in the second study. Results indicate that for acceleration, Xsens is more accurate than the Apple Series 5 and 3 smartwatches and Axivity AX3 (RMSE 1.66 ± 0.12 m·s−2; R2 0.78 ± 0.02; RMSE 2.29 ± 0.09 m·s−2; R2 0.56 ± 0.01; RMSE 2.14 ± 0.09 m·s−2; R2 0.49 ± 0.02; RMSE 4.12 ± 0.18 m·s−2; R2 0.34 ± 0.01 respectively). For angular velocity, Series 5 and 3 smartwatches achieved similar performances against Xsens with RMSE 0.22 ± 0.02 rad·s−1; R2 0.99 ± 0.00; and RMSE 0.18 ± 0.01 rad·s−1; R2 1.00 ± 0.00, respectively. Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that consumer smartwatches achieved moderate to strong levels of accuracy compared to laboratory gold-standard and are acceptable for pervasive monitoring of motion/behaviour within hospital settings.

Journal article
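
The agreement figures quoted above are RMSE and R2 between each wearable trace and the optical ground truth. A minimal sketch of computing both metrics on synthetic signals (illustrative data only):

import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def r_squared(a, b):
    # Coefficient of determination of b as a prediction of a.
    ss_res = np.sum((a - b) ** 2)
    ss_tot = np.sum((a - a.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(4)
t = np.linspace(0, 30, 3000)
optical = np.sin(2 * np.pi * 0.5 * t)          # ground-truth angular velocity
watch = optical + rng.normal(0, 0.15, t.size)  # noisy smartwatch IMU trace
print(f"RMSE = {rmse(optical, watch):.3f} rad/s, R2 = {r_squared(optical, watch):.3f}")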

Li L, Faisal A, 2020, Bayesian distributional policy gradients, AAAI Conference on Artificial Intelligence, Publisher: AAAI

Distributional reinforcement learning (distributional RL) maintains the entire probability distribution of the reward-to-go, i.e. the return, providing a more principled approach to account for the uncertainty associated with policy performance, which may be beneficial for trading off exploration and exploitation and for policy learning in general. Previous work in distributional RL focused mainly on computing the state-action-return distributions; here we model the state-return distributions. This enables us to translate successful conventional RL algorithms that are based on state values into distributional RL. We formulate the distributional Bellman operation as an inference-based auto-encoding process that minimises Wasserstein metrics between target/model return distributions. Our algorithm, BDPG (Bayesian Distributional Policy Gradients), uses adversarial training in joint-contrastive learning to learn a variational posterior from the returns. Moreover, we can now interpret the return prediction uncertainty as an information gain, which allows us to obtain a new curiosity measure that helps BDPG steer exploration actively and efficiently. In our experiments, Atari 2600 games and MuJoCo tasks, we demonstrate how BDPG learns generally faster and with higher asymptotic performance than reference distributional RL algorithms, including on well-known hard exploration tasks.

Conference paper

Gallego-Delgado P, James R, Browne E, Meng J, Umashankar S, Tan L, Picon C, Mazarakis ND, Faisal AA, Howell OW, Reynolds R et al., 2020, Neuroinflammation in the normal-appearing white matter (NAWM) of the multiple sclerosis brain causes abnormalities at the nodes of Ranvier, PLoS Biology, Vol: 18, Pages: 1-36, ISSN: 1544-9173

Changes to the structure of nodes of Ranvier in the normal-appearing white matter (NAWM) of multiple sclerosis (MS) brains are associated with chronic inflammation. We show that the paranodal domains in MS NAWM are longer on average than in controls, with Kv1.2 channels dislocated into the paranode. These pathological features are reproduced in a model of chronic meningeal inflammation generated by the injection of lentiviral vectors for the lymphotoxin-α (LTα) and interferon-γ (IFNγ) genes. We show that tumour necrosis factor (TNF), IFNγ, and glutamate can provoke paranodal elongation in cerebellar slice cultures, which could be reversed by an N-methyl-D-aspartate (NMDA) receptor blocker. When these changes were inserted into a computational model to simulate axonal conduction, a rapid decrease in velocity was observed, reaching conduction failure in small diameter axons. We suggest that glial cells activated by pro-inflammatory cytokines can produce high levels of glutamate, which triggers paranodal pathology, contributing to axonal damage and conduction deficits.

Journal article

Auepanwiriyakul C, Waibel S, Songa J, Bentley P, Faisal AA et al., 2020, Accuracy and Acceptability of Wearable Motion Tracking Smartwatches for Inpatient Monitoring, Sensors, ISSN: 1424-8220

Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and clinical acceptability, have not yet been demonstrated. To this end we conducted a two-stage study: First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda, and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), relative to gold-standard optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated this sensor and surveyed the experiences and attitudes of hospital patients (N=44) and staff (N=15) following a clinical test in which patients wore smartwatches for 1.5-24 hours in the second study. Results indicate that for acceleration, Xsens is more accurate than the Apple smartwatches and Axivity AX3 (RMSE 0.17+/-0.01 g; R2 0.88+/-0.01; RMSE 0.22+/-0.01 g; R2 0.64+/-0.01; RMSE 0.42+/-0.01 g; R2 0.43+/-0.01, respectively). However, for angular velocity, the smartwatches are marginally more accurate than Xsens (RMSE 1.28+/-0.01 rad/s; R2 0.85+/-0.00; RMSE 1.37+/-0.01 rad/s; R2 0.82+/-0.01, respectively). Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that smartwatches achieved moderate to strong levels of accuracy compared to a gold-standard reference and are likely to be accepted as a pervasive measure of motion/behaviour within hospitals.

Journal article

Haar Millo S, van Assel C, Faisal A, 2020, Motor learning in real-world pool billiards, Scientific Reports, Vol: 10, Pages: 1-13, ISSN: 2045-2322

The neurobehavioral mechanisms of human motor-control and learning evolved in free-behaving, real-life settings, yet they are studied mostly in reductionistic lab-based experiments. Here we take a step towards a more real-world motor neuroscience, using wearables for naturalistic full-body motion-tracking and the sport of pool billiards to frame a real-world skill learning experiment. First, we asked if well-known features of motor learning in lab-based experiments generalize to a real-world task. We found similarities in many features such as multiple learning rates, and the relationship between task-related variability and motor learning. Our data-driven approach reveals the structure and complexity of movement, variability, and motor learning, enabling an in-depth understanding of the structure of motor learning in three ways: First, while expecting most of the movement learning to be done by the cue-wielding arm, we find that motor learning affects the whole body, changing motor-control from head to toe. Second, during learning, all subjects decreased their movement variability and their variability in the outcome. Subjects who were initially more variable were also more variable after learning. Lastly, when screening the link across subjects between initial variability in individual joints and learning, we found that only the initial variability in the right forearm supination shows a significant correlation to the subjects’ learning rates. This is in line with the relationship between learning and variability: while learning leads to an overall reduction in movement variability, only initial variability in specific task-relevant dimensions can facilitate faster learning.

Journal article

