
Citation

BibTeX format

@inproceedings{Zhao:2022:10.1109/iotdi54339.2022.00011,
author = {Zhao, Y and Barnaghi, P and Haddadi, H},
booktitle = {2022 IEEE/ACM Seventh International Conference on Internet-of-Things Design and Implementation (IoTDI)},
doi = {10.1109/iotdi54339.2022.00011},
publisher = {IEEE},
title = {Multimodal federated learning on {IoT} data},
url = {http://dx.doi.org/10.1109/iotdi54339.2022.00011},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - Federated learning is proposed as an alternative to centralized machine learning since its client-server structure provides better privacy protection and scalability in real-world applications. In many applications, such as smart homes with Internet-of-Things (IoT) devices, local data on clients are generated from different modalities such as sensory, visual, and audio data. Existing federated learning systems only work on local data from a single modality, which limits the scalability of the systems. In this paper, we propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from different local data modalities on clients. In addition, we propose a multimodal FedAvg algorithm to aggregate local autoencoders trained on different data modalities. We use the learned global autoencoder for a downstream classification task with the help of auxiliary labelled data on the server. We empirically evaluate our framework on different modalities including sensory data, depth camera videos, and RGB camera videos. Our experimental results demonstrate that introducing data from multiple modalities into federated learning can improve its classification performance. In addition, we can use labelled data from only one modality for supervised learning on the server and apply the learned model to testing data from other modalities to achieve decent F1 scores (e.g., with the best performance being higher than 60%), especially when combining contributions from both unimodal clients and multimodal clients.
AU - Zhao,Y
AU - Barnaghi,P
AU - Haddadi,H
DO - 10.1109/iotdi54339.2022.00011
PB - IEEE
PY - 2022///
TI - Multimodal federated learning on IoT data
UR - http://dx.doi.org/10.1109/iotdi54339.2022.00011
UR - https://ieeexplore.ieee.org/document/9797401
UR - http://hdl.handle.net/10044/1/99387
ER -
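The abstract above describes a multimodal FedAvg algorithm that aggregates local autoencoders trained on different data modalities. A minimal sketch of that idea is dataset-size-weighted parameter averaging applied per modality; the function names, the dictionary-based modality grouping, and the toy parameters below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: average each parameter array, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

def multimodal_fedavg(updates):
    """Aggregate autoencoder updates per modality (illustrative sketch).

    `updates` maps a modality name (e.g. "sensor", "depth", "rgb") to a list of
    (parameter_list, num_samples) pairs from the clients holding that modality.
    Returns one aggregated parameter list per modality.
    """
    return {
        modality: fedavg([w for w, _ in pairs], [n for _, n in pairs])
        for modality, pairs in updates.items()
    }

# Toy example: two "sensor" clients, each with one 2-element parameter array.
updates = {
    "sensor": [([np.array([1.0, 2.0])], 10), ([np.array([3.0, 4.0])], 30)],
}
agg = multimodal_fedavg(updates)
# Weighted mean: 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

In a semi-supervised setup like the one the abstract describes, the server would then fine-tune a classifier on top of the aggregated encoders using its auxiliary labelled data.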

Awards

  • Finalist: Best Paper in Journal, IEEE Transactions on Mechatronics (awarded June 2021); 1 of 5 finalists, for "An Extended Complementary Filter for Full-Body MARG Orientation Estimation" (CR&T authors: S Wilson, R Vaidyanathan)

  • Winner: UK Institution of Mechanical Engineers (IMechE) Healthcare Technologies Early Career Award (awarded June 2021): awarded to Maria Lima (UKDRI CR&T PhD candidate)

  • Winner: Sony Start-up Acceleration Program (awarded May 2021): spinout company Serg Tech awarded a place (1 of 4 companies in all of Europe) in the Sony Corporation start-up boot camp

UK DRI


Established in 2017 by its principal funder, the Medical Research Council, in partnership with Alzheimer's Society and Alzheimer's Research UK, the UK Dementia Research Institute (UK DRI) is the UK's leading biomedical research institute dedicated to neurodegenerative diseases.