Head of Group

Dr Stamatia Giannarou

About us

The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.

What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and Artificial Intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work will revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload, overcoming human limitations, and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.

Meet the team

Mr Alfie Roddan
Research Postgraduate

Mr Chi Xu
Research Assistant

Mr Yihang Zhou
Research Assistant

Citation

BibTeX format

@article{Xu:2024:10.1109/TMI.2024.3405794,
author = {Xu, C and Xu, H and Giannarou, S},
doi = {10.1109/TMI.2024.3405794},
journal = {IEEE Trans Med Imaging},
title = {Distance Regression Enhanced with Temporal Information Fusion and Adversarial Training for Robot-Assisted Endomicroscopy.},
url = {http://dx.doi.org/10.1109/TMI.2024.3405794},
volume = {PP},
year = {2024}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Probe-based confocal laser endomicroscopy (pCLE) has a role in characterising tissue intraoperatively to guide tumour resection during surgery. To capture good quality pCLE data which is important for diagnosis, the probe-tissue contact needs to be maintained within a working range of micrometre scale. This can be achieved through micro-surgical robotic manipulation which requires the automatic estimation of the probe-tissue distance. In this paper, we propose a novel deep regression framework composed of the Deep Regression Generative Adversarial Network (DR-GAN) and a Sequence Attention (SA) module. The aim of DR-GAN is to train the network using an enhanced image-based supervision approach. It extends the standard generator by using a well-defined function for image generation, instead of a learnable decoder. Also, DR-GAN uses a novel learnable neural perceptual loss which combines for the first time spatial and frequency domain features. This effectively suppresses the adverse effects of noise in the pCLE data. To incorporate temporal information, we've designed the SA module which is a cross-attention module, enhanced with Radial Basis Function based encoding (SA-RBF). Furthermore, to train the regression framework, we designed a multi-step training mechanism. During inference, the trained network is used to generate data representations which are fused along time in the SA-RBF module to boost the regression stability. Our proposed network advances SOTA networks by addressing the challenge of excessive noise in the pCLE data and enhancing regression stability. It outperforms SOTA networks applied on the pCLE Regression dataset (PRD) in terms of accuracy, data quality and stability.
AU - Xu,C
AU - Xu,H
AU - Giannarou,S
DO - 10.1109/TMI.2024.3405794
PY - 2024///
TI - Distance Regression Enhanced with Temporal Information Fusion and Adversarial Training for Robot-Assisted Endomicroscopy.
T2 - IEEE Trans Med Imaging
UR - http://dx.doi.org/10.1109/TMI.2024.3405794
UR - https://www.ncbi.nlm.nih.gov/pubmed/38801689
VL - PP
ER -
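The abstract above mentions a learnable neural perceptual loss that combines spatial- and frequency-domain features to suppress noise in pCLE images. The following PyTorch sketch is a minimal, hedged illustration of that general idea only, not the authors' implementation: the class name, the fixed L1 terms, the FFT-magnitude comparison, and the weighting are assumptions chosen for demonstration.

# Illustrative sketch (assumptions only): a loss mixing a spatial-domain
# term with a frequency-domain term, in the spirit of the spatial +
# frequency perceptual loss described in the abstract above.
import torch
import torch.nn as nn


class SpatialFrequencyLoss(nn.Module):
    """Combine a pixel-space L1 term with an L1 term on 2-D FFT magnitudes."""

    def __init__(self, freq_weight: float = 0.5):
        super().__init__()
        self.freq_weight = freq_weight  # assumed relative weight of the frequency term
        self.l1 = nn.L1Loss()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Spatial-domain term: plain L1 between prediction and target images.
        spatial = self.l1(pred, target)

        # Frequency-domain term: L1 between the magnitudes of the 2-D FFTs,
        # which is sensitive to texture and noise characteristics.
        pred_fft = torch.fft.fft2(pred)
        target_fft = torch.fft.fft2(target)
        frequency = self.l1(pred_fft.abs(), target_fft.abs())

        return spatial + self.freq_weight * frequency


if __name__ == "__main__":
    # Toy usage on random single-channel "images".
    loss_fn = SpatialFrequencyLoss(freq_weight=0.5)
    pred = torch.rand(4, 1, 64, 64, requires_grad=True)
    target = torch.rand(4, 1, 64, 64)
    loss = loss_fn(pred, target)
    loss.backward()
    print(float(loss))

In the paper's framework the perceptual loss is described as learnable and operates within the DR-GAN training scheme; the fixed L1 terms on raw pixels here are a simplification for brevity.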

Contact Us

General enquiries
hamlyn@imperial.ac.uk

Facility enquiries
hamlyn.facility@imperial.ac.uk


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ