
Head of Group

Dr Stamatia Giannarou

About us

The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.


What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and artificial intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work will revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload, overcoming their limitations, and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.

Meet the team

Mr Alfie Roddan
Research Postgraduate

Mr Chi Xu
Research Assistant

Mr Yihang Zhou
Research Assistant

Citation

BibTeX format

@article{Li:2018:10.1007/s11548-018-1806-7,
author = {Li, Y and Charalampaki, P and Liu, Y and Yang, G-Z and Giannarou, S},
doi = {10.1007/s11548-018-1806-7},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1187--1199},
title = {Context aware decision support in neurosurgical oncology based on an efficient classification of endomicroscopic data},
url = {http://dx.doi.org/10.1007/s11548-018-1806-7},
volume = {13},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Purpose: Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes in the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has been recently verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing diagnosis as well as guide robot-assisted intervention procedures. Methods: The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework based on the combination of convolutional layers with long-range temporal recursion has been proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures and data representation and video segmentation methods. Results: We demonstrate the application of the proposed deep learning framework to classify Glioblastoma and Meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving accuracy equal to 99.49%. Conclusions: This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify Glioblastoma and Meningioma tumours. Th
AU - Li,Y
AU - Charalampaki,P
AU - Liu,Y
AU - Yang,G-Z
AU - Giannarou,S
DO - 10.1007/s11548-018-1806-7
EP - 1199
PY - 2018///
SN - 1861-6410
SP - 1187
TI - Context aware decision support in neurosurgical oncology based on an efficient classification of endomicroscopic data
T2 - International Journal of Computer Assisted Radiology and Surgery
UR - http://dx.doi.org/10.1007/s11548-018-1806-7
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000440294000006&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://link.springer.com/article/10.1007/s11548-018-1806-7
UR - http://hdl.handle.net/10044/1/62868
VL - 13
ER -
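
The abstract above describes combining per-frame convolutional features with long-range temporal recursion over pCLE video. As a rough illustration only, and not the authors' implementation, the sketch below pairs a ResNet-18 feature extractor with an LSTM in PyTorch; the backbone choice, hidden size, clip length, and two-class output (glioblastoma vs meningioma) are assumptions made for the example.

# Illustrative sketch: per-frame CNN features followed by an LSTM, in the spirit of
# the "convolutional layers with long-range temporal recursion" framework described
# in the abstract. Backbone and hyperparameters are assumptions, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models


class PCLEVideoClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any ImageNet-style CNN would do here
        # Keep everything up to the global pooling layer; drop the final FC head.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W) pCLE frames
        b, t = clips.shape[:2]
        x = self.features(clips.flatten(0, 1))  # (batch*time, 512, 1, 1) per-frame descriptors
        x = x.flatten(1).view(b, t, -1)         # (batch, time, 512) frame sequence
        _, (h_n, _) = self.lstm(x)              # temporal recursion over the clip
        return self.classifier(h_n[-1])         # per-class logits for the whole clip


if __name__ == "__main__":
    model = PCLEVideoClassifier()
    dummy_clip = torch.randn(1, 16, 3, 224, 224)  # one hypothetical 16-frame clip
    print(model(dummy_clip).shape)                # torch.Size([1, 2])

In this kind of design, the CNN summarises each frame independently and the LSTM aggregates evidence across the clip, which is what allows video-level classification to outperform single-image classification as reported in the abstract.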

Contact Us

General enquiries
hamlyn@imperial.ac.uk

Facility enquiries
hamlyn.facility@imperial.ac.uk


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ