
Head of Group

Dr Stamatia Giannarou

About us

The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.


What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and artificial intelligence (AI) techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to remove cancerous tissue completely while minimising damage to the surrounding healthy tissue. Achieving this is challenging, and imprecise surgery often leads to high re-excision rates and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgery may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure that patients receive accurate and timely surgical treatment while reducing surgeons' mental workload, compensating for their limitations, and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and improve survival. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, with wider benefits for society and the economy.

Meet the team

Mr Alfie Roddan
Research Postgraduate

Mr Chi Xu
Research Assistant

Mr Yihang Zhou
Research Assistant

Citation

BibTeX format

@article{Cartucho:2020:10.1007/s11548-020-02165-4,
author = {Cartucho, J and Shapira, D and Ashrafian, H and Giannarou, S},
doi = {10.1007/s11548-020-02165-4},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {819--826},
title = {Multimodal mixed reality visualisation for intraoperative surgical guidance},
url = {http://dx.doi.org/10.1007/s11548-020-02165-4},
volume = {15},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Purpose: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose a MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance. Methodology: In this work, a MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities have been designed including scrolling through volumetric data and adjustment of the virtual objects’ transparency. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities. Each surgeon answered a questionnaire on the usability of the platform and provided their feedback and suggestions. Results: The analysis of the surgeons’ scores showed that the 3D model is the most popular MR visualisation component and neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery. Conclusion: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons’ suggestions and conducting extensive evaluation on a large group of surgeons.
AU - Cartucho,J
AU - Shapira,D
AU - Ashrafian,H
AU - Giannarou,S
DO - 10.1007/s11548-020-02165-4
EP - 826
PY - 2020///
SN - 1861-6410
SP - 819
TI - Multimodal mixed reality visualisation for intraoperative surgical guidance
T2 - International Journal of Computer Assisted Radiology and Surgery
UR - http://dx.doi.org/10.1007/s11548-020-02165-4
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000528438700001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://link.springer.com/article/10.1007%2Fs11548-020-02165-4
UR - http://hdl.handle.net/10044/1/80329
VL - 15
ER -

Contact Us

The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ