The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.
Head of Group
Dr Stamatia (Matina) Giannarou
411 Bessemer Building
South Kensington Campus
+44 (0) 20 7594 8904
What we do
Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and Artificial Intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work will revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.
Why is it important?
With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.
How can it benefit patients?
Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload, overcoming human limitations, and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy and benefiting society and the economy.
Meet the team
Search results
Journal article: Cartucho J, Shapira D, Ashrafian H, et al., 2020,
Multimodal mixed reality visualisation for intraoperative surgical guidance
, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 819-826, ISSN: 1861-6410. Purpose: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose an MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance. Methodology: In this work, an MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities has been designed, including scrolling through volumetric data and adjustment of the virtual objects' transparency. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities. Each surgeon answered a questionnaire on the usability of the platform and provided their feedback and suggestions. Results: The analysis of the surgeons' scores showed that the 3D model is the most popular MR visualisation component and neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery. Conclusion: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons' suggestions and conducting extensive evaluation on a large group of surgeons.
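As an illustration of two of the interactions described in this abstract, the sketch below shows how a normalised scroll gesture could select a slice from a volumetric dataset and how a transparency control could blend it over the live view. This is a hypothetical Python/NumPy sketch for illustration only; the actual platform runs on the Microsoft HoloLens, and the function names and array shapes here are assumptions.

```python
# A minimal sketch (not the authors' HoloLens implementation) of two interactions
# described above: scrolling through volumetric data and adjusting a virtual
# overlay's transparency. Names and shapes are illustrative assumptions.
import numpy as np

def select_slice(volume: np.ndarray, scroll: float) -> np.ndarray:
    """Map a normalised scroll value in [0, 1] to an axial slice of the volume."""
    scroll = float(np.clip(scroll, 0.0, 1.0))
    index = int(round(scroll * (volume.shape[0] - 1)))
    return volume[index]

def blend_overlay(camera_frame: np.ndarray, overlay: np.ndarray, alpha: float) -> np.ndarray:
    """Alpha-blend a virtual object (e.g. a CT slice) over the live view."""
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (alpha * overlay + (1.0 - alpha) * camera_frame).astype(camera_frame.dtype)

# Example usage with synthetic data: a 128-slice volume and a greyscale frame.
volume = np.random.rand(128, 256, 256).astype(np.float32)
frame = np.random.rand(256, 256).astype(np.float32)
slice_img = select_slice(volume, scroll=0.4)        # scroll gesture -> slice 51
fused = blend_overlay(frame, slice_img, alpha=0.6)  # transparency control -> 60% overlay
```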
Conference paper: Cartucho J, Tukra S, Li Y, et al., 2020,
VisionBlender: A Tool for Generating Computer Vision Datasets in Robotic Surgery (best paper award)
, Joint MICCAI 2020 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0)
Conference paper: Huang B, Tsai Y-Y, Cartucho J, et al., 2020,
Tracking and Visualization of the Sensing Area for a Tethered Laparoscopic Gamma Probe
, Information Processing in Computer Assisted Intervention (IPCAI)
Book chapter: Zhao L, Giannarou S, Lee SL, et al., 2020,
Real-Time Robust Simultaneous Catheter and Environment Modeling for Endovascular Navigation
, Intravascular Ultrasound: From Acquisition to Advanced Quantitative Analysis, Pages: 185-197. Due to the complexity of catheter control and navigation, endovascular procedures are characterised by significant challenges. Real-time recovery of the 3D structure of the vasculature is necessary intraoperatively to visualise the interaction between the catheter and its surrounding environment and to facilitate catheter manipulation. Nonionising imaging techniques such as intravascular ultrasound (IVUS) are increasingly used in vessel reconstruction approaches. To enable accurate recovery of vessel structures, this chapter presents a robust and real-time simultaneous catheter and environment modelling method for endovascular navigation based on IVUS imaging, electromagnetic (EM) sensing, and the vessel structure information obtained from preoperative CT/MR imaging. By considering the uncertainty in both the IVUS contour and the EM pose in the proposed nonlinear optimisation problem, the algorithm provides accurate vessel reconstruction while dealing with sensing errors and abrupt catheter motions. Experimental results using two different phantoms with different catheter motions demonstrate the accuracy of the vessel reconstruction and the potential clinical value of the proposed method.
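The uncertainty-weighted fusion idea described in this chapter can be illustrated with a simplified sketch: the local vessel cross-section is assumed circular with a radius taken from the preoperative model, IVUS returns wall distances along a set of beam directions, and a noisy EM reading anchors the catheter position. All names, noise levels, and the toy geometry below are assumptions, not the published SCEM formulation.

```python
# A minimal sketch, under simplified assumptions, of uncertainty-weighted nonlinear
# optimisation fusing IVUS wall distances with an EM position reading. The circular
# cross-section, noise levels, and variable names are illustrative only.
import numpy as np
from scipy.optimize import least_squares

R = 5.0                                                     # vessel radius from the preoperative model (mm)
thetas = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)    # IVUS beam directions
dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

def wall_distance(p, u, radius):
    """Distance from in-plane point p to the circular wall along unit direction u."""
    proj = p @ u
    return -proj + np.sqrt(proj**2 + radius**2 - p @ p)

def residuals(p, ivus_dists, em_pos, sigma_ivus, sigma_em):
    r_ivus = (np.array([wall_distance(p, u, R) for u in dirs]) - ivus_dists) / sigma_ivus
    r_em = (p - em_pos) / sigma_em
    return np.concatenate([r_ivus, r_em])

# Simulate one frame: true catheter offset, noisy IVUS contour, noisy EM reading.
rng = np.random.default_rng(0)
p_true = np.array([1.2, -0.8])
ivus = np.array([wall_distance(p_true, u, R) for u in dirs]) + rng.normal(0, 0.05, len(dirs))
em = p_true + rng.normal(0, 0.5, 2)

fit = least_squares(residuals, x0=em, args=(ivus, em, 0.05, 0.5))
print("estimated catheter offset:", fit.x)   # close to p_true despite the EM noise
```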
Journal article: Li Y, Charalampaki P, Liu Y, et al., 2018,
Context aware decision support in neurosurgical oncology based on an efficient classification of endomicroscopic data
, International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 1187-1199, ISSN: 1861-6410. Purpose: Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes in the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has been recently verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing diagnosis as well as guide robot-assisted intervention procedures. Methods: The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context-aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework based on the combination of convolutional layers with long-range temporal recursion has been proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures, data representations, and video segmentation methods. Results: We demonstrate the application of the proposed deep learning framework to classify Glioblastoma and Meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving an accuracy of 99.49%. Conclusions: This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify Glioblastoma and Meningioma tumours.
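A generic sketch of the kind of architecture described here, combining per-frame convolutional features with long-range temporal recursion over a pCLE clip, is given below in PyTorch. The layer sizes, two-class head, and input resolution are illustrative assumptions rather than the configuration reported in the paper.

```python
# A generic CNN + recurrent video classifier sketch for pCLE clips. Architecture,
# layer sizes, and the two-class setup are illustrative assumptions only.
import torch
import torch.nn as nn

class PCLEVideoClassifier(nn.Module):
    def __init__(self, num_classes=2, feat_dim=64):
        super().__init__()
        # Lightweight per-frame CNN encoder (the paper explores pretrained CNNs).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Long-range temporal recursion over the per-frame features.
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, video):                 # video: (batch, time, 1, H, W)
        b, t = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])               # per-class scores for the whole clip

# Example: classify a batch of two 8-frame greyscale pCLE clips.
model = PCLEVideoClassifier()
clips = torch.randn(2, 8, 1, 128, 128)
probs = torch.softmax(model(clips), dim=1)    # e.g. P(glioblastoma), P(meningioma)
```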
Conference paper: Triantafyllou P, Wisanuvej P, Giannarou S, et al., 2018,
A Framework for Sensorless Tissue Motion Tracking in Robotic Endomicroscopy Scanning
, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 2694-2699, ISSN: 1050-4729
Conference paper: Shen M, Giannarou S, Shah PL, et al., 2017,
Branch: Bifurcation recognition for airway navigation based on structural characteristics
, MICCAI 2017, Publisher: Springer, Pages: 182-189, ISSN: 0302-9743. Bronchoscopic navigation is challenging, especially at the level of peripheral airways, due to the complicated bronchial structures and the large respiratory motion. The aim of this paper is to propose a localisation approach tailored for navigation in the distal airway branches. Salient regions are detected on the depth maps of video images and CT virtual projections to extract anatomically meaningful areas that represent airway bifurcations. An airway descriptor based on shape context is introduced which encodes both the structural characteristics of the bifurcations and their spatial distribution. The bronchoscopic camera is localised in the airways by minimising the cost of matching the region features in video images to the pre-computed CT depth maps, considering both shape and temporal information. The method has been validated on phantom and in vivo data, and the results verify its robustness to tissue deformation and good performance in distal airways.
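The descriptor-matching idea can be illustrated with a simplified sketch: a shape-context-style histogram is built for each detected bifurcation region, and the camera is localised by picking the pre-computed CT view with the lowest matching cost. The bin layout, greedy chi-squared matching, and toy coordinates below are assumptions, not the full descriptor proposed in the paper.

```python
# A minimal shape-context-style descriptor over salient-region centroids and a
# chi-squared matching cost between a video frame and candidate CT depth maps.
# Simplified assumptions throughout, for illustration only.
import numpy as np

def shape_context(points, n_r=4, n_theta=8):
    """Log-polar histogram of the other points' relative positions, one per point."""
    points = np.asarray(points, dtype=float)
    descs = []
    for i, p in enumerate(points):
        rel = np.delete(points, i, axis=0) - p
        r = np.log1p(np.linalg.norm(rel, axis=1))
        theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
        hist, _, _ = np.histogram2d(r, theta, bins=[n_r, n_theta],
                                    range=[[0, r.max() + 1e-9], [0, 2 * np.pi]])
        descs.append(hist.ravel() / max(hist.sum(), 1))
    return np.array(descs)

def matching_cost(desc_video, desc_ct):
    """Chi-squared cost between greedily matched descriptors (equal counts assumed)."""
    cost = 0.0
    for d in desc_video:
        chi2 = 0.5 * np.sum((desc_ct - d) ** 2 / (desc_ct + d + 1e-9), axis=1)
        cost += chi2.min()
    return cost / len(desc_video)

# Localisation idea: pick the pre-computed CT view whose bifurcation layout matches best.
video_pts = np.array([[120, 80], [200, 150], [90, 210], [180, 60]])
ct_views = {"branch_A": video_pts + 3.0,
            "branch_B": np.array([[30, 40], [60, 200], [220, 90], [150, 170]])}
best = min(ct_views, key=lambda k: matching_cost(shape_context(video_pts),
                                                 shape_context(ct_views[k])))
print("camera localised at:", best)   # "branch_A" -- the near-identical layout
```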
Journal article: Zhang L, Ye M, Giannarou S, et al., 2017,
Motion-compensated autonomous scanning for tumour localisation using intraoperative ultrasound
, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 10434, Pages: 619-627, ISSN: 0302-9743. Intraoperative ultrasound facilitates localisation of tumour boundaries during minimally invasive procedures. Autonomous ultrasound scanning systems have recently been proposed to improve scanning accuracy and reduce surgeons' cognitive load. However, current methods mainly consider static scanning environments, typically with the probe pressing against the tissue surface. In this work, a motion-compensated autonomous ultrasound scanning system using the da Vinci® Research Kit (dVRK) is proposed. An optimal scanning trajectory is generated considering both the tissue surface shape and the ultrasound transducer dimensions. An effective vision-based approach is proposed to learn the underlying tissue motion characteristics. The learned motion model is then incorporated into the visual servoing framework. The proposed system has been validated with both phantom and ex vivo experiments.
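The two ingredients described above, a scanning trajectory spaced by the transducer footprint over the tissue surface and a learned tissue-motion model applied back at execution time, can be sketched as follows. The surface model, sinusoidal motion, and spacing are illustrative assumptions, not the published dVRK implementation.

```python
# A simplified sketch: (i) a serpentine scanning trajectory whose line spacing equals
# the transducer footprint, and (ii) a learned (here: sinusoidal) vertical tissue
# motion added to each commanded waypoint. All values are illustrative assumptions.
import numpy as np

def raster_trajectory(height_map, xs, y_range, probe_width):
    """Serpentine sweep over the surface; line spacing equals the transducer width."""
    rows = []
    for i, y in enumerate(np.arange(y_range[0], y_range[1], probe_width)):
        line_x = xs if i % 2 == 0 else xs[::-1]
        line_y = np.full_like(line_x, y)
        line_z = height_map(line_x, line_y)            # probe hugs the tissue surface
        rows.append(np.stack([line_x, line_y, line_z], axis=1))
    return np.concatenate(rows)

def compensate(waypoint, t, amp=2.0, period=4.0):
    """Add the learned periodic tissue motion to the commanded waypoint."""
    return waypoint + np.array([0.0, 0.0, amp * np.sin(2 * np.pi * t / period)])

# Example: a gently curved phantom surface, 10 mm footprint, 20 Hz command loop.
surface = lambda x, y: 0.01 * (x - 25.0) ** 2 + 0.05 * y
xs = np.linspace(0.0, 50.0, 26)
path = raster_trajectory(surface, xs, (0.0, 40.0), probe_width=10.0)
commands = [compensate(p, t=k / 20.0) for k, p in enumerate(path)]
```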
Journal article: Maier-Hein L, Vedula SS, Speidel S, et al., 2017,
Surgical data science for next-generation interventions
, Nature Biomedical Engineering, Vol: 1, Pages: 691-696, ISSN: 2157-846X. Interventional healthcare will evolve from an artisanal craft based on the individual experiences, preferences and traditions of physicians into a discipline that relies on objective decision-making on the basis of large-scale data from heterogeneous sources.
Conference paper: Ye M, Zhang L, Giannarou S, et al., 2016,
Real-Time 3D Tracking of Articulated Tools for Robotic Surgery
, International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Publisher: Springer, Pages: 386-394, ISSN: 0302-9743. In robotic surgery, tool tracking is important for providing safe tool-tissue interaction and facilitating surgical skills assessment. Despite recent advances in tool tracking, existing approaches are faced with major difficulties in real-time tracking of articulated tools. Most algorithms are tailored for offline processing with pre-recorded videos. In this paper, we propose a real-time 3D tracking method for articulated tools in robotic surgery. The proposed method is based on the CAD model of the tools as well as robot kinematics to generate online part-based templates for efficient 2D matching and 3D pose estimation. A robust verification approach is incorporated to reject outliers in 2D detections, which is then followed by fusing inliers with robot kinematic readings for 3D pose estimation of the tool. The proposed method has been validated with phantom data, as well as ex vivo and in vivo experiments. The results derived clearly demonstrate the performance advantage of the proposed method when compared to the state-of-the-art.
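The fusion step described here can be illustrated with a simplified sketch: part positions predicted from robot kinematics are projected into the image, 2D detections that disagree with the projection are rejected as outliers, and the inliers are used to estimate a small correction to the kinematic estimate. The camera intrinsics, rejection threshold, and translation-only correction below are assumptions for illustration, not the published method.

```python
# A simplified sketch of fusing 2D part detections with robot kinematics: project
# the kinematics-predicted parts, reject inconsistent detections, then solve for a
# small translation correcting the kinematic estimate. Illustrative assumptions only.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                       # assumed pinhole intrinsics

def project(points_3d):
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Tool part positions in the camera frame, as predicted by the kinematic chain.
parts_kin = np.array([[0.000, 0.000, 0.100],
                      [0.010, 0.000, 0.110],
                      [0.000, 0.012, 0.105],
                      [0.008, 0.010, 0.120]])
true_offset = np.array([0.002, -0.001, 0.001])        # kinematic error to recover
detections = project(parts_kin + true_offset)
detections[3] += np.array([60.0, -45.0])              # one spurious 2D detection

# Outlier rejection: drop detections far from the kinematics-projected template.
residual_px = np.linalg.norm(detections - project(parts_kin), axis=1)
inliers = residual_px < 40.0

# Fuse inliers with the kinematic reading: solve for a small translation correction.
def reproj_error(offset):
    return (project(parts_kin[inliers] + offset) - detections[inliers]).ravel()

fit = least_squares(reproj_error, x0=np.zeros(3))
print("estimated kinematic correction:", fit.x)       # close to true_offset
```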
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
Contact Us
The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ
Map location