Head of Group

Dr Stamatia Giannarou

About us

The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.

What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and artificial intelligence (AI) techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.

Meet the team

Mr Alfie Roddan
Research Postgraduate

Mr Chi Xu
Research Assistant

Mr Yihang Zhou
Research Assistant

Citation

BibTeX format

@article{Huang:2020:10.1007/s11548-020-02205-z,
author = {Huang, B and Tsai, Y-Y and Cartucho, J and Vyas, K and Tuch, D and Giannarou, S and Elson, DS},
doi = {10.1007/s11548-020-02205-z},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1389--1397},
title = {Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe},
url = {http://dx.doi.org/10.1007/s11548-020-02205-z},
volume = {15},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Purpose: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and the pixel intensity-based vertices detector and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results: The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean error obtained is 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8–82° ∪ 188–352°. Conclusion: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hybrid markers.
AU - Huang,B
AU - Tsai,Y-Y
AU - Cartucho,J
AU - Vyas,K
AU - Tuch,D
AU - Giannarou,S
AU - Elson,DS
DO - 10.1007/s11548-020-02205-z
EP - 1397
PY - 2020///
SN - 1861-6410
SP - 1389
TI - Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe
T2 - International Journal of Computer Assisted Radiology and Surgery
UR - http://dx.doi.org/10.1007/s11548-020-02205-z
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000540990700001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - http://hdl.handle.net/10044/1/80746
VL - 15
ER -
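
The method described in the abstract has two computational steps: estimating the probe pose from a dual-pattern marker that combines chessboard vertices and circular dots, and intersecting the probe axis with a 3D tissue point cloud. Below is a minimal Python sketch of both steps, assuming OpenCV and NumPy. The pattern sizes, camera intrinsics, marker layout, and probe-axis direction are hypothetical placeholders, and a single fused solvePnP call stands in for the paper's more elaborate detector and temporal filtering; this illustrates the machinery involved, not the authors' implementation.

import cv2
import numpy as np

# Hypothetical camera intrinsics (in practice, from calibration).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)  # lens distortion ignored in this sketch

def estimate_marker_pose(gray, cb_size=(4, 3), dot_size=(3, 3), square=2.0):
    # Detect both patterns: chessboard corners and a symmetric grid of
    # circular dots (findCirclesGrid uses blob detection internally).
    ok_cb, cb_pts = cv2.findChessboardCorners(gray, cb_size)
    ok_dots, dot_pts = cv2.findCirclesGrid(gray, dot_size)
    if not (ok_cb and ok_dots):
        return None  # a real tracker would fall back on temporal prediction
    # Planar 3D model points for both patterns (z = 0 on the marker plane);
    # the 10 mm offset of the dot grid is a made-up marker layout.
    cb_obj = np.array([[x * square, y * square, 0.0]
                       for y in range(cb_size[1]) for x in range(cb_size[0])],
                      dtype=np.float32)
    dot_obj = np.array([[x * square + 10.0, y * square, 0.0]
                        for y in range(dot_size[1]) for x in range(dot_size[0])],
                       dtype=np.float32)
    obj = np.vstack([cb_obj, dot_obj])
    img = np.vstack([cb_pts.reshape(-1, 2),
                     dot_pts.reshape(-1, 2)]).astype(np.float32)
    # One PnP solve over the fused correspondences gives the marker pose.
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, DIST)
    return (rvec, tvec) if ok else None

def probe_tissue_intersection(rvec, tvec, cloud, axis_dir=(0.0, 0.0, 1.0)):
    # Approximate the probe-axis/tissue intersection as the point of the
    # reconstructed cloud that lies closest to the ray along the probe axis.
    R, _ = cv2.Rodrigues(rvec)
    origin = tvec.ravel()
    u = R @ np.asarray(axis_dir, dtype=float)
    u /= np.linalg.norm(u)
    rel = cloud - origin           # vectors from the probe tip to each point
    t = rel @ u                    # signed projections onto the axis
    d = np.linalg.norm(rel - np.outer(t, u), axis=1)  # distance to the ray
    d[t < 0] = np.inf              # only accept points in front of the probe
    return cloud[int(np.argmin(d))]

The returned 3D point could then be projected back into the endoscopic image with cv2.projectPoints to render the kind of augmented-reality overlay the abstract describes.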

Contact Us

The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ