The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.
Head of Group
Dr Stamatia (Matina) Giannarou
411 Bessemer Building
South Kensington Campus
+44 (0) 20 7594 8904
What we do
Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and Artificial Intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.
Why is it important?
With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, this is difficult to achieve in practice, and the result is often imprecise surgery, high re-excision rates, and unintended injury that reduces quality of life. Technologies that enhance cancer detection and enable more precise surgery therefore stand to improve patient outcomes.
How can it benefit patients?
Our methods aim to ensure patients receive accurate and timely surgical treatment while reducing surgeons' mental workload, compensating for the limits of human perception, and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.
Meet the team
Dr Stamatia Giannarou
Senior Lecturer
Dr Po Wen Lo
Research Associate
Mr Alfie Roddan
Research Postgraduate
Mr Alistair G Weld
Casual - Teaching Support
Mr Chi Xu
Research Assistant
Mr Haozheng Xu
Research Assistant in Surgical Robot Vision
Mr Yihang Zhou
Research Assistant
Results
Journal article: Weld A, Dixon L, Anichini G, et al., 2024, "Challenges with segmenting intraoperative ultrasound for brain tumours", Acta Neurochirurgica: The European Journal of Neurosurgery, Vol: 166, ISSN: 0001-6268
Objective: Addressing the challenges that come with identifying and delineating brain tumours in intraoperative ultrasound. Our goal is to assess, both qualitatively and quantitatively, the interobserver variation among experienced neuro-oncological intraoperative ultrasound users (neurosurgeons and neuroradiologists) in detecting and segmenting brain tumours on ultrasound. We then propose that, given the inherent challenges of this task, annotation by localisation of the entire tumour mass with a bounding box could serve as an ancillary solution to segmentation for clinical training, encompassing margin uncertainty and the curation of large datasets.
Methods: 30 ultrasound images of brain lesions in 30 patients were annotated by 4 annotators: 1 neuroradiologist and 3 neurosurgeons. The annotation variation of the 3 neurosurgeons was measured first, and then the annotations of each neurosurgeon were individually compared to the neuroradiologist's, which served as a reference standard because their segmentations were further refined by cross-reference to the preoperative magnetic resonance imaging (MRI). The following statistical metrics were used: Intersection over Union (IoU), Sørensen-Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD). These annotations were then converted into bounding boxes for the same evaluation.
Results: There was a moderate level of interobserver variance between the neurosurgeons [IoU: 0.789, DSC: 0.876, HD: 103.227] and a larger level of variance when compared against the MRI-informed reference standard annotations by the neuroradiologist, mean across annotators [IoU: 0.723, DSC: 0.813, HD: 115.675]. After converting the segments to bounding boxes, all metrics improve; most significantly, the interquartile range drops by [IoU: 37%, DSC: 41%, HD: 54%].
Conclusion: This study highlights the current challenges with detecting and defining tumour boundaries in neuro-oncological intraoperative ultrasound.
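For readers unfamiliar with the metrics named above, the sketch below shows one way to compute IoU, DSC, and symmetric Hausdorff distance for binary segmentation masks, and how a mask can be reduced to its bounding box for the box-based comparison. This is an illustrative implementation under our own assumptions (non-empty masks, function names ours), not the study's evaluation code.

```python
# Minimal sketch of the evaluation metrics described in the abstract,
# computed for non-empty boolean masks with NumPy and SciPy.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two foreground pixel sets
    (computed over all foreground pixels; slow but fine for a sketch)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

def to_bounding_box(mask: np.ndarray) -> np.ndarray:
    """Replace a mask by its axis-aligned bounding-box mask, as done when
    comparing box-based annotation against full segmentation."""
    box = np.zeros_like(mask, dtype=bool)
    ys, xs = np.nonzero(mask)
    box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return box
```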
Journal article: Lo FP-W, Qiu J, Wang Z, et al., 2024, "Dietary Assessment With Multimodal ChatGPT: A Systematic Analysis", IEEE Journal of Biomedical and Health Informatics, Vol: 28, Pages: 7577-7587, ISSN: 2168-2194
Journal article: Xu C, Xu H, Giannarou S, 2024, "Distance Regression Enhanced With Temporal Information Fusion and Adversarial Training for Robot-Assisted Endomicroscopy", IEEE Transactions on Medical Imaging, Vol: 43, Pages: 3895-3908, ISSN: 0278-0062
Journal article: You J, Ajlouni S, Kakaletri I, et al., 2024, "XRelevanceCAM: towards explainable tissue characterization with improved localisation of pathological structures in probe-based confocal laser endomicroscopy", International Journal of Computer Assisted Radiology and Surgery, Vol: 19, Pages: 1061-1073, ISSN: 1861-6410
Journal article: Tukra S, Xu H, Xu C, et al., 2024, "Generalizable stereo depth estimation with masked image modelling", Healthcare Technology Letters, Vol: 11, Pages: 108-116, ISSN: 2053-3713
Generalizable and accurate stereo depth estimation is vital for 3D reconstruction, especially in surgery. Supervised learning methods obtain the best performance; however, limited ground truth data for surgical scenes limits generalizability. Self-supervised methods do not need ground truth, but suffer from scale ambiguity and incorrect disparity prediction due to inconsistency of the photometric loss. This work proposes a two-phase training procedure that is generalizable and retains the high performance of supervised methods. It entails: (1) performing self-supervised representation learning of left and right views via masked image modelling (MIM) to learn generalizable semantic stereo features; (2) utilizing the MIM pre-trained model to learn robust depth representation via supervised learning for disparity estimation on synthetic data only. To improve the stereo representations learnt via MIM, perceptual loss terms are introduced, which explicitly encourage the learning of higher scene-level features. Qualitative and quantitative performance evaluation on surgical and natural scenes shows that the approach achieves sub-millimetre accuracy and the lowest errors respectively, setting a new state of the art, despite not training on surgical or natural scene data for disparity estimation.
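As an illustration of phase (1), the sketch below shows the core of a masked image modelling step: random patches of a view are hidden, the network reconstructs them, and the loss is computed on the masked regions only. The tiny convolutional encoder, decoder, and hyperparameters are stand-in assumptions, not the paper's architecture.

```python
# Minimal MIM pre-training step (architecture and hyperparameters assumed).
import torch
import torch.nn as nn

PATCH = 16  # side length of a square mask patch in pixels

def random_patch_mask(x: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
    """Return a (B, 1, H, W) mask where 1 marks a patch hidden from the encoder."""
    b, _, h, w = x.shape
    gh, gw = h // PATCH, w // PATCH
    masked = (torch.rand(b, 1, gh, gw) < mask_ratio).float()
    return masked.repeat_interleave(PATCH, dim=2).repeat_interleave(PATCH, dim=3)

# Stand-in encoder/decoder; a real system would use a stronger stereo backbone.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
)
decoder = nn.Conv2d(32, 3, 3, padding=1)  # lightweight reconstruction head
opt = torch.optim.AdamW(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

left = torch.rand(2, 3, 64, 64)              # toy stand-in for a batch of left views
mask = random_patch_mask(left)
recon = decoder(encoder(left * (1 - mask)))  # reconstruct from visible pixels only
loss = ((recon - left) ** 2 * mask).sum() / mask.sum().clamp(min=1)
loss.backward()                              # reconstruction loss on masked patches only
opt.step()
```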
Journal article: Dyck M, Weld A, Klodmann J, et al., 2024, "Toward Safe and Collaborative Robotic Ultrasound Tissue Scanning in Neurosurgery", IEEE Transactions on Medical Robotics and Bionics, Vol: 6, Pages: 64-67
Book chapter: Giannarou S, Xu C, Roddan A, 2024, "Endomicroscopy", in Biophotonics and Biosensing: From Fundamental Research to Clinical Trials through Advances of Signal and Image Processing, Pages: 269-284
Endomicroscopy is an enabling technology that can transform tissue characterization, allowing optical biopsies to be taken during diagnostic and interventional procedures, assisting tissue characterization and decision-making. New techniques, such as probe-based confocal laser endomicroscopy (pCLE), have enabled direct visualization of the tissue at a microscopic level and have been approved for clinical use in a range of clinical applications. Recent pilot studies suggest that the technique may have a role in identifying residual cancer tissue and improving resection rates. The aim of this chapter is to present the technological advances in this area, describe the challenges and limitations associated with this imaging modality, and present methods which have been developed to facilitate the application of this technique as well as understanding of the collected data.
Journal article: Cartucho J, Weld A, Tukra S, et al., 2024, "SurgT challenge: Benchmark of soft-tissue trackers for robotic surgery", Medical Image Analysis, Vol: 91, ISSN: 1361-8415
Conference paper: Roddan A, Yu Z, Leiloglou M, et al., 2024, "Towards real-time hyperspectral imaging in neurosurgery", Conference on Clinical Biophotonics III, Publisher: SPIE, ISSN: 0277-786X
Conference paper: Huang B, Hu Y, Nguyen A, et al., 2023, "Detecting the sensing area of a laparoscopic probe in minimally invasive cancer surgery", MICCAI 2023, Publisher: Springer Nature Switzerland, Pages: 260-270, ISSN: 0302-9743
In surgical oncology, it is challenging for surgeons to identify lymph nodes and completely resect cancer even with pre-operative imaging systems like PET and CT, because of the lack of reliable intraoperative visualization tools. Endoscopic radio-guided cancer detection and resection has recently been evaluated, whereby a novel tethered laparoscopic gamma detector is used to localize a preoperatively injected radiotracer. This can both enhance the endoscopic imaging and complement preoperative nuclear imaging data. However, gamma activity is challenging to present to the operator, because the probe is non-imaging and does not visibly indicate where on the tissue surface the activity originates. Initial attempts using segmentation or geometric methods failed, leading to the discovery that the problem could be resolved by leveraging high-dimensional image features and probe position information. To demonstrate the effectiveness of this solution, we designed and implemented a simple regression network that successfully addressed the problem. To further validate the proposed solution, we acquired and publicly released two datasets captured using a custom-designed, portable stereo laparoscope system. Through intensive experimentation, we demonstrated that our method can successfully and effectively detect the sensing area, establishing a new performance benchmark. Code and data are available at https://github.com/br0202/Sensing_area_detection.git.
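The sketch below illustrates the kind of simple regression network the abstract describes, fusing image features with probe position to regress the sensing-area location. The architecture, names, and dimensions here are our assumptions; the authors' actual implementation is at the repository linked above.

```python
# Minimal sketch (assumed architecture, not the authors' code) of a regressor
# that fuses laparoscopic image features with the probe position to predict
# where on the tissue surface the gamma probe is sensing.
import torch
import torch.nn as nn

class SensingAreaRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN backbone producing a global feature vector per frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head fusing image features with the 2D probe tip position.
        self.head = nn.Sequential(
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 2),  # predicted (x, y) of the sensing-area centre
        )

    def forward(self, image: torch.Tensor, probe_xy: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(image)
        return self.head(torch.cat([feats, probe_xy], dim=1))

model = SensingAreaRegressor()
image = torch.rand(4, 3, 128, 128)  # toy laparoscopic frames
probe = torch.rand(4, 2)            # toy probe tip positions in image coordinates
target = torch.rand(4, 2)           # toy ground-truth sensing-area centres
loss = nn.functional.mse_loss(model(image, probe), target)
loss.backward()
```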
Contact Us
The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ