
Head of Group

Dr Stamatia Giannarou

About us

The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.


What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and artificial intelligence (AI) techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal of cancer surgery is to completely remove cancerous tissue while minimising damage to the surrounding healthy tissue. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgery may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure that patients receive accurate and timely surgical treatment while reducing surgeons' mental workload, overcoming human limitations, and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools will lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy, and benefiting society and the economy.

Meet the team


Publications

  • Conference paper
    Huang B, Hu Y, Nguyen A, Giannarou S, Elson DS et al., 2023,

    Detecting the sensing area of a laparoscopic probe in minimally invasive cancer surgery

    MICCAI 2023, Publisher: Springer Nature Switzerland, Pages: 260-270, ISSN: 0302-9743

    In surgical oncology, it is challenging for surgeons to identify lymph nodes and completely resect cancer even with pre-operative imaging systems like PET and CT, because of the lack of reliable intraoperative visualization tools. Endoscopic radio-guided cancer detection and resection has recently been evaluated whereby a novel tethered laparoscopic gamma detector is used to localize a preoperatively injected radiotracer. This can both enhance the endoscopic imaging and complement preoperative nuclear imaging data. However, gamma activity visualization is challenging to present to the operator because the probe is non-imaging and it does not visibly indicate the activity origination on the tissue surface. Initial failed attempts used segmentation or geometric methods, but led to the discovery that it could be resolved by leveraging high-dimensional image features and probe position information. To demonstrate the effectiveness of this solution, we designed and implemented a simple regression network that successfully addressed the problem. To further validate the proposed solution, we acquired and publicly released two datasets captured using a custom-designed, portable stereo laparoscope system. Through intensive experimentation, we demonstrated that our method can successfully and effectively detect the sensing area, establishing a new performance benchmark. Code and data are available at https://github.com/br0202/Sensing_area_detection.git.
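
As a rough illustration of the kind of approach this abstract describes, and not the authors' released code, the sketch below maps pooled image features and a probe pose to a 2D sensing-area location with a small regression network; the feature extractor, pose encoding, and all dimensions are assumptions.

```python
# Minimal sketch (assumed inputs, not the paper's model): regress a 2D
# sensing-area point from image features and the probe pose.
import torch
import torch.nn as nn

class SensingAreaRegressor(nn.Module):
    def __init__(self, feat_dim=512, pose_dim=6):
        super().__init__()
        # Fuse high-dimensional image features with the probe position/orientation.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),   # (u, v) pixel coordinates of the sensing area
        )

    def forward(self, image_features, probe_pose):
        x = torch.cat([image_features, probe_pose], dim=-1)
        return self.mlp(x)

# Example with random inputs: a batch of 4 frames.
model = SensingAreaRegressor()
feats = torch.randn(4, 512)   # e.g. pooled CNN features of the laparoscopic frame
pose = torch.randn(4, 6)      # probe position + orientation (assumed encoding)
uv = model(feats, pose)       # predicted sensing-area coordinates, shape (4, 2)
print(uv.shape)
```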

  • Journal article
    Cui Z, Cartucho J, Giannarou S, Baena FRY et al., 2023,

    Caveats on the First-Generation da Vinci Research Kit: Latent Technical Constraints and Essential Calibrations

    IEEE Robotics & Automation Magazine, ISSN: 1070-9932
  • Conference paper
    Xu H, Runciman M, Cartucho J, Xu C, Giannarou S et al., 2023,

    Graph-based pose estimation of texture-less surgical tools for autonomous robot control

    2023 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 2731-2737

    In Robot-assisted Minimally Invasive Surgery (RMIS), the estimation of the pose of surgical tools is crucial for applications such as surgical navigation, visual servoing, autonomous robotic task execution and augmented reality. A plethora of hardware-based and vision-based methods have been proposed in the literature. However, direct application of these methods to RMIS has significant limitations due to partial tool visibility, occlusions and changes in the surgical scene. In this work, a novel keypoint-graph-based network is proposed to estimate the pose of texture-less cylindrical surgical tools of small diameter. To deal with the challenges in RMIS, a keypoint object representation is used and, for the first time, temporal information is combined with spatial information in the keypoint graph representation for keypoint refinement. Finally, a stable and accurate tool pose is computed using a PnP solver. Our performance evaluation study has shown that the proposed method is able to accurately predict the pose of a texture-less robotic shaft with an ADD-S score of over 98%. The method outperforms state-of-the-art pose estimation models under challenging conditions such as object occlusion and changes in the lighting of the scene.
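
To illustrate only the final step mentioned above, the sketch below shows how a standard PnP solver (here OpenCV's solvePnP) recovers a tool pose from 2D keypoints and their known 3D positions on the tool model; the keypoints, intrinsics, and poses are synthetic assumptions, and this is not the paper's network or pipeline.

```python
# Minimal sketch (synthetic data, not the authors' pipeline): once 2D keypoints
# are predicted for known 3D points on the tool shaft, PnP recovers the pose.
import numpy as np
import cv2

# Hypothetical 3D keypoints on the tool shaft surface (metres, tool frame).
object_points = np.array([
    [ 0.0025,  0.0000, 0.00],
    [-0.0025,  0.0000, 0.00],
    [ 0.0000,  0.0025, 0.02],
    [ 0.0000, -0.0025, 0.02],
    [ 0.0025,  0.0000, 0.04],
    [ 0.0000,  0.0025, 0.04],
], dtype=np.float64)

# Assumed pinhole intrinsics of the laparoscope camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Ground-truth pose used only to synthesise example "detections".
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.01, -0.005, 0.12])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)

# Recover the tool pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the tool in the camera frame
    print("R:\n", R, "\nt:", tvec.ravel())
```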

  • Journal article
    Weld A, Cartucho J, Xu C, Davids J, Giannarou S et al., 2023,

    Regularising disparity estimation via multi task learning with structured light reconstruction

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol: 11, Pages: 1206-1214, ISSN: 2168-1163
  • Journal article
    DeLorey C, Davids JD, Cartucho J, Xu C, Roddan A, Nimer A, Ashrafian H, Darzi A, Thompson AJ, Akhond S, Runciman M, Mylonas G, Giannarou S, Avery J et al., 2023,

    A cable‐driven soft robotic end‐effector actuator for probe‐based confocal laser endomicroscopy: Development and preclinical validation

    Translational Biophotonics, Vol: 5, ISSN: 2627-1850

    Soft robotics is becoming a popular choice for end-effectors. An end-effector was designed that has various advantages including ease of manufacturing, simplicity and control. This device may have the advantage of enabling probe-based devices to intraoperatively measure cancer histology, because it can flexibly and gently position a probe perpendicularly over an area of delicate tissue. This is demonstrated in a neurosurgical setting where accurate cancer resection has been limited by lack of accurate visualisation and impaired tumour margin delineation with the need for in-situ histology. Conventional surgical robotic end-effectors are unsuitable to accommodate a probe-based confocal laser endomicroscopy (p-CLE) probe because of their rigid and non-deformable properties, which can damage the thin probe. We have therefore designed a new soft robotic platform, which is advantageous by conforming to the probe's shape to avoid damage and to facilitate precision scanning.

  • Conference paper
    Weld A, Agrawal A, Giannarou S, 2023,

    Ultrasound segmentation using a 2D UNet with Bayesian volumetric support

    MICCAI 2022 25th International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer Nature Switzerland, Pages: 63-68, ISSN: 0302-9743

    We present a novel 2D segmentation neural network design for the segmentation of tumour tissue in intraoperative ultrasound (iUS). Due to issues with brain shift and tissue deformation, pre-operative imaging for tumour resection has limited reliability within the operating room (OR). iUS serves as a tool for improving tumour localisation and boundary delineation. Our proposed method takes inspiration from Bayesian networks. Rather than using a conventional 3D UNet, we develop a technique which samples slices from the volume around the query slice and performs multiple segmentations, providing volumetric support to improve the accuracy of the segmentation of the query slice. Our results show that our proposed architecture achieves a 0.04 increase in the validation Dice score compared to the benchmark network.
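
A minimal sketch of the volumetric-support idea described above, under the assumption that neighbouring slices are segmented with the same 2D network and their probability maps averaged; the sampling strategy, offsets, and architecture here are illustrative, not the published design.

```python
# Minimal sketch (assumed sampling and averaging, not the published architecture):
# give a 2D segmentation network "volumetric support" by also segmenting slices
# sampled around the query slice and averaging the probability maps.
import torch

def segment_with_volumetric_support(model, volume, query_idx, offsets=(-2, -1, 0, 1, 2)):
    """volume: (D, H, W) iUS volume; model: 2D net mapping (1, 1, H, W) -> (1, 1, H, W) logits."""
    depth = volume.shape[0]
    probs = []
    for off in offsets:
        idx = min(max(query_idx + off, 0), depth - 1)      # clamp to the volume
        slice_ = volume[idx].unsqueeze(0).unsqueeze(0)      # (1, 1, H, W)
        with torch.no_grad():
            probs.append(torch.sigmoid(model(slice_)))
    # Averaging across neighbouring slices acts as the volumetric prior.
    return torch.stack(probs).mean(dim=0)                   # (1, 1, H, W)

# Usage with a stand-in "network" (a single conv layer) and a random volume.
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
volume = torch.randn(32, 128, 128)
mask_prob = segment_with_volumetric_support(model, volume, query_idx=10)
print(mask_prob.shape)  # torch.Size([1, 1, 128, 128])
```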

  • Conference paper
    Wang C, Cartucho J, Elson D, Darzi A, Giannarou S et al., 2022,

    Towards autonomous control of surgical instruments using adaptive-fusion tracking and robot self-calibration

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 2395-2401, ISSN: 2153-0858

    The ability to track surgical instruments in real time is crucial for autonomous Robotic Assisted Surgery (RAS). Recently, the fusion of visual and kinematic data has been proposed to track surgical instruments. However, these methods assume that both sensors are equally reliable, and cannot successfully handle cases where there are significant perturbations in one of the sensors' data. In this paper, we address this problem by proposing an enhanced fusion-based method. The main advantage of our method is that it can adjust the fusion weights to adapt to sensor perturbations and failures. Another problem is that, before performing an autonomous task, these robots have to be repetitively recalibrated by a human for each new patient to estimate the transformations between the different robotic arms. To address this problem, we propose a self-calibration algorithm that empowers the robot to autonomously calibrate these transformations by itself at the beginning of the surgery. We applied our fusion and self-calibration algorithms to autonomous ultrasound tissue scanning and showed that the robot achieved stable ultrasound imaging when using our method. Our performance evaluation shows that our proposed method outperforms the state of the art in both normal and challenging situations.
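
The sketch below illustrates adaptive weighting in sensor fusion in general terms only, not the published algorithm: each sensor's weight is adjusted according to its recent residuals, so a perturbed sensor is automatically down-weighted. The residual definition and weighting rule are assumptions for illustration.

```python
# Minimal sketch (illustrative weighting rule, not the paper's method): fuse
# visual and kinematic tool-tip position estimates with weights that adapt to
# how noisy each sensor has recently been.
import numpy as np

def adaptive_fusion(p_vision, p_kinematics, resid_vision, resid_kinematics, eps=1e-6):
    """p_*: (3,) position estimates; resid_*: recent residuals of each sensor (assumed available)."""
    w_v = 1.0 / (np.mean(np.square(resid_vision)) + eps)
    w_k = 1.0 / (np.mean(np.square(resid_kinematics)) + eps)
    w_v, w_k = w_v / (w_v + w_k), w_k / (w_v + w_k)   # normalise weights to sum to 1
    return w_v * p_vision + w_k * p_kinematics

# Example: the visual track has been noisier recently, so kinematics dominates.
p_v = np.array([0.102, 0.051, 0.150])
p_k = np.array([0.100, 0.050, 0.149])
fused = adaptive_fusion(p_v, p_k,
                        resid_vision=np.array([0.004, 0.006, 0.005]),
                        resid_kinematics=np.array([0.001, 0.001, 0.002]))
print(fused)
```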

  • Conference paper
    Huang B, Zheng J-Q, Nguyen A, Xu C, Gkouzionis I, Vyas K, Tuch D, Giannarou S, Elson DS et al., 2022,

    Self-supervised depth estimation in laparoscopic image using 3D geometric consistency

    25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer International Publishing AG, Pages: 13-22, ISSN: 0302-9743

    Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging systems. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, it is rarely possible to apply supervised depth estimation to surgical applications. As an alternative, self-supervised methods have been introduced to train depth estimators using only synchronized stereo image pairs. However, most recent work has focused on the left-right consistency in 2D and ignored valuable inherent 3D information on the object in real-world coordinates, meaning that the left-right 3D geometric structural consistency is not fully utilized. To overcome this limitation, we present M3Depth, a self-supervised depth estimator that leverages the 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes the influence of border regions unseen in at least one of the stereo images via masking, to enhance the correspondences between left and right images in overlapping areas. Extensive experiments show that our method outperforms previous self-supervised approaches on both a public dataset and a newly acquired dataset by a large margin, indicating good generalization across different samples and laparoscopes.
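
The masking idea mentioned in this abstract can be sketched as follows for a plain left-right photometric consistency loss; the paper's 3D geometric consistency term is not reproduced here, and the tensor shapes and disparity convention are assumptions.

```python
# Minimal sketch (assumed rectified pair and photometric loss, not the M3Depth
# code): reconstruct the left image from the right using predicted disparity,
# and mask out border pixels whose correspondence falls outside the right image.
import torch
import torch.nn.functional as F

def masked_lr_consistency(left, right, disparity):
    """left, right: (B, 3, H, W) images; disparity: (B, 1, H, W) in pixels, left view."""
    B, _, H, W = left.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    xs = xs.unsqueeze(0).expand(B, -1, -1)         # (B, H, W) column indices
    ys = ys.unsqueeze(0).expand(B, -1, -1)         # (B, H, W) row indices
    x_src = xs - disparity.squeeze(1)              # matching column in the right image
    grid = torch.stack([2.0 * x_src / (W - 1) - 1.0,   # normalise to [-1, 1] for grid_sample
                        2.0 * ys / (H - 1) - 1.0], dim=-1)
    left_rec = F.grid_sample(right, grid, mode="bilinear",
                             padding_mode="zeros", align_corners=True)
    # Exclude pixels whose correspondence leaves the right image (unseen borders).
    valid = ((x_src >= 0) & (x_src <= W - 1)).float().unsqueeze(1)
    photometric = (left - left_rec).abs().mean(dim=1, keepdim=True)
    return (valid * photometric).sum() / valid.sum().clamp(min=1.0)

# Example with random tensors.
loss = masked_lr_consistency(torch.rand(2, 3, 64, 80),
                             torch.rand(2, 3, 64, 80),
                             torch.rand(2, 1, 64, 80) * 10.0)
print(loss.item())
```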

  • Conference paper
    Huang B, Zheng J-Q, Giannarou S, Elson DS et al., 2022,

    H-Net: unsupervised attention-based stereo depth estimation leveraging epipolar geometry

    IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4459-4466

    Depth estimation from a stereo image pair has become one of the most explored applications in computer vision, with most previous methods relying on fully supervised learning settings. However, due to the difficulty in acquiring accurate and scalable ground truth data, the training of fully supervised methods is challenging. As an alternative, self-supervised methods are becoming more popular to mitigate this challenge. In this paper, we introduce the H-Net, a deep-learning framework for unsupervised stereo depth estimation that leverages epipolar geometry to refine stereo matching. For the first time, a Siamese autoencoder architecture is used for depth estimation which allows mutual information between rectified stereo images to be extracted. To enforce the epipolar constraint, a mutual epipolar attention mechanism has been designed that gives more emphasis to correspondences of features that lie on the same epipolar line while learning mutual information between the input stereo pair. Stereo correspondences are further enhanced by incorporating semantic information into the proposed attention mechanism. More specifically, the optimal transport algorithm is used to suppress attention and eliminate outliers in areas not visible in both cameras. Extensive experiments on KITTI2015 and Cityscapes show that the proposed modules are able to improve the performance of the unsupervised stereo depth estimation methods while closing the gap with the fully supervised approaches.
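
For rectified stereo images, epipolar lines coincide with image rows, so attention restricted to epipolar lines can be sketched as row-wise cross-attention between the two feature maps. This is an illustration of the constraint only, not the H-Net implementation, and the feature shapes and scaling are assumptions.

```python
# Illustration only (not the H-Net code): for a rectified stereo pair, compute
# cross-attention row by row, so each left-image feature attends only to
# right-image features on the same row, i.e. along its epipolar line.
import torch

def epipolar_cross_attention(feat_left, feat_right):
    """feat_left, feat_right: (B, C, H, W) features from a rectified stereo pair."""
    B, C, H, W = feat_left.shape
    q = feat_left.permute(0, 2, 3, 1).reshape(B * H, W, C)    # queries, one row at a time
    k = feat_right.permute(0, 2, 3, 1).reshape(B * H, W, C)   # keys/values from the same row
    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)   # (B*H, W, W)
    out = attn @ k                                                    # aggregate along the epipolar line
    return out.reshape(B, H, W, C).permute(0, 3, 1, 2).contiguous()   # back to (B, C, H, W)

# Example: aggregate right-image context into the left features.
fl, fr = torch.randn(2, 64, 32, 40), torch.randn(2, 64, 32, 40)
print(epipolar_cross_attention(fl, fr).shape)  # torch.Size([2, 64, 32, 40])
```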

  • Journal article
    Tukra S, Giannarou S, 2022,

    Randomly connected neural networks for self-supervised monocular depth estimation

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol: 10, Pages: 390-399, ISSN: 2168-1163

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.


Contact Us

General enquiries
hamlyn@imperial.ac.uk

Facility enquiries
hamlyn.facility@imperial.ac.uk


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ
Map location