
Head of Group

Dr Stamatia Giannarou

About us

The Cognitive Vision in Robotic Surgery Lab is developing computer vision and AI techniques for intraoperative navigation and real-time tissue characterisation.


What we do

Surgery is undergoing rapid change, driven by recent technological advances and the ongoing pursuit of early intervention and personalised treatment. We are developing computer vision and artificial intelligence techniques for intraoperative navigation and real-time tissue characterisation during minimally invasive and robot-assisted operations, to improve both the efficacy and safety of surgical procedures. Our work aims to revolutionise the treatment of cancers and pave the way for autonomous robot-assisted interventions.

Why is it important?

With recent advances in medical imaging, sensing, and robotics, surgical oncology is entering a new era of early intervention, personalised treatment, and faster patient recovery. The main goal is to completely remove cancerous tissue while minimising damage to surrounding areas. However, achieving this can be challenging, often leading to imprecise surgeries, high re-excision rates, and reduced quality of life due to unintended injuries. Therefore, technologies that enhance cancer detection and enable more precise surgeries may improve patient outcomes.

How can it benefit patients?

Our methods aim to ensure that patients receive accurate and timely surgical treatment while reducing surgeons' mental workload and minimising errors. By improving tumour excision, our hybrid diagnostic and therapeutic tools aim to lower recurrence rates and enhance survival outcomes. More complete tumour removal will also reduce the need for repeat procedures, improving patients' quality of life and life expectancy and bringing wider benefits to society and the economy.

Meet the team


Publications

  • Journal article
    Berthet-Rayne P, Sadati S, Petrou G, Patel N, Giannarou S, Leff DR, Bergeles C et al., 2021,

    MAMMOBOT: A Miniature Steerable Soft Growing Robot for Early Breast Cancer Detection

    IEEE Robotics and Automation Letters, Pages: 1-1
  • Journal article
    Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D et al., 2021,

    Ethical implications of AI in robotic surgical training: A Delphi consensus statement

    European Urology Focus, ISSN: 2405-4569

    Context: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. Objectives: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI and available technologies, by seeking consensus from an expert committee. Evidence acquisition: The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. 30 experts in AI implementation and/or training, including clinicians, academics and industry, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. Evidence synthesis: There was 100% response from all 3 rounds. The resulting formulated guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: 1. Data protection and privacy; 2. Reproducibility and transparency; 3. Predictive analytics; 4. Inherent biases; 5. Areas of training most likely to benefit from AI. Conclusions: Using the Delphi methodology, we achieved international consensus among experts to develop and reach…

  • Journal article
    Tukra S, Marcus HJ, Giannarou S, 2021,

    See-Through Vision with Unsupervised Scene Occlusion Reconstruction.

    IEEE Trans Pattern Anal Mach Intell, Vol: PP

    Among the greatest challenges of Minimally Invasive Surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obscure anatomical landmarks, reduce surgical vision and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised end-to-end deep learning framework, based on fully convolutional neural networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines perceptual, reconstruction, style, temporal and adversarial loss terms, for generating high-fidelity image reconstructions. Advancing the state of the art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location and orientation. The proposed method has been validated on in-vivo MIS video data, as well as natural scenes, on a range of occlusion-to-image ratios (OIR). (An illustrative sketch of a partial-convolution layer appears after the publication list below.)

  • Journal article
    Bautista-Salinas D, Kundrat D, Kogkas A, Abdelaziz MEMK, Giannarou S, Baena FRY et al., 2021,

    Integrated Augmented Reality Feedback for Cochlear Implant Surgery Instruments

    IEEE Transactions on Medical Robotics and Bionics, Vol: 3, Pages: 261-264
  • Conference paper
    Vieira Cartucho J, Wang C, Huang B, Elson D, Darzi A, Giannarou S et al., 2021,

    An Enhanced Marker Pattern that Achieves Improved Accuracy in Surgical Tool Tracking

    Joint MICCAI 2021 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0), Publisher: Taylor and Francis, ISSN: 2168-1163
  • Journal article
    Cartucho J, Tukra S, Li Y, Elson DS, Giannarou S et al., 2021,

    VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol: 9, Pages: 331-338, ISSN: 2168-1163

    Surgical robots rely on robust and efficient computer vision algorithms to be able to intervene in real time. The main problem, however, is that the training or testing of such algorithms, especially when using deep learning techniques, requires large endoscopic datasets which are challenging to obtain, since they require expensive hardware, ethical approvals, patient consent and access to hospitals. This paper presents VisionBlender, a solution to efficiently generate large and accurate endoscopic datasets for validating surgical vision algorithms. VisionBlender is a synthetic dataset generator that adds a user interface to Blender, allowing users to generate realistic video sequences with ground truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters. VisionBlender was built with a special focus on robotic surgery, and examples of endoscopic data that can be generated using this tool are presented. Possible applications are also discussed, and here we present one of those applications, where the generated data has been used to train and evaluate state-of-the-art 3D reconstruction algorithms. Being able to generate realistic endoscopic datasets efficiently, VisionBlender promises an exciting step forward in robotic surgery. (An illustrative sketch of how such ground-truth depth maps could be used to score a reconstruction method appears after the publication list below.)

  • Journal article
    Davids J, Manivannan S, Darzi A, Giannarou S, Ashrafian H, Marcus HJ et al., 2020,

    Simulation for skills training in neurosurgery: a systematic review, meta-analysis, and analysis of progressive scholarly acceptance

    Neurosurgical Review, Vol: 44, Pages: 1853-1867, ISSN: 0344-5607

    At a time of significant global unrest and uncertainty surrounding how the delivery of clinical training will unfold over the coming years, we offer a systematic review, meta-analysis, and bibliometric analysis of global studies showing the crucial role simulation will play in training. Our aim was to determine the types of simulators in use, their effectiveness in improving clinical skills, and whether we have reached a point of global acceptance. A PRISMA-guided global systematic review of the neurosurgical simulators available, a meta-analysis of their effectiveness, and an extended analysis of their progressive scholarly acceptance were performed on studies meeting our inclusion criteria of simulation in neurosurgical education. Improvement in procedural knowledge and technical skills was evaluated. Of the 7405 identified studies, 56 met the inclusion criteria, collectively reporting 50 simulator types ranging from cadaveric, low-fidelity, and part-task to virtual reality (VR) simulators. In all, 32 studies were included in the meta-analysis, including 7 randomised controlled trials. A random-effects, ratio-of-means effect measure quantified statistically significant improvement in procedural knowledge by 50.2% (ES 0.502; CI 0.355 to 0.649; p < 0.001), technical skill including accuracy by 32.5% (ES 0.325; CI −0.482 to −0.167; p < 0.001), and speed by 25% (ES −0.25; CI −0.399 to −0.107; p < 0.001). The initial number of VR studies (n = 91) was approximately double the number of refining studies (n = 45), indicating it is yet to reach progressive scholarly acceptance. There is strong evidence for a beneficial impact of adopting simulation in the improvement of procedural knowledge and technical skill. We show a growing trend towards the adoption of neurosurgical simulators, although we have not yet fully gained progressive scholarly acceptance for VR-based simulation technologies in neurosurgical education.

  • Conference paper
    Zhan J, Cartucho J, Giannarou S, 2020,

    Autonomous tissue scanning under free-form motion for intraoperative tissue characterisation

    2020 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 11147-11154

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of motion is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or translating with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue motion. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real-time. The desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require the learning of the tissue motion prior to scanning and can deal with free-form motion. We deployed this framework on the da Vinci® surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Our framework can be easily extended to other probe-based imaging modalities. (A simplified sketch of re-projecting a reference scanning trajectory onto the current frame appears after the publication list below.)

  • Journal article
    Huang B, Tsai Y-Y, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS et al., 2020,

    Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe

    International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 1389-1397, ISSN: 1861-6410

    Purpose: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced, where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and a pixel-intensity-based vertex detector, and are used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results: The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean error obtained is 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°. Conclusion: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hyb… (An illustrative sketch of marker-based pose estimation and probe-axis intersection appears after the publication list below.)

  • Journal article
    Giannarou S, Hacihaliloglu I, 2020,

    IJCARS - IPCAI 2020 special issue: 11th conference on information processing for computer-assisted interventions - part 1

    International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 737-738, ISSN: 1861-6410

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
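
The see-through vision work by Tukra et al. listed above is built around partial convolutions, which restrict each convolution to valid (non-occluded) pixels and re-normalise the result by the local mask coverage. The snippet below is a minimal, self-contained 2D illustration of that idea in PyTorch; it is not the authors' code, and the 3D temporal variant, the densely connected encoder-decoder and the combined loss described in the abstract are not reproduced here. All class and variable names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Minimal 2D partial convolution: convolve only over valid pixels,
    re-normalise by how many valid pixels fell under each window, and
    propagate an updated validity mask to the next layer."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel used only to count valid pixels per window.
        self.register_buffer("mask_kernel",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding
        self.window_size = kernel_size * kernel_size

    def forward(self, x, mask):
        # x: (B, C, H, W) features; mask: (B, 1, H, W), 1 = valid, 0 = occluded.
        with torch.no_grad():
            coverage = F.conv2d(mask, self.mask_kernel,
                                stride=self.stride, padding=self.padding)
            updated_mask = (coverage > 0).float()
            scale = self.window_size / coverage.clamp(min=1.0)
        out = self.conv(x * mask)                      # ignore occluded pixels
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale * updated_mask + bias
        return out, updated_mask

# Toy usage: a 64x64 frame with a square occlusion "hole".
x = torch.randn(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 20:40, 20:40] = 0.0
y, new_mask = PartialConv2d(3, 16)(x, mask)
print(y.shape, new_mask.mean().item())
```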
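
VisionBlender (Cartucho et al. above) renders ground-truth maps such as depth and disparity alongside synthetic endoscopic frames. As a hedged sketch of how that ground truth might be used to score a 3D reconstruction method, the snippet below computes simple masked depth-error statistics with NumPy; the commented file names and the toy data are illustrative only and do not describe VisionBlender's actual output format.

```python
import numpy as np

def depth_error(pred_depth: np.ndarray, gt_depth: np.ndarray) -> dict:
    """Compare a predicted depth map against a rendered ground-truth map,
    ignoring pixels with non-finite or zero ground truth (e.g. background)."""
    valid = np.isfinite(gt_depth) & (gt_depth > 0)
    err = np.abs(pred_depth[valid] - gt_depth[valid])
    return {
        "mae_mm": float(err.mean()),
        "rmse_mm": float(np.sqrt((err ** 2).mean())),
        "valid_pixels": int(valid.sum()),
    }

# Hypothetical usage (file names are made up):
# gt = np.load("frame_0001_gt_depth.npy")         # rendered ground truth, mm
# pred = my_stereo_method(left_img, right_img)    # algorithm under test
gt = np.full((480, 640), 55.0)                    # toy stand-in: flat tissue at 55 mm
pred = gt + np.random.default_rng(0).normal(0.0, 0.5, gt.shape)
print(depth_error(pred, gt))
```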
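
The autonomous tissue-scanning framework of Zhan et al. above keeps a scanning trajectory, defined once on a reference frame, registered to the moving tissue using projective geometry. The sketch below shows one simplified way such an update could look: a homography estimated from tracked surface features (a planar approximation that the original framework does not rely on) re-projects the reference waypoints into the current frame. The function name and the synthetic correspondences are illustrative.

```python
import cv2
import numpy as np

def update_scan_trajectory(ref_pts, cur_pts, waypoints_ref):
    """Re-project reference-frame scan waypoints into the current frame using
    a RANSAC homography fitted to tracked tissue-surface features."""
    H, _ = cv2.findHomography(ref_pts, cur_pts, cv2.RANSAC, 3.0)
    if H is None:
        return None                      # tracking failed for this frame
    wp = waypoints_ref.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(wp, H).reshape(-1, 2)

# Toy example with synthetic correspondences; in a real pipeline these would
# come from a feature tracker running on the endoscopic video.
rng = np.random.default_rng(0)
ref_pts = rng.uniform(100, 500, size=(40, 2)).astype(np.float32)
true_H = np.array([[1.02, 0.01, 5.0],
                   [-0.01, 0.98, -3.0],
                   [0.0, 0.0, 1.0]])
cur_pts = cv2.perspectiveTransform(ref_pts.reshape(-1, 1, 2), true_H).reshape(-1, 2)
waypoints_ref = np.array([[150, 150], [250, 150], [350, 150], [350, 250]], np.float32)
print(update_scan_trajectory(ref_pts, cur_pts, waypoints_ref))
```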
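
The gamma-probe tracking paper by Huang et al. above estimates the probe pose from a dual-pattern marker and then intersects the probe axis with the tissue surface to show where the sensed activity originates. The snippet below is a generic, hedged illustration of those two steps with OpenCV: pose from a known planar marker layout via a PnP solver, followed by a ray-plane intersection standing in for the point-cloud intersection used in the paper. The intrinsics, marker layout and plane are made-up values, and the marker z-axis is simply assumed to coincide with the probe axis.

```python
import cv2
import numpy as np

# 3D layout of 5x4 marker points in the marker frame (mm); values are invented.
obj_pts = np.array([[x, y, 0.0] for y in range(4) for x in range(5)],
                   dtype=np.float32) * 5.0
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                     # toy camera intrinsics
dist = np.zeros(5)

# In a real system img_pts would come from chessboard-corner / circular-dot
# detection; here they are synthesised from a known pose so the sketch runs.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[10.0], [-5.0], [120.0]])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)

# Recover the marker (probe) pose with a PnP solver.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)

# Treat the marker z-axis as the probe axis (an assumption for this sketch)
# and intersect that ray with a plane n.p = d approximating the tissue surface.
origin, axis = tvec.reshape(3), R[:, 2]
n, d = np.array([0.0, 0.0, 1.0]), 150.0
t = (d - n @ origin) / (n @ axis)
print("estimated rvec:", rvec.ravel())
print("sensing point on tissue:", origin + t * axis)
```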


Contact Us

General enquiries
hamlyn@imperial.ac.uk

Facility enquiries
hamlyn.facility@imperial.ac.uk


The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ