We use perceptual methods, AI, and frugal robotics innovation to deliver transformative diagnostic and treatment solutions.
Head of Group
B415B Bessemer Building
South Kensington Campus
+44 (0)20 3312 5145
What we do
The HARMS lab leverages perceptually enabled methodologies, artificial intelligence, and frugal innovation in robotics (such as soft surgical robots) to deliver transformative solutions for diagnosis and treatment. Our research is driven by both problem-solving and curiosity, aiming to build a comprehensive understanding of the actions, interactions, and reactions occurring in the operating room. We focus on using robotic technologies to facilitate procedures that are not yet widely adopted, particularly in endoluminal surgery, such as advanced treatments for gastrointestinal cancer.
Meet the team
Search results
- Conference paper: Stoyanov D, Mylonas G, Deligianni F, et al., 2005, "Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures", Medical Image Computing and Computer Assisted Intervention (MICCAI 2005), pp. 139-146.
  Abstract: In robotically assisted laparoscopic surgery, soft-tissue motion tracking and structure recovery are important for intraoperative surgical guidance, motion compensation and delivering active constraints. In this paper, we present a novel method for feature-based motion tracking of deformable soft-tissue surfaces in totally endoscopic coronary artery bypass graft (TECAB) surgery. We combine two feature detectors to recover distinct regions on the epicardial surface for which the sparse 3D surface geometry may be computed using a pre-calibrated stereo laparoscope. The movement of the 3D points is then tracked in the stereo images with stereo-temporal constraints using an iterative registration algorithm. The practical value of the technique is demonstrated on both a deformable phantom model with tomographically derived surface geometry and in vivo robotic assisted minimally invasive surgery (MIS) image sequences. (A minimal stereo triangulation sketch follows the results list below.)
- Conference paper: Mylonas G, Stoyanov D, Deligianni F, et al., 2005, "Gaze-contingent soft tissue deformation tracking for minimally invasive robotic surgery", Medical Image Computing and Computer Assisted Intervention (MICCAI 2005), pp. 843-850.
  Abstract: The introduction of surgical robots in Minimally Invasive Surgery (MIS) has allowed enhanced manual dexterity through the use of microprocessor-controlled mechanical wrists. Although fully autonomous robots are attractive, both ethical and legal barriers can prohibit their practical use in surgery. The purpose of this paper is to demonstrate that it is possible to use real-time binocular eye tracking to empower robots with human vision by using knowledge acquired in situ. By exploiting the close relationship between horizontal disparity and depth perception, which varies with viewing distance, it is possible to use ocular vergence to recover the 3D motion and deformation of the soft tissue during MIS procedures. Both phantom and in vivo experiments were carried out to assess the potential frequency limit of the system and its intrinsic depth recovery accuracy. Potential applications of the technique include motion stabilization and intra-operative planning in the presence of large tissue deformation. (A vergence-to-depth geometry sketch also follows below.)
- Conference paper: Mylonas GP, Darzi A, Yang GZ, 2004, "Gaze contingent depth recovery and motion stabilisation for minimally invasive robotic surgery", 2nd International Workshop on Medical Imaging and Augmented Reality (MIAR 2004), Beijing, People's Republic of China, Publisher: Springer-Verlag, Berlin, pp. 311-319.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
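The first abstract above describes recovering sparse 3D geometry of the epicardial surface from a pre-calibrated stereo laparoscope. The sketch below illustrates only the underlying stereo relation, depth from horizontal disparity in a rectified image pair; the function, calibration values, and feature coordinates are illustrative assumptions, not the paper's feature detection or stereo-temporal registration pipeline.

```python
import numpy as np

def triangulate_rectified(pts_left, pts_right, f, baseline, cx, cy):
    """Recover sparse 3D points from matched features in a rectified,
    pre-calibrated stereo pair (illustrative values, not the lab's calibration).

    pts_left, pts_right : (N, 2) pixel coordinates (u, v); rows are assumed
                          to correspond to the same surface feature.
    f        : focal length in pixels
    baseline : stereo baseline in millimetres
    cx, cy   : principal point in pixels
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)

    # Horizontal disparity between the two views of each feature.
    disparity = pts_left[:, 0] - pts_right[:, 0]
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)

    # Pinhole relations for a rectified pair: Z = f * B / d.
    Z = f * baseline / disparity
    X = (pts_left[:, 0] - cx) * Z / f
    Y = (pts_left[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])


if __name__ == "__main__":
    # Two hypothetical matched features; units are pixels and millimetres.
    left = [[420.0, 310.0], [505.0, 298.0]]
    right = [[398.0, 310.0], [489.0, 298.0]]
    print(triangulate_rectified(left, right, f=900.0, baseline=5.0,
                                cx=384.0, cy=288.0))
```

In the paper itself, the left/right correspondences come from the two feature detectors and the iterative registration step, and the calibration comes from the pre-calibrated stereo laparoscope rather than the example values used here.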
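The two gaze-contingent papers above rely on the relationship between ocular vergence and viewing distance. The following is a minimal sketch of that geometry under the simplifying assumption of a fixation point on the midline between the two viewpoints; the function and baseline value are hypothetical and stand in for the calibrated binocular eye-tracking setup described in the abstracts.

```python
import math

def depth_from_vergence(alpha_deg, baseline_mm=63.0):
    """Estimate fixation depth from the binocular vergence angle.

    Assumes the fixation point lies on the midline between the two
    viewpoints, so that Z = (B / 2) / tan(alpha / 2).

    alpha_deg   : vergence angle between the two lines of sight, in degrees
    baseline_mm : interocular (or camera) baseline; 63 mm is an
                  illustrative average, not a system parameter.
    """
    alpha = math.radians(alpha_deg)
    if alpha <= 0:
        raise ValueError("Vergence angle must be positive for a finite depth.")
    return (baseline_mm / 2.0) / math.tan(alpha / 2.0)


if __name__ == "__main__":
    # Smaller vergence angles correspond to more distant fixation points.
    for angle in (1.0, 2.0, 4.0, 8.0):
        depth = depth_from_vergence(angle)
        print(f"vergence {angle:4.1f} deg -> depth ~ {depth:7.1f} mm")
```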
Contact Us
The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ