We use perceptual methods, AI, and frugal robotics innovation to deliver transformative diagnostic and treatment solutions.

Head of Group

Dr George Mylonas

B415B Bessemer Building
South Kensington Campus

+44 (0)20 3312 5145

YouTube: HARMS Lab

What we do

The HARMS lab leverages perceptually enabled methodologies, artificial intelligence, and frugal innovation in robotics (such as soft surgical robots) to deliver transformative solutions for diagnosis and treatment. Our research is driven by both problem-solving and curiosity, aiming to build a comprehensive understanding of the actions, interactions, and reactions occurring in the operating room. We focus on using robotic technologies to facilitate procedures that are not yet widely adopted, particularly in endoluminal surgery, such as advanced treatments for gastrointestinal cancer.

Meet the team

Mr Junhong Chen
Research Postgraduate

Dr Adrian Rubio Solis
Research Associate in Sensing and Machine Learning

Citation

BibTeX format

@article{Kogkas:2017:10.1007/s11548-017-1580-y,
author = {Kogkas, AA and Darzi, A and Mylonas, GP},
doi = {10.1007/s11548-017-1580-y},
journal = {International Journal of Computer Assisted Radiology and Surgery},
pages = {1131--1140},
title = {Gaze-contingent perceptually enabled interactions in the operating theatre.},
url = {http://dx.doi.org/10.1007/s11548-017-1580-y},
volume = {12},
year = {2017}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information - especially perceptually enabled ones - from multiple sources could help to meet the above goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm and laser pointer is integrated and the set-up is used to project the surgeon's fixation point in 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92-212 cm and between the robot and the targets of 42-193 cm. The median overall system error is currently 3.98 cm. Its real-time potential is also highlighted. CONCLUSIONS: The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.
AU - Kogkas,AA
AU - Darzi,A
AU - Mylonas,GP
DO - 10.1007/s11548-017-1580-y
EP - 1140
PY - 2017///
SN - 1861-6410
SP - 1131
TI - Gaze-contingent perceptually enabled interactions in the operating theatre.
T2 - International Journal of Computer Assisted Radiology and Surgery
UR - http://dx.doi.org/10.1007/s11548-017-1580-y
UR - http://hdl.handle.net/10044/1/48208
VL - 12
ER -
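
The cited paper recovers the surgeon's 3D fixation point by combining wearable eye-tracking with SLAM, then aims a robot-mounted laser at that point. As a rough illustration of the underlying geometry only (a minimal sketch, not the authors' implementation), the Python snippet below intersects a gaze ray, expressed in the SLAM world frame, with a planar approximation of the target surface; the function name and all inputs are hypothetical.

import numpy as np

def fixation_point_3d(head_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray with a planar scene patch.

    head_pos     -- (3,) eye position in the SLAM world frame
    gaze_dir     -- (3,) unit gaze direction in the same frame
    plane_point  -- (3,) any point on the target surface plane
    plane_normal -- (3,) unit normal of that plane
    Returns the 3D fixation point, or None if the gaze ray is
    parallel to the plane or the hit lies behind the viewer.
    """
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:   # gaze ray runs parallel to the surface
        return None
    t = np.dot(plane_normal, plane_point - head_pos) / denom
    if t <= 0:              # intersection behind the eye
        return None
    return head_pos + t * gaze_dir

# Example: a viewer at eye height 1.6 m looking straight ahead at a
# surface 1.5 m away; the recovered fixation point is [0, 1.6, 1.5].
point = fixation_point_3d(
    head_pos=np.array([0.0, 1.6, 0.0]),
    gaze_dir=np.array([0.0, 0.0, 1.0]),
    plane_point=np.array([0.0, 0.0, 1.5]),
    plane_normal=np.array([0.0, 0.0, -1.0]),
)

In the paper's setting the intersection is taken against the SLAM-reconstructed scene rather than a single plane, and the resulting point drives the robotic arm and laser pointer; the reported median system error over 60 fixations was 3.98 cm.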

Contact Us

The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ