We use perceptual methods, AI, and frugal robotics innovation to deliver transformative diagnostic and treatment solutions.

Head of Group

Dr George Mylonas

B415B Bessemer Building
South Kensington Campus

+44 (0)20 3312 5145

YouTube: HARMS Lab

What we do

The HARMS lab leverages perceptually enabled methodologies, artificial intelligence, and frugal innovation in robotics (such as soft surgical robots) to deliver transformative solutions for diagnosis and treatment. Our research is driven by both problem-solving and curiosity, aiming to build a comprehensive understanding of the actions, interactions, and reactions occurring in the operating room. We focus on using robotic technologies to facilitate procedures that are not yet widely adopted, particularly in endoluminal surgery, such as advanced treatments for gastrointestinal cancer.

Meet the team

Mr Junhong Chen
Research Postgraduate

Dr Adrian Rubio Solis
Research Associate in Sensing and Machine Learning

Citation

BibTeX format

@inproceedings{Wang:2019:10.1109/IROS.2018.8594045,
author = {Wang, M-Y and Kogkas, AA and Darzi, A and Mylonas, GP},
doi = {10.1109/IROS.2018.8594045},
pages = {2355--2361},
publisher = {IEEE},
title = {Free-view, 3D gaze-guided, assistive robotic system for activities of daily living},
url = {http://dx.doi.org/10.1109/IROS.2018.8594045},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - Patients suffering from quadriplegia have limited body motion, which prevents them from performing daily activities. We have developed an assistive robotic system with an intuitive free-view gaze interface. The user's point of regard is estimated in 3D space while allowing free head movement and is combined with object recognition and trajectory planning. This framework allows the user to interact with objects using fixations. Two operational modes have been implemented to cater for different eventualities. The automatic mode performs a pre-defined task associated with a gaze-selected object, while the manual mode allows gaze control of the robot's end-effector position in the user's frame of reference. User studies reported effortless operation in automatic mode. A manual pick and place task achieved a success rate of 100% on the users' first attempt.
AU - Wang,M-Y
AU - Kogkas,AA
AU - Darzi,A
AU - Mylonas,GP
DO - 10.1109/IROS.2018.8594045
EP - 2361
PB - IEEE
PY - 2019///
SN - 2153-0858
SP - 2355
TI - Free-view, 3D gaze-guided, assistive robotic system for activities of daily living
UR - http://dx.doi.org/10.1109/IROS.2018.8594045
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000458872702049&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - http://hdl.handle.net/10044/1/81399
ER -
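
The automatic/manual split described in the abstract above can be pictured as a simple dispatch over the user's fixations. The following Python sketch is purely illustrative: every name in it (Fixation, SceneObject, select_object, run_predefined_task, move_end_effector_to, and the dwell threshold) is a hypothetical stand-in, not the actual API of the published system.

# Illustrative sketch only; all names below are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Fixation:
    point_3d: Tuple[float, float, float]  # 3D point of regard, head movement compensated
    duration_s: float                     # dwell time used to confirm a selection

@dataclass
class SceneObject:
    label: str                            # from the object-recognition stage
    position_3d: Tuple[float, float, float]

DWELL_THRESHOLD_S = 1.0  # assumed dwell time that turns a glance into a command

def select_object(fix: Fixation, objects: List[SceneObject]) -> Optional[SceneObject]:
    """Return the recognised object nearest the fixation point, if the dwell was long enough."""
    if fix.duration_s < DWELL_THRESHOLD_S or not objects:
        return None
    return min(objects,
               key=lambda o: sum((a - b) ** 2
                                 for a, b in zip(o.position_3d, fix.point_3d)))

def control_step(mode: str, fix: Fixation, objects: List[SceneObject], robot) -> None:
    if mode == "automatic":
        # Automatic mode: a gaze-selected object triggers its pre-defined task.
        target = select_object(fix, objects)
        if target is not None:
            robot.run_predefined_task(target.label)
    elif mode == "manual":
        # Manual mode: the 3D point of regard directly commands the
        # end-effector position in the user's frame of reference.
        robot.move_end_effector_to(fix.point_3d)

The dwell-time gate is one common way gaze interfaces separate deliberate fixations from casual glances; the paper itself should be consulted for how selection and control were actually implemented.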

Contact Us

The Hamlyn Centre
Bessemer Building
South Kensington Campus
Imperial College
London, SW7 2AZ