How can robots understand the actions of other agents (humans and other robots), infer their internal states and intentions, and build models of their behaviour to facilitate better interaction and collaboration? Our laboratory’s interdisciplinary research spans several areas, including human-centred computer vision, machine learning, multiscale user modelling, cognitive architectures, and shared control. We aim to advance fundamental theoretical concepts in these fields without ignoring the engineering challenges of the real world, so our experiments involve real robots, humans, and tasks.

 

Feel free to contact us if you have any queries, are interested in joining us as a student or a researcher, or have a great idea for scientific collaboration.

Research Themes

Multimodal Perception of Human Actions and Inference of Internal Human States

At the core of our research lies the robot system’s ability to perceive what humans are doing and to infer their internal cognitive states, including their beliefs and intentions. We use multimodal signals (RGB-D, event-based (DVS), thermal, haptic, and audio) to perform this inference, and we research most steps of the human action perception pipeline: eye tracking, pose estimation and tracking, human motion analysis, and action segmentation. We collect and publish our datasets for the community’s benefit (see the Software section of this website).
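
To give a flavour of one step of such a pipeline, here is a minimal, purely illustrative sketch of frame-wise action segmentation over fused multimodal features. The early fusion by concatenation, the nearest-centroid classifier, and all variable names are assumptions for illustration, not the perception pipeline used in the laboratory.

```python
import numpy as np

def segment_actions(pose_feats, audio_feats, centroids):
    """Assign a per-frame action label by nearest centroid, then merge runs.

    pose_feats: (T, Dp) array, audio_feats: (T, Da) array,
    centroids: dict mapping action label -> (Dp + Da,) feature centroid.
    """
    fused = np.concatenate([pose_feats, audio_feats], axis=1)  # early fusion
    labels = list(centroids)
    dists = np.stack(
        [np.linalg.norm(fused - centroids[l], axis=1) for l in labels]
    )
    per_frame = [labels[i] for i in dists.argmin(axis=0)]
    # Collapse consecutive identical labels into (label, start, end) segments.
    segments, start = [], 0
    for t in range(1, len(per_frame) + 1):
        if t == len(per_frame) or per_frame[t] != per_frame[start]:
            segments.append((per_frame[start], start, t))
            start = t
    return segments
```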

 

Key Publications:

Multiscale Human Modelling and Human-in-the-Loop Digital Twins

We are interested in learning and maintaining multiscale human models to personalise the contextual and temporal appropriateness of the assistance our robots provide – “how should we help this person, and when?”. We typically represent humans at multiple levels of abstraction: how they move (spatiotemporal trajectories, captured with statistical and neural network representations), how they solve tasks (action sequences, captured with context-free grammatical representations), and how they use assistive equipment. Hence we term these “multiscale” models.
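
To make the “multiscale” idea concrete, here is a small, hypothetical sketch of such a model: a per-person record that pairs a low-level motion summary with a high-level context-free task grammar. The class name, the tea-making grammar, and the mean-trajectory field are illustrative assumptions, not the laboratory’s actual representations.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class MultiscaleUserModel:
    # Motion scale: a statistical summary of how this person moves,
    # here simply a mean hand trajectory of shape (T, 3).
    mean_trajectory: np.ndarray
    # Task scale: a context-free grammar over action symbols,
    # mapping each nonterminal to its possible expansions.
    task_grammar: dict = field(default_factory=dict)

    def sample_action_sequence(self, symbol, rng=None):
        """Sample one action sequence licensed by the task grammar."""
        if rng is None:
            rng = np.random.default_rng(0)
        if symbol not in self.task_grammar:      # terminal action symbol
            return [symbol]
        options = self.task_grammar[symbol]
        expansion = options[rng.integers(len(options))]
        sequence = []
        for child in expansion:
            sequence.extend(self.sample_action_sequence(child, rng))
        return sequence

# Hypothetical example: two orders in which a person prepares tea.
model = MultiscaleUserModel(
    mean_trajectory=np.zeros((100, 3)),
    task_grammar={"MAKE_TEA": [["BOIL", "POUR", "STIR"],
                               ["BOIL", "STIR", "POUR"]]},
)
print(model.sample_action_sequence("MAKE_TEA"))  # e.g. ['BOIL', 'POUR', 'STIR']
```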

Key Publications:

Skill Representation and Learning in Humans and Robots

We are researching the learning processes that allow humans and robots to acquire and represent sensorimotor skills. Our research spans several representational paradigms, from embodiment-oriented statistical and neural representations to more cognition-oriented ontological and knowledge-graph-based representations. We use active learning (motor and goal babbling, and exploration) and social learning (e.g. observation and imitation of humans) to expand the range of skills our robots have.
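
As an illustration of what active learning by goal babbling involves, the sketch below repeatedly samples a goal in task space, reuses the motor command whose remembered outcome is closest, perturbs it, and stores the newly observed outcome. The toy two-link arm, the nearest-neighbour inverse model, and all parameters are illustrative assumptions rather than the laboratory’s actual learning systems.

```python
import numpy as np

def forward(q):
    """Toy forward model: planar 2-link arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

rng = np.random.default_rng(0)
# Seed the memory with a few random motor commands and their outcomes.
memory = [(forward(q), q) for q in rng.uniform(-np.pi, np.pi, size=(5, 2))]

for _ in range(200):
    goal = rng.uniform(-2.0, 2.0, size=2)            # sample a goal in task space
    outcomes = np.array([o for o, _ in memory])
    nearest = int(np.argmin(np.linalg.norm(outcomes - goal, axis=1)))
    q_try = memory[nearest][1] + rng.normal(scale=0.1, size=2)  # explore locally
    memory.append((forward(q_try), q_try))           # learn from the observed outcome
```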

Key Publications:

Motion Planning in Individual and Collaborative Tasks

How can robots plan and execute actions in an optimal, safe, and trustworthy manner? We research robot motion generation algorithms in individual and collaborative tasks, paying particular attention to how control can be shared between human and robot collaborators. Beyond fundamental issues in human-robot shared and collaborative control, we are also interested in interactions between multiple humans and robots, for example triadic interactions between an assistive robot, an assisted person, and their carer.
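
A common way to frame shared control is as an arbitration between the human’s command and the robot’s assistive policy. The sketch below shows the standard linear-blending formulation with a hypothetical agreement-based confidence heuristic; it illustrates the general idea, not the controllers we deploy.

```python
import numpy as np

def blend(human_vel, robot_vel, alpha):
    """Confidence-weighted blend: alpha=0 is full human control, alpha=1 full autonomy."""
    return (1.0 - alpha) * np.asarray(human_vel) + alpha * np.asarray(robot_vel)

# Hypothetical heuristic: assist more when the two commands agree (cosine similarity).
human = np.array([0.4, 0.1])    # e.g. joystick velocity command
robot = np.array([0.5, 0.0])    # e.g. assistive policy's suggestion
agreement = float(human @ robot /
                  (np.linalg.norm(human) * np.linalg.norm(robot) + 1e-9))
alpha = float(np.clip(agreement, 0.0, 1.0))
print(blend(human, robot, alpha))
```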

Key Publications:

Mixed Reality for Human-Robot Interaction

Virtual and augmented reality interfaces have great potential for enhancing the interaction between humans and complex robot systems. Our research investigates how visualising and interacting with mixed reality information (for example, dynamic signifiers or dynamically determined affordances) can facilitate human-robot collaboration through enhanced explainability and greater fluidity and efficiency of control.

Key Publications:

Trust and Privacy in Human-Robot Interaction

As robots become more integrated into our lives, it is crucial that they understand and respect privacy and trust. We focus on three key areas: 1) enabling robots to gauge human trust and adapt their behaviour accordingly, 2) designing robots with clear and understandable decision-making processes, and 3) ensuring that robots learn personalised behaviours without compromising user privacy.
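
One way to learn personalised behaviours without centralising user data is federated learning, as in the wheelchair-assistance paper listed below. Here is a minimal federated-averaging sketch under simplifying assumptions (a linear model, synchronous rounds); it illustrates the general scheme only, not the system described in that paper.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """Train a linear model on one user's own data (the data never leaves the device)."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)   # gradient step on squared error
    return w

rng = np.random.default_rng(1)
w_global = np.zeros(3)
# Hypothetical per-user datasets, e.g. features and labels from demonstrations.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

for _ in range(10):                               # communication rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)     # server averages weights only
```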

Key Publications:

  • C. Goubard and Y. Demiris, 2024, Learning Self-Confidence from Semantic Action Embeddings for Improved Trust in Human-Robot Interaction, in 2024 IEEE International Conference on Robotics and Automation (ICRA).
  • F. Estevez Casado and Y. Demiris, 2022, Federated learning from demonstration for active assistance to smart wheelchair user, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9326-9331.

Caregiving Robotics

Caregiving robotics, a subset of assistive robotics, develops robots that support care-related tasks for elderly individuals and people with mobility impairments, aiming to improve their quality of life and independence while also reducing the workload on human caregivers.

Key Publications:
