Today people leave digital breadcrumbs wherever they go and whatever they do, online and offline. This data dramatically increases our capacity to understand and influence the behavior of individuals and collectives, and it has been key to recent advances in AI, but it also raises fundamentally new privacy and fairness questions. The Computational Privacy Group at Imperial aims to provide leadership, in the UK and beyond, in the safe, anonymous, and ethical use of large-scale behavioral datasets coming from Internet of Things (IoT) devices, mobile phones, credit cards, browsers, etc.


Selected projects

Detecting illegal content

Proposed mechanisms to detect illegal content can be easily evaded

In this project, the Computational Privacy Group showed that current mechanisms for detecting illegal content, known as perceptual hashing and proposed by governments, tech companies, and researchers, are not robust and could easily be bypassed by attackers aiming to evade detection online.

In the study, published at the 31st USENIX Security Symposium, the team showed that 99.9% of modified images successfully bypassed the system undetected while preserving the content of the image.

Through a large-scale evaluation of five commonly used algorithms for detecting illegal content, including the PDQ algorithm developed by Facebook, the team showed that modified images can avoid detection while remaining visually very similar to the originals.
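To illustrate the idea behind these attacks, the following is a minimal sketch of a simple perceptual hash (an "average hash"), not the PDQ algorithm or the group's actual method: an image is reduced to a bit string, and detection compares bit strings against a database of known content by Hamming distance. A visually minor modification can flip enough bits to push an image past a matching threshold. The toy 8x8 image and threshold here are illustrative assumptions.

```python
# Sketch of a simple perceptual hash ("average hash"): one bit per
# pixel, set if the pixel is brighter than the image mean. This is
# an illustration of the general technique, not PDQ.

def average_hash(pixels):
    """Hash an 8x8 grayscale image as a list of 64 bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 8x8 "image": a bright square (value 200) on a dark background (30).
img = [[200 if 2 <= r <= 5 and 2 <= c <= 5 else 30 for c in range(8)]
       for r in range(8)]
h_orig = average_hash(img)

# A visually minor modification: slightly brighten the top row so those
# pixels cross the image mean and their hash bits flip.
modified = [row[:] for row in img]
for c in range(8):
    modified[0][c] = 120  # still dark-ish to the eye, but above the mean

h_mod = average_hash(modified)
print(hamming(h_orig, h_mod))  # → 8 bits differ out of 64
```

Eight flipped bits out of 64 can already exceed a typical near-duplicate threshold, so the modified image would no longer match the database entry even though it looks almost identical, which is the evasion the study demonstrates at scale against real algorithms.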

Find out more about the project in this Imperial News Story.