Imperial’s Computational Privacy Group lead session for London Data Week


London Data Week event at Imperial College London

Privacy experts from the Data Science Institute led a session for London Data Week exploring the robustness of modern data privacy systems.

London Data Week is a citywide festival of data, organised by LOTI, The Alan Turing Institute and the Mayor of London, inviting people to learn, create, discuss, and explore how to use data to shape London for the better.

On 2 July, privacy experts and tech enthusiasts packed out a room at Imperial College London to attend a discussion on data privacy, led by Postgraduate Researchers Nataša Krčo and Igor Shilov from Imperial’s Computational Privacy Group.

The pair explained some of their research investigating the robustness of current data privacy systems, presenting their findings on privacy attacks against synthetic data and machine learning classifiers, and on the copyright implications of large language models.

Today, people leave digital breadcrumbs wherever they go, and sometimes this data can leak in unexpected ways. We have methods of securing and encrypting data to protect against these leaks or malicious hacks, but how effective are they?

Nataša’s work focuses on using membership inference attacks (MIAs) to accurately evaluate the privacy risk of releasing machine learning models or synthetic datasets trained on private data.

Membership inference attacks, where an attacker or auditor attempts to infer the presence of a particular record in the training set of an ML model, are often used to measure the privacy risk of the model and record. A successful MIA indicates that too much information is being leaked by the model, and that it may not be safe to release. MIAs must therefore be evaluated in carefully designed setups to ensure that the risk being measured is accurate to both the model and the record.
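To illustrate the idea, here is a minimal sketch of one common flavour of MIA, a loss-threshold attack: a model typically incurs lower loss on records it was trained on, so an auditor can guess "member" when the loss on a record is below a threshold. The losses below are simulated for illustration, not produced by a real model, and the threshold is an assumed value.

```python
import random

random.seed(0)

# Simulated per-record losses: training-set members tend to have
# lower model loss than non-members (illustrative numbers only).
member_losses = [random.gauss(0.2, 0.1) for _ in range(100)]
nonmember_losses = [random.gauss(0.6, 0.2) for _ in range(100)]

def loss_threshold_mia(loss, threshold=0.4):
    """Predict membership: low loss suggests the record was in training."""
    return loss < threshold

# Evaluate the attack: fraction of correct membership guesses.
correct = sum(loss_threshold_mia(l) for l in member_losses)
correct += sum(not loss_threshold_mia(l) for l in nonmember_losses)
accuracy = correct / 200
print(f"attack accuracy: {accuracy:.2f}")
```

An attack accuracy well above 0.5 (random guessing) would indicate the model leaks information about its training data; real evaluations use much more careful setups, as the talk emphasised.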

Igor presented his work on the use of copyright traps in large language models to detect the presence of copyrighted materials.

Copyright traps are false or unique pieces of information intentionally inserted into copyrighted material. These traps serve as markers to help identify whether someone has copied or used the original content without permission. By including these traps, creators can detect unauthorised use of their work and protect their intellectual property rights.
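A toy sketch of the trap idea: derive a unique, implausible sentence from the owner's and document's identifiers, embed it in the content, and later check whether a suspect corpus (or model output) reproduces it. The hashing scheme and sentence template here are hypothetical, chosen only to make the example self-contained.

```python
import hashlib

def make_trap(owner_id: str, doc_id: str) -> str:
    # Hypothetical scheme: a unique sentence derived from a hash,
    # unlikely to occur anywhere except the trapped document.
    tag = hashlib.sha256(f"{owner_id}:{doc_id}".encode()).hexdigest()[:12]
    return f"The archivist catalogued specimen {tag} at dawn."

def contains_trap(suspect_text: str, owner_id: str, doc_id: str) -> bool:
    # Detection step: did the suspect text reproduce our trap sentence?
    return make_trap(owner_id, doc_id) in suspect_text

original = "Some copyrighted article text. " + make_trap("acme", "article-7")

print(contains_trap(original, "acme", "article-7"))          # True
print(contains_trap("unrelated text", "acme", "article-7"))  # False
```

Detecting traps memorised by a large language model is harder than this substring check suggests; Igor's research studies exactly when such traps remain detectable after training.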

The event went down well with attendees who found it “interesting and informative”, and they were able to network over pizza and drinks after the talks.

Reporter

Gemma Ralton
Faculty of Engineering


Contact details

Email: gemma.ralton@imperial.ac.uk
