
The Computational Privacy Group at Imperial College London is organizing the first Machine Learning Privacy meetup. Recognizing the growing community of researchers in and around London working at the intersection of privacy and machine learning, we believe this to be a great opportunity for like-minded individuals to connect and share perspectives. 

What?

The meetup will feature a series of short research talks (10-15 minutes each) from a diverse group of researchers, covering multiple aspects of privacy in machine learning. We will also invite early-career academics to present brief ‘lightning talks’ on their current work. After the talks, we’ll host a happy hour to encourage further connections. If you would like to sign up for a lightning talk, please submit your proposal here and we will be in touch.

Agenda: 

6:00-6:15pm: Welcome

6:15-7:15pm: Research talks by invited speakers (see below)

7:15-7:30pm: Lightning talks

7:30-9:00pm: Happy hour

Who?

The meetup is free and open to anyone interested. We especially invite researchers from academia, industry research labs, and startups working at the intersection of machine learning and privacy. While registration is not mandatory, we kindly ask that you RSVP by registering here so we can better estimate attendance.

When?

The event will take place on Thursday, November 7th, from 6pm to 9pm at the South Kensington campus of Imperial College London.

Confirmed speaker list

We are happy to announce the speakers confirmed so far:

Graham Cormode

University of Warwick, Meta AI

Graham Cormode is a senior research scientist at Meta, and a professor in the Department of Computer Science at the University of Warwick. His research interests are in data privacy, data stream analysis, massive data sets, and general algorithmic problems. His work on the statistical analysis of data has been recognized with the 2017 Adams Prize in Mathematics and his election as a Fellow of the ACM.

Talk: Federated Computation for Private Data Analysis
Lukas Wutschitz

M365 Research, Microsoft

Lukas Wutschitz is a research scientist at Microsoft Cambridge. He is interested in privacy-preserving machine learning, with a focus on large language models. Prior to joining Microsoft, Lukas obtained his PhD in Physics from the University of Cambridge.

Talk: Empirical privacy risk estimation in LLMs
Jamie Hayes

Google DeepMind

Jamie Hayes is a research scientist at Google DeepMind, working on security, privacy, and machine learning.

Talk: Stealing User Prompts from Mixture-of-Experts models
Ilia Shumailov

Google DeepMind

Ilia Shumailov holds a PhD in Computer Science from the University of Cambridge, specializing in machine learning and computer security. He is currently a Research Scientist on Google DeepMind’s Security and Privacy team.

Talk: What does it mean to operationalise privacy?

Getting here