Imperial @ NeurIPS 2019

About the Workshop

Neural Information Processing Systems (NeurIPS) is a prestigious peer-reviewed conference that publishes cutting-edge research in Artificial Intelligence, Machine Learning, and related areas. The Imperial College London research community, drawn from various departments and groups, has long made its presence felt at NeurIPS. The Data Science Institute (DSI) aims to create a common platform across the departments of Imperial College London for presenting their accepted NeurIPS work to the wider Imperial research community.

The Imperial @ NeurIPS 2019 workshop will take place on 28 November at 170 Queen’s Gate.

Registration for the workshop is now open! Please register for free on Eventbrite.


Program & Organisation Committee:

Dr. K S Sesh Kumar, Prof. Alessandra Russo, Prof. Yi-ke Guo, and Dr. Kai Sun

Call for Posters

Space in the poster session for NeurIPS 2019 workshop papers is limited. Kindly submit your papers at https://easychair.org/my/conference?conf=imperialneurips2019#. We will shortlist papers based on the available space.

Schedule: Imperial @ NeurIPS 2019


10:00 - 10:10 Welcome and Opening
 - Prof. Yi-ke Guo, Director of the Data Science Institute

10:10 - 10:30 Self-Supervised Generalisation with Meta Auxiliary Learning - Shikun Liu / Dr. Edward Johns

10:30 - 10:50 Bridging Machine Learning and Logical Reasoning by Abductive Learning - Dr. Wang-Zhou Dai

10:50 - 11:20 Poster Session (with coffee break)

11:20 - 11:40 Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm - Giulia Luise

11:40 - 12:00 Localized Structured Prediction - Dr. Carlo Ciliberto

12:00 - 12:20 Fast Decomposable Submodular Function Minimization using Constrained Total Variation - Dr. K S Sesh Kumar

12:20 - 13:40 Lunch break 

13:40 - 14:00 Domain Generalization via Model-Agnostic Learning of Semantic Features - Dr. Qi Dou

14:00 - 14:20 Feedforward Bayesian Inference for Crowdsourced Classification - Dr. Edoardo Manino

14:20 - 15:20 Poster Session (with coffee break)

15:20 - 15:40 Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds - Dr. Bo Yang / Dr. Ronald Clark 

15:40 - 16:00 Minimum Stein Discrepancy Estimators - Alessandro Barp

Invited Talks:


Self-Supervised Generalisation with Meta Auxiliary Learning
Speaker: Shikun Liu / Dr. Edward Johns

Abstract: Learning with auxiliary tasks can improve the ability of a primary task to generalise. However, this comes at the cost of manually labelling auxiliary data. We propose a new method which automatically learns appropriate labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to any further data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation.
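To make the double-gradient idea concrete, here is a minimal, hypothetical PyTorch sketch on toy linear models (all names, shapes, and the shared-trunk layout are illustrative assumptions, not the authors' released code): the label generator proposes soft auxiliary labels, the multi-task network takes a differentiable inner step, and the generator is updated through the primary loss after that step.

```python
import torch

torch.manual_seed(0)
x = torch.randn(32, 8)                      # a batch of inputs
y = torch.randint(0, 2, (32,))              # primary-task labels

U = torch.randn(8, 16, requires_grad=True)  # shared trunk of the multi-task net
W = torch.randn(16, 2, requires_grad=True)  # primary head
V = torch.randn(16, 4, requires_grad=True)  # auxiliary head (4 auxiliary classes)
G = torch.randn(8, 4, requires_grad=True)   # label-generation network

lr = 0.1
ce = torch.nn.functional.cross_entropy

# 1) The label-generation network proposes soft auxiliary labels.
y_aux = torch.softmax(x @ G, dim=1)

# 2) Inner step: the multi-task net trains on primary + auxiliary losses;
#    create_graph=True keeps the update differentiable with respect to G.
h = torch.tanh(x @ U)
loss_inner = ce(h @ W, y) + ce(h @ V, y_aux)
gU, gW = torch.autograd.grad(loss_inner, (U, W), create_graph=True)
U_new, W_new = U - lr * gU, W - lr * gW

# 3) Outer step: the primary loss after the inner update depends on G
#    through y_aux -- this "double gradient" trains the label generator.
loss_outer = ce(torch.tanh(x @ U_new) @ W_new, y)
gG, = torch.autograd.grad(loss_outer, G)
with torch.no_grad():
    G -= lr * gG
```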

Biography of Speakers:

Ed Johns

Dr. Edward Johns received a BA (2006) and MEng (2007) in Electrical and Information Engineering from Cambridge University, and a PhD (2014) in vision-based robot localisation from the Hamlyn Centre at Imperial College, working with Guang-Zhong Yang. Following his PhD, he spent a year as a postdoc at UCL working with Gabriel Brostow. In 2014, he returned to Imperial College as a founding member of the Dyson Robotics Lab with Andrew Davison, where he held a Dyson Fellowship and led the lab's robot manipulation team. In 2017, he was awarded a Royal Academy of Engineering Research Fellowship for his work on deep learning for robot manipulation. He was appointed as a Lecturer at Imperial College in 2018, and founded the Robot Learning Lab.

Shikun Liu

Shikun Liu is a first-year Ph.D. student in Machine Vision & Robotics at Imperial College London. He is currently working at the Dyson Robotics Lab, co-advised by Edward Johns and Andrew Davison. He is interested in and actively studying AutoML, self-supervised learning, and meta-learning, with applications in vision and robotics. He is a strong advocate of reproducible and open machine learning research.

Bridging Machine Learning and Logical Reasoning by Abductive Learning
Speaker: Dr. Wang-Zhou Dai

Abstract: Perception and reasoning are two representative abilities of intelligence that are integrated seamlessly during human problem-solving processes. In the area of artificial intelligence (AI), the two abilities are usually realised by machine learning and logic programming, respectively. However, the two categories of techniques were developed separately throughout most of the history of AI. In this paper, we present abductive learning, targeted at unifying the two AI paradigms in a mutually beneficial way, where the machine learning model learns to perceive primitive logic facts from data, while logical reasoning can exploit symbolic domain knowledge and correct the wrongly perceived facts for improving the machine learning models. Furthermore, we propose a novel approach to optimise the machine learning model and the logical reasoning model jointly. We demonstrate that by using abductive learning, machines can learn to recognise numbers and resolve unknown mathematical operations simultaneously from images of simple hand-written equations. Moreover, the learned models can be generalised to longer equations and adapted to different tasks, which is beyond the capability of state-of-the-art deep learning models.
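As a toy illustration of the abductive step, the sketch below (plain Python; the knowledge base, names, and brute-force search are simplified assumptions, not the authors' system) minimally revises the perceived facts until they satisfy the background knowledge. In the full loop, the revised facts would then be used to retrain the perception model.

```python
from itertools import combinations, product

def abduce(perceived, consistent):
    """Return a minimal revision of the perceived binary facts that
    satisfies the background knowledge, preferring fewer changes."""
    n = len(perceived)
    for k in range(n + 1):                    # revise as few facts as possible
        for idx in combinations(range(n), k):
            for vals in product([0, 1], repeat=k):
                candidate = list(perceived)
                for i, v in zip(idx, vals):
                    candidate[i] = v
                if consistent(candidate):
                    return candidate
    return None                               # no consistent revision exists

# Toy knowledge base: a fact sequence is consistent iff it has even parity.
consistent = lambda facts: sum(facts) % 2 == 0
print(abduce([1, 0, 0], consistent))          # -> [0, 0, 0] (one fact flipped)
```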

Biography of Speaker:

Wang-Zhou Dai

Dr. Wang-Zhou Dai received his Ph.D. in Computer Science from Nanjing University in 2019, supervised by Prof. Zhi-Hua Zhou. He then joined Prof. Stephen Muggleton's group in the Department of Computing at Imperial College London as a Research Associate in 2019. His research interest is in machine learning, a sub-field of artificial intelligence. Currently, he is interested in combining sub-symbolic machine learning and logic-based symbolic machine learning.

 

Domain Generalization via Model-Agnostic Learning of Semantic Features
Speaker: Dr. Qi Dou

Abstract: Generalization capability to unseen domains is crucial for machine learning models when deploying to real-world conditions. We investigate the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics. We adopt a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Further, we introduce two complementary losses which explicitly regularize the semantic structure of the feature space. Globally, we align a derived soft confusion matrix to preserve general knowledge about inter-class relationships. Locally, we promote domain-independent class-specific cohesion and separation of sample features with a metric-learning component. The effectiveness of our method is demonstrated with new state-of-the-art results on two common object recognition benchmarks. Our method also shows consistent improvement on a medical image segmentation task.
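The gradient-based meta-train/meta-test procedure can be outlined in a short, hypothetical PyTorch-style sketch (the function and parameter names are illustrative, and the paper's full objective additionally includes the global class-alignment and local metric-learning losses described above):

```python
import random
import torch

def episode(theta, domains, loss_fn, inner_lr=0.01):
    """One meta-learning episode over a list of source domains.
    theta: list of parameter tensors; loss_fn(theta, domain) -> scalar loss."""
    # Hold one source domain out to simulate an unseen target domain.
    held_out = random.randrange(len(domains))
    meta_train = [d for i, d in enumerate(domains) if i != held_out]
    meta_test = domains[held_out]

    # Inner (meta-train) step, kept differentiable for the outer update.
    loss_tr = sum(loss_fn(theta, d) for d in meta_train)
    grads = torch.autograd.grad(loss_tr, theta, create_graph=True)
    theta_inner = [p - inner_lr * g for p, g in zip(theta, grads)]

    # Outer objective: meta-train loss plus the loss on the held-out domain
    # evaluated *after* the inner step, exposing the optimization to domain
    # shift. (The paper adds its two semantic-feature losses at this point.)
    return loss_tr + loss_fn(theta_inner, meta_test)
```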

Biography of Speaker:

Qi Dou

Dr. Qi Dou is a post-doc at the Department of Computing, Imperial College London, supervised by Dr. Ben Glocker in BioMedIA. She obtained her Ph.D. from the Department of Computer Science and Engineering, The Chinese University of Hong Kong (CUHK), supervised by Prof. Pheng Ann Heng, in July 2018. She received her B.Eng. degree in Biomedical Engineering from Beihang University (BUAA) in Beijing in June 2014. She also worked with Dr. Yan Xu on undergraduate research at MSRA. Her research interests are medical image analysis, deep learning, and machine learning.


Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds
Speakers: Dr. Ronald Clark / Dr. Bo Yang

Abstract: We propose a novel, conceptually simple and general framework for instance segmentation on 3D point clouds. Our method, called 3D-BoNet, follows the simple design philosophy of per-point multilayer perceptrons (MLPs). The framework directly regresses 3D bounding boxes for all instances in a point cloud, while simultaneously predicting a point-level mask for each instance. It consists of a backbone network followed by two parallel network branches for 1) bounding box regression and 2) point mask prediction. 3D-BoNet is single-stage, anchor-free and end-to-end trainable. Moreover, it is remarkably computationally efficient as, unlike existing approaches, it does not require any post-processing steps such as non-maximum suppression, feature sampling, clustering or voting. Extensive experiments show that our approach surpasses existing work on both ScanNet and S3DIS datasets while being approximately 10x more computationally efficient. Comprehensive ablation studies demonstrate the effectiveness of our design.
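The overall two-branch layout can be sketched as a small PyTorch module (the layer sizes, instance count, and box parameterisation below are illustrative assumptions, not the authors' 3D-BoNet code):

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Hypothetical skeleton: shared per-point MLP backbone feeding a
    box-regression branch and a point-mask branch in parallel."""
    def __init__(self, max_instances=24, feat_dim=128):
        super().__init__()
        # Shared per-point MLP, applied to every point independently.
        self.backbone = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU())
        # Branch 1: one 3D box (two corners) plus a score per instance slot.
        self.box_head = nn.Linear(feat_dim, max_instances * 7)
        # Branch 2: per-point, per-instance mask logits.
        self.mask_head = nn.Linear(feat_dim, max_instances)
        self.max_instances = max_instances

    def forward(self, points):                    # points: (B, N, 3)
        feats = self.backbone(points)             # (B, N, feat_dim)
        global_feat = feats.max(dim=1).values     # order-invariant pooling
        boxes = self.box_head(global_feat)        # (B, max_instances * 7)
        boxes = boxes.view(-1, self.max_instances, 7)
        masks = self.mask_head(feats)             # (B, N, max_instances)
        return boxes, masks

net = TwoBranchNet()
boxes, masks = net(torch.randn(2, 1024, 3))       # 2 clouds of 1024 points
```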

Biography of Speakers:

Ronald Clark

Dr. Ronald Clark is a research fellow at Imperial College London. His work lies at the intersection of computer vision and machine learning. His research mainly focuses on allowing machines to interpret and understand the 3D world around them. He obtained his PhD from the University of Oxford Department of Computer Science. Before coming to the UK, he did an MSc in electrical engineering at the University of the Witwatersrand in South Africa. He has received numerous international awards for his research, including a best paper honourable mention at the Conference on Computer Vision and Pattern Recognition (CVPR). His research has been supported by a number of prestigious fellowships, including an EPSRC doctoral studentship, a Dyson Fellowship and, most recently, an Imperial College Early Career Fellowship.

Bo Yang

Dr. Bo Yang defended his PhD in the Department of Computer Science at the University of Oxford, supervised by Profs. Niki Trigoni and Andrew Markham. Prior to Oxford, he obtained an M.Phil. degree from The University of Hong Kong, where he was supervised by Prof. S.H. Choi, and a B.Eng. degree from Beijing University of Posts and Telecommunications.

 

Fast Decomposable Submodular Function Minimization using Constrained Total Variation
Speaker: Dr. K S Sesh Kumar

Abstract: We consider the problem of minimizing the sum of submodular set functions, assuming minimization oracles for each summand function. Most existing approaches reformulate the problem as the convex minimization of the sum of the corresponding Lovász extensions and the squared Euclidean norm, leading to algorithms requiring total variation oracles of the summand functions; without further assumptions, these more complex oracles require many calls to the simpler minimization oracles often available in practice. In this paper, we consider a modified convex problem requiring a constrained version of the total variation oracles that can be solved with significantly fewer calls to the simple minimization oracles. We support our claims by showing results on graph cuts for 2D and 3D graphs.
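As background for the reformulation, the Lovász extension of a submodular function can be evaluated with a simple sort, as in this numpy sketch (a generic illustration of the convex surrogate, not the paper's constrained-TV algorithm):

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász extension of a set function F (with F(empty)=0)
    at w, by the greedy algorithm: sort coordinates in decreasing order and
    accumulate marginal gains weighted by the coordinates of w."""
    order = np.argsort(-w)                 # decreasing order of w
    value, chosen, prev = 0.0, [], 0.0
    for k in order:
        chosen.append(k)
        cur = F(frozenset(chosen))
        value += w[k] * (cur - prev)       # marginal gain of adding element k
        prev = cur
    return value

# Example: the cut function of the chain graph 0-1-2 (submodular). Its
# Lovász extension is the total variation sum |w_u - w_v| over edges.
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
print(lovasz_extension(cut, np.array([0.9, 0.1, 0.5])))  # -> 1.2 = 0.8 + 0.4
```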

Biography of Speaker:

K S Sesh Kumar

Dr. K S Sesh Kumar is a research fellow at the Data Science Institute, Imperial College London. Prior to this, he was a research fellow funded by the Leverhulme Centre for the Future of Intelligence (CFI) at Imperial College London, working with Prof. Marc Deisenroth and Dr. Adrian Weller. He was a post-doctoral researcher in the group of Vladimir Kolmogorov at IST Austria until October 2017. He finished his PhD under the supervision of Francis Bach at INRIA, SIERRA, in September 2016. He received an MVA Masters from ENS-Cachan in 2013 and a Bachelors degree in Computer Science and Engineering from IIIT Hyderabad in 2003. His research revolves around exploring the links between different topics in discrete optimization and machine learning.

Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
Speaker: Giulia Luise on behalf of Dr. Carlo Ciliberto

Abstract: We present a novel algorithm to estimate the barycenter of arbitrary probability distributions with respect to the Sinkhorn divergence. Based on a Frank-Wolfe optimization strategy, our approach proceeds by populating the support of the barycenter incrementally, without requiring any pre-allocation. We consider discrete as well as continuous distributions, proving convergence rates of the proposed algorithm in both settings. Key elements of our analysis are a new result showing that the Sinkhorn divergence on compact domains has Lipschitz continuous gradient with respect to the Total Variation and a characterization of the sample complexity of Sinkhorn potentials. Experiments validate the effectiveness of our method in practice.
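For background, the entropic-OT inner solver underlying the Sinkhorn divergence is a simple matrix-scaling iteration; the numpy sketch below illustrates it on two small histograms (a generic sketch of that building block, not the talk's Frank-Wolfe barycenter algorithm):

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, iters=500):
    """Entropic OT cost between histograms a, b with cost matrix C,
    via Sinkhorn's alternating scaling of the Gibbs kernel."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):                # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # (approximately) optimal plan
    return np.sum(P * C)

# Two histograms on the points {0, 1, 2} with squared-distance cost.
x = np.arange(3.0)
C = (x[:, None] - x[None, :]) ** 2
a = np.array([0.5, 0.5, 0.0])
b = np.array([0.0, 0.5, 0.5])
print(sinkhorn_cost(a, b, C))             # close to 1.0: each mass shifts 1 step
```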

Biography of Speaker:

Giulia Luise

Giulia Luise is a third-year PhD student in Machine Learning at UCL, London, under the supervision of Massimiliano Pontil and Carlo Ciliberto. Her main research interest focuses on the interplay of optimal transport and machine learning. She is also interested in structured prediction and surrogate frameworks.

 


Feedforward Bayesian Inference for Crowdsourced Classification
Speaker: Dr. Edoardo Manino on behalf of Professor Nicholas Jennings

Abstract: A key challenge in crowdsourcing is inferring the ground truth from noisy and unreliable data. To do so, existing approaches rely on collecting redundant information from the crowd and aggregating it with some probabilistic method. However, such methods are oftentimes computationally inefficient, restricted to some specific settings, or lacking theoretical guarantees. In this paper, we revisit the problem of binary classification from crowdsourced data. Specifically, we propose Streaming Bayesian Inference for Crowdsourcing (SBIC), a new algorithm that does not suffer from any of these limitations. First, SBIC has low complexity and can be used in a real-time online setting. Second, SBIC has the same accuracy as the best state-of-the-art algorithms in all settings. Third, SBIC has provable asymptotic guarantees both in the online and offline settings.
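As a simplified illustration of streaming Bayesian aggregation, the sketch below pools binary votes under a "one-coin" worker model with known reliabilities (a toy caricature in numpy; SBIC itself also handles unknown worker skill and comes with the guarantees above):

```python
import numpy as np

def posterior_true_label(labels, skills, prior=0.5):
    """One streaming pass over (vote, skill) pairs, accumulating log-odds.
    Each worker votes y in {+1, -1} and is correct with probability p."""
    log_odds = np.log(prior / (1 - prior))
    for y, p in zip(labels, skills):
        log_odds += y * np.log(p / (1 - p))   # Bayesian evidence of one vote
    return 1 / (1 + np.exp(-log_odds))        # P(true label = +1 | votes)

# Three workers vote +1, +1, -1 with different reliabilities.
print(posterior_true_label([+1, +1, -1], [0.9, 0.7, 0.6]))  # about 0.93
```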

Biography of Speaker:

Edoardo Manino

Dr. Edoardo Manino is a research fellow at the University of Southampton. Currently, he is finishing his PhD in machine learning and crowdsourcing under the supervision of Prof. Nicholas R. Jennings and Dr. Long Tran-Thanh. His research interests range from Bayesian learning to algorithmic game theory and, more recently, influence maximisation on social networks.

 

Minimum Stein Discrepancy Estimators
Speaker: Alessandro Barp

Abstract: When maximum likelihood estimation is infeasible, one often turns to score matching, contrastive divergence, or minimum probability flow learning to obtain tractable parameter estimates. We provide a unifying perspective of these techniques as minimum Stein discrepancy estimators, and use this lens to design new diffusion kernel Stein discrepancy (DKSD) and diffusion score matching (DSM) estimators with complementary strengths. We establish the consistency, asymptotic normality, and robustness of DKSD and DSM estimators, derive stochastic Riemannian gradient descent algorithms for their efficient optimization, and demonstrate their advantages over score matching in models with non-smooth densities or heavy-tailed distributions.
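To make the estimator family concrete, here is a small numpy sketch of a plain (non-diffusion) kernel Stein discrepancy against a standard normal target with an RBF kernel; DKSD and DSM generalize this construction with diffusion matrices (the target, kernel, and bandwidth here are illustrative assumptions):

```python
import numpy as np

def ksd_squared(x, h=1.0):
    """Biased V-statistic estimate of KSD^2 between 1-D samples x and N(0,1),
    using the Stein kernel built from an RBF kernel of bandwidth h."""
    s = -x                                   # score of N(0,1): d/dx log p(x)
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))           # k(x, x')
    dk_dx = -d / h**2 * k                    # d k / d x
    dk_dy = d / h**2 * k                     # d k / d x'
    d2k = (1 / h**2 - d**2 / h**4) * k       # d^2 k / (d x d x')
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * dk_dy + s[None, :] * dk_dx + d2k)
    return u.mean()

rng = np.random.default_rng(0)
print(ksd_squared(rng.normal(size=200)))         # near 0: matches the target
print(ksd_squared(rng.normal(3.0, 1.0, 200)))    # larger: mismatched mean
```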

Biography of Speaker:

Alessandro Barp

Alessandro Barp is a PhD student at Imperial College London and the Alan Turing Institute, supervised by Prof. Mark Girolami and Prof. Damiano Brigo. He works on the geometry of information and of Monte Carlo methods.

 

 
Localized Structured Prediction
Speaker: Dr. Carlo Ciliberto

Abstract: Key to structured prediction is exploiting the problem structure to simplify the learning process. A major challenge arises when data exhibit a local structure (e.g., are made by "parts") that can be leveraged to better approximate the relation between (parts of) the input and (parts of) the output. Recent literature on signal processing, and in particular computer vision, has shown that capturing these aspects is indeed essential to achieve state-of-the-art performance. While such algorithms are typically derived on a case-by-case basis, in this work we propose the first theoretical framework to deal with part-based data from a general perspective. We derive a novel approach to deal with these problems and study its generalization properties within the setting of statistical learning theory. Our analysis is novel in that it explicitly quantifies the benefits of leveraging the part-based structure of the problem with respect to the learning rates of the proposed estimator. 

Biography of Speaker:

Carlo Ciliberto

In 2008, Dr. Carlo Ciliberto graduated in Mathematics at the University of Roma Tre, Rome, Italy, and in 2012 he obtained a PhD in humanoid robotics, computer vision, and machine learning at the Italian Institute of Technology, Genova, Italy. He was a postdoctoral fellow in the Poggio Lab at the Massachusetts Institute of Technology from 2012 to 2016, and later a Research Associate at UCL from 2017 to 2018, where he is now an Honorary Lecturer. In 2018, he became a Lecturer in the EEE Department at Imperial College.