Imperial researchers publish 17 papers at the NeurIPS 2022 conference
Seventeen papers by Imperial researchers from four departments will appear in the 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Imperial researchers from three faculties and four departments had their work accepted to the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). These seventeen papers, from the Department of Computing, the Department of Electrical and Electronic Engineering, the Department of Mathematics, and the Business School, demonstrate the breadth and depth of machine learning research at Imperial.
The accepted papers are:
- Artem Artemev, Tilman Roeder, Mark van der Wilk. Memory safe computations with XLA compiler
- John Birge, Xiaocheng Li, Chunlin Sun. Learning from Stochastically Revealed Preference
- Yanzhi Chen, Weihao Sun, Yingzhen Li, Adrian Weller. Scalable Infomin Learning
- Jose Pablo Folch, Shiqiang Zhang, Robert M. Lee, Behrang Shafei, David Walz, Calvin Tsay, Mark van der Wilk, Ruth Misener. SnAKe: Bayesian Optimization with Pathwise Exploration
- Marton Havasi, Sonali Parbhoo, Finale Doshi-Velez. Addressing Leakage in Concept Bottleneck Models
- Clint Ho, Marek Petrik, Wolfram Wiesemann. Robust Phi-Divergence MDPs
- Alexander Immer, Tycho van der Ouderaa, Gunnar Rätsch, Vincent Fortuin, Mark van der Wilk. Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
- Yichong Leng*, Zehua Chen*, Junliang Guo, Haohe Liu, Jiawei Chen, Xu Tan, Danilo Mandic, Lei He, Xiang-Yang Li, Tao Qin, Sheng Zhao, Tie-Yan Liu. BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis
- Shang Liu, Jiashuo Jiang, Xiaocheng Li. Non-stationary Bandits with Knapsacks
- Zijing Ou, Tingyang Xu, Qinliang Su, Yingzhen Li, Peilin Zhao, Yatao Bian. Learning Set Functions Under the Optimal Subset Oracle via Equivariant Variational Inference
- Cristopher Salvi, Maud Lemercier, Andris Gerasimovics. Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics
- Aras Selvi, Mohammadreza Belbasi, Martin Haugh, Wolfram Wiesemann. Wasserstein Logistic Regression with Mixed Features
- Harald Strömfelt, Luke Dickens, Artur Garcez, Alessandra Russo. Formalising Coherence and Consistency Applied to Transfer Learning in Neuro-Symbolic Autoencoders
- Ryutaro Tanno, Melanie F. Pradier, Aditya Nori, Yingzhen Li. Fixing Neural Networks by Leaving the Right Past Behind
- Alexander Thebelt, Calvin Tsay, Robert M. Lee, Nathan Sudermann-Merx, David Walz, Behrang Shafei, Ruth Misener. Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces
- Ben Tu, Axel Gandy, Nikolas Kantas, Behrang Shafei. Joint Entropy Search for Multi-Objective Bayesian Optimization
- Tycho van der Ouderaa, David W. Romero, Mark van der Wilk. Relaxing Equivariance Constraints with Non-stationary Continuous Filters
Article text (excluding photos or graphics) © Imperial College London.
Photos and graphics subject to third party copyright used with permission or © Imperial College London.