Schedule
The start and end times are 11am -- 6pm GMT / 12pm -- 7pm CET / 6am -- 1pm EST / 3am -- 10am PST / 8pm -- 3am JST. Our friends in the Americas are welcome to join the later sessions, and our friends in eastern time zones are welcome to join the earlier sessions.
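For attendees in other regions, the listed times can be reproduced with Python's standard `zoneinfo` module; this is just an illustrative sketch (the date used below is a placeholder winter date, not a statement of the workshop's actual date):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Event start: 11am GMT (placeholder date; assumes a winter date so that
# EST/PST, not EDT/PDT, are in effect, matching the times listed above)
start = datetime(2020, 12, 11, 11, 0, tzinfo=ZoneInfo("Etc/GMT"))

# Print the local start time in a few attendee time zones
for tz in ["CET", "America/New_York", "America/Los_Angeles", "Asia/Tokyo"]:
    print(tz, start.astimezone(ZoneInfo(tz)).strftime("%H:%M"))
```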
The schedule interleaves main conference events with our invited talks, as well as Gather.Town poster presentations to allow for networking and socialising.
| Time (GMT) | Time (CET) | Event | Speaker | Title |
| --- | --- | --- | --- | --- |
| 11.00 -- 11.05 | 12.00 -- 12.05 | Welcome and Opening Remarks | | |
| 11.05 -- 11.25 | 12.05 -- 12.25 | Invited talk | Mark van der Wilk (Imperial College London) | Bayesian Model Selection in Deep Learning |
| 11.30 -- 11.50 | 12.30 -- 12.50 | Invited talk | Mihaela van der Schaar (University of Cambridge) | Bayesian Uncertainty Estimation under Covariate Shift: Application to Cross-population Clinical Prognosis |
| 11.55 -- 13.00 | 12.55 -- 14.00 | Social + Posters | | |
| 13.00 -- 14.40 | 14.00 -- 15.40 | Lunch break (NeurIPS Breiman Lecture: Causal Learning) | | |
| 14.40 -- 15.00 | 15.40 -- 16.00 | Invited talk | Daniela Rus (MIT CSAIL) | Uncertainty in Transportation |
| 15.05 -- 15.25 | 16.05 -- 16.25 | Invited talk | David Duvenaud (University of Toronto) | Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations |
| 15.30 -- 15.50 | 16.30 -- 16.50 | Invited talk | Tal Arbel (MILA) | Modelling and Propagating Uncertainties in Machine Learning for Medical Images of Patients with Neurological Diseases |
| 15.55 -- 16.15 | 16.55 -- 17.15 | Invited talk | Zack Chase Lipton (CMU) | What are we so uncertain about? Broadening the scope of epistemic uncertainty in the application of machine learning |
| 16.20 -- 16.40 | 17.20 -- 17.40 | Invited talk | Durk Kingma (Google) | On Diffusion-Based Generative Models |
| 16.45 -- 16.50 | 17.45 -- 17.50 | Closing remarks | | |
| 16.50 -- 18.00 | 17.50 -- 19.00 | Social + Posters | | |
| 18.00 -- 19.00 | 19.00 -- 20.00 | NeurIPS posters | | |
Accepted Posters
Posters and socials take place in Gather.Town; please see the instructions below. The password to access the space will be shared with registered attendees when the event starts. Posters will be uploaded to this website after the event.
| Title | Authors | Poster Location |
| --- | --- | --- |
Uncertainty via Stochastic Gradient Langevin Boosting: Bayesian Gradient Boosted Decision Trees | Andrey Malinin, Liudmila Prokhorenkova, Alexei Ustimenko | A1 |
Know Where to Drop Your Weights: Towards Faster Uncertainty Estimation | Akshatha Kamath, Dwaraknath Ganeshwar, Matias Valdenegro-Toro | A10 |
Last Layer Marginal Likelihood for Invariance Learning | Pola Schwobel, Martin Jorgensen, Mark van der Wilk | A11 |
One Versus All for Deep Neural Network Incertitude (OVNNI) Quantification | Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, Isabelle Bloch | A12 |
Encoding the Latent Posterior of Bayesian Neural Networks for Uncertainty Quantification | Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, Isabelle Bloch | A13 |
Identifying Causal-effect Inference Failure Using Uncertainty-aware Models | Andrew Jesson, Soren Mindermann, Uri Shalit, Yarin Gal | A14 |
End-to-End Semi-Supervised Learning for Differentiable Particle Filters | Hao Wen, Xiongjie Chen, Georgios Papagiannis, Conghui Hu, Yunpeng Li | A15 |
Neural Empirical Bayes: Source Distribution Estimation and its Applications to Simulation-Based Inference | Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe | A16 |
Uncertainty in Structured Prediction: Pushing the Scale Limits of Uncertainty | Andrey Malinin, Mark Gales | A2 |
TyXe: Pyro-Based Bayesian Neural Networks for Pytorch Users in 5 Lines of Code | Hippolyt Ritter, Theofanis Karaletsos | A3 |
Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference | Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antoran, Jose Miguel Hernandez-Lobato | A4 |
Sparse Encoding for More-Interpretable Feature-Selecting Representations in Probabilistic (Poisson) Matrix Factorization | Joshua C. Chang, Patrick Fletcher, Jungmin Han, Ted L. Chang, Shashaank Vattikuti, Bart Desmet, Ayah Zirikly, Carson C. Chow | A5 |
On Signal-to-noise Ratio Issues in Variational Inference for Deep Gaussian Processes | Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth | A6 |
Rethinking Function-Space Variational Inference in Bayesian Neural Networks | Tim G. J. Rudner, Zonghao Chen, Yarin Gal | A7 |
Outcome-Driven Reinforcement Learning via Variational Inference | Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine | A8 |
A Probabilistic Perspective on Pathologies in Behavioural Cloning for Reinforcement Learning | Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal | A9 |
Self Normalizing Flows | T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forre, Max Welling | B1 |
Fixing Asymptotic Uncertainty of BNNs with Infinite ReLU Features | Agustinus Kristiadi, Matthias Hein, Philipp Hennig | B10 |
Deep Kernel Processes | Laurence Aitchison, Sebastian Ober, Adam X. Yang | B11 |
Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations | Sebastian Farquhar, Lewis Smith, Yarin Gal | B12 |
Augmented Sliced Wasserstein Distances | Xiongjie Chen, Yongxin Yang, Yunpeng Li | B15 |
Bayesian Active Learning with Pretrained Language Models | Katerina Margatina, Loic Barrault, Nikos Aletras | B16 |
ThompsonBALD: Bayesian Batch Active Learning for Deep Learning via Thompson Sampling | Jaeik Jeon, Brooks Paige | B2 |
Learning under Model Misspecification: Applications to Variational and Ensemble Methods | Andres R. Masegosa | B7 |
Global Canopy Height Regression from Space-borne LiDAR | Nico Lang, Nikolai Kalischek, John Armston, Konrad Schindler, Ralph Dubayah, Jan Dirk Wegner | B8 |
Sparse Uncertainty Representation in Deep Learning with Inducing Weights | Hippolyt Ritter, Martin Kukla, Cheng Zhang, Yingzhen Li | B9 |
Designing Priors for Bayesian Neural Networks | Tim Pearce, Russell Tsuchida, Alexandra Brintrup, Mohamed Zaki, Andy Neely and Andrew Y.K. Foong | C1 |
Deep Evidential Regression | Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus | C16 |
Evidential Deep Learning for Guided Molecular Property Prediction and Discovery | Ava P. Soleimany, Alexander Amini, Samuel Goldman, Daniela Rus, Sangeeta N. Bhatia, Connor W. Coley | D1 |
Depth Uncertainty in Neural Networks | Javier Antoran, James Urquhart Allingham, Jose Miguel Hernandez-Lobato | D10 |
Decentralized Langevin Dynamics for Bayesian Learning | Anjaly Parayil, He Bai, Jemin George, Prudhvi Gurram | D11 |
i-DenseNets | Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai | D12 |
BayesFlow: Scalable Amortized Bayesian Inference with Invertible Networks | Stefan T. Radev, Ullrich Kothe | D16 |
Wavelet Flow: Fast Training of High Resolution Normalizing Flows | Jason J. Yu, Konstantinos G. Derpanis, Marcus A. Brubaker | D2 |
General Invertible Transformations for Flow-based Generative Models | Jakub M. Tomczak | D3 |
SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows | Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling | D4 |
A Bayesian Perspective on Training Speed and Model Selection | Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk | D6 |
The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks | Takuo Matsubara, Chris Oates, Francois-Xavier Briol | D7 |
Bayesian Neural Network Priors Revisited | Vincent Fortuin, Adria Garriga-Alonso, Florian Wenzel, Gunnar Ratsch, Richard Turner, Mark van der Wilk, Laurence Aitchison | D8 |
Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise | Jannik Schmitt, Stefan Roth | D9 |
Clue: A Method for Explaining Uncertainty Estimates | Javier Antoran, Umang Bhatt, Tameem Adel, Adrian Weller, Jose Miguel Hernandez-Lobato | E1 |
Temporal-hierarchical VAE for Heterogenous and Missing Data Handling | Daniel Barrejon-Moreno, Pablo M. Olmos, Antonio Artes-Rodriguez | E10 |
Efficient Low Rank Gaussian Variational Inference for Neural Networks | Marcin B. Tomczak, Siddharth Swaroop, Richard E. Turner | E11 |
Ensemble Distribution Distillation: Ensemble Uncertainty via a Single Model | Andrey Malinin, Sergey Chervontsev, Ivan Provilkov, Bruno Mlodozeniec, Mark Gales | E12 |
Towards a Unified Framework for Bayesian Neural Networks in PyTorch | Audrey Flower, Beliz Gokkaya, Sahar Karimi, Jessica Ai, Ousmane Dia, Ehsan Emamjomeh-Zadeh, Ilknur Kaynar Kabul, Erik Meijer, Adly Templeton | E13 |
Feature Space Singularity for Out-of-Distribution Detection | Haiwen Huang, Zhihan Li, Lulu Wang, Sishuo Chen, Bin Dong, Xinyu Zhou | E14 |
Hierarchical Gaussian Processes with Wasserstein-2 Kernels | Sebastian G. Popescu, David J. Sharp, James H. Cole, and Ben Glocker | E15 |
Sample-efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining | Austin Tripp, Erik Daxberger, Jose Miguel Hernandez-Lobato | E16 |
A Comparative Evaluation of Methods for Epistemic Uncertainty Estimation | Lisha Chen, Hanjing Wang, Shiyu Chang, Hui Su, Qiang Ji | E2 |
Estimating Model Uncertainty of Neural Networks in Sparse Information Form | Jongseok Lee, Matthias Humt, Jianxing Feng, Rudolph Triebel | E3 |
Simple & Principled Uncertainty Estimation with Single Deep Model via Distance Awareness | Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan | E4 |
Global Inducing Point Variational Posteriors for Bayesian Neural Networks and Deep Gaussian Processes | Sebastian W. Ober, Laurence Aitchison | E5 |
Unpacking Information Bottlenecks | Andreas Kirsch, Clare Lyle, Yarin Gal | E6 |
Revisiting the Train Loss: An Efficient Performance Estimator for Neural Architecture Search | Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal | E7 |
Using hamiltorch to Perform HMC over BNNs with Symmetric Splitting | Adam D. Cobb, Brian Jalaian | E8 |
Cross-Pollinated Deep Ensembles | Alexander Lyzhov, Daria Voronkova, Dmitry Vetrov | E9 |
Multi-headed Bayesian U-Net | Moritz Fuchs, Simon Kiefhaber, Hendrik Mehrtens, Faraz Zaidi, Camila Gonzalez, Arjan Kuijper, Anirban Mukhopadhyay | F1 |
Bayesian Active Learning for Wearable and Mobile Health | Gautham Krishna Gudur, Abhijith Ragav, Prahalathan Sundaramoorthy, Venkatesh Umaashankar | F10 |
Hierarchical Gaussian Process Priors for Bayesian Neural Networks | Theofanis Karaletsos, Thang D. Bui | F11 |
Bayesian Neural Networks for Acoustic Mosquito Detection | Ivan Kiskin, Adam D. Cobb, Steve Roberts | F12 |
The Hidden Uncertainty in a Neural Network's Activations | Janis Postels, Hermann Blum, Cesar Cadena, Roland Siegwart, Luc van Gool, Federico Tombari | F13 |
Mixed-curvature Conditional Prior VAE | Maciej Falkiewicz | F14 |
Bayesian BERT for Trustful Hate Speech Detection | Kristian Miok, Blaz Skrlj, Daniela Zaharie, Marko Robnik-Sikonja | F15 |
Uncertainty Quantification for Spectral Virtual Diagnostic | Owen Convery, Lewis Smith, Yarin Gal, Adi Hanuka | F16 |
Bayesian Multi-task Learning: Fully Differentiable Model Discovery | Gert-Jan Both | F2 |
Towards Principled Prior Assumption in Deep Learning | Lassi Meronen, Martin Trapp, Arno Solin | F3 |
Perfect Density Models Cannot Guarantee Anomaly Detection | Charline Le Lan, Laurent Dinh | F4 |
Semi-supervised Learning of Galaxy Morphology Using Equivariant Transformer Variational Autoencoders | Mizu Nishikawa-Toomey, Lewis Smith, Yarin Gal | F5 |
Bayesian Deep Ensembles via the Neural Tangent Kernel | Bobby He, Balaji Lakshminarayanan, Yee Whye Teh | F6 |
Robustness of Bayesian Neural Networks to Gradient-Based Attacks | Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti | F7 |
DrugEx2: Drug Molecule De Novo Design by Multi-Objective Reinforcement Learning for Polypharmacology | X. Liu, K. Ye, H.W.T van Vlijmen, M.T.M. Emmerich, A.P. IJzerman, G.J.P. van Westen | F8 |
Why Aren't Bootstrapped Neural Networks Better? | Jeremy Nixon, Dustin Tran, Balaji Lakshminarayanan | F9 |