Schedule & Accepted Papers

Schedule

The start and end times are 11am -- 7pm GMT / 12pm -- 8pm CET / 6am -- 2pm EST / 3am -- 11am PST / 8pm -- 4am JST. Our friends in the Americas are welcome to join the later sessions, and our friends in eastern time zones are welcome to join the earlier sessions.

The schedule interleaves invited speakers, contributed talks, and gather.town poster presentations to allow for networking and socialising.

11.00 - 11.10 (GMT)
12.00 - 12.10 (CET)
Welcome and Opening Remarks
11.10 - 11.30 (GMT)
12.10 - 12.30 (CET)
Invited talk: Emtiyaz Khan, Thomas Möllenhoff, Dharmesh Tailor, Siddharth Swaroop, "Adaptive and Robust Learning with Bayes"
11.30 - 11.50 (GMT)
12.30 - 12.50 (CET)
Invited talk: Yee Whye Teh, "A Bayesian Perspective on Meta-Learning"
11.50 - 12.10 (GMT)
12.50 - 13.10 (CET)
Invited talk: speakers TBD, "Shifts Challenge: Robustness and Uncertainty under Real-World Distributional Shift"
12.10 - 12.20 (GMT)
13.10 - 13.20 (CET)
Contributed talk: TBD
12.20 - 12.30 (GMT)
13.20 - 13.30 (CET)
Contributed talk: TBD
12.30 - 13.30 (GMT)
13.30 - 14.30 (CET)
Lunch Break (+ Posters)
13.30 - 13.50 (GMT)
14.30 - 14.50 (CET)
Invited talk: Atılım Güneş Baydin, Francesco Pinto, "Spacecraft Collision Avoidance with Bayesian Deep Learning"
13.50 - 14.10 (GMT)
14.50 - 15.10 (CET)
Invited talk: Danilo Rezende, Peter Wirnsberger, "Inference & Sampling with Symmetries"
14.10 - 14.30 (GMT)
15.10 - 15.30 (CET)
Invited talk: Asja Fischer, Sina Däubener, "Bayesian Neural Networks, Adversarial Attacks, and How the Amount of Samples Matters"
14.30 - 16.00 (GMT)
15.30 - 17.00 (CET)
Poster Session
16.00 - 16.20 (GMT)
17.00 - 17.20 (CET)
Invited talk: Adi Hanuka, Owen Convery, title TBD
16.20 - 16.30 (GMT)
17.20 - 17.30 (CET)
Contributed talk: TBD
16.30 - 16.40 (GMT)
17.30 - 17.40 (CET)
Contributed talk: TBD
16.40 - 17.00 (GMT)
17.40 - 18.00 (CET)
Invited talk: speakers TBD, "Evaluating Approximate Inference in Bayesian Deep Learning"
17.00 - 17.20 (GMT)
18.00 - 18.20 (CET)
Invited talk: Tamara Broderick, Ryan Giordano, "An Automatic Finite-Data Robustness Metric for Bayes and Beyond: Can Dropping a Little Data Change Conclusions?"
17.20 - 17.25 (GMT)
18.20 - 18.25 (CET)
Closing Remarks
17.25 - 19.00 (GMT)
18.25 - 20.00 (CET)
Social + Posters

Accepted Abstracts

Note: links to the papers will be added closer to the workshop date (we know they are currently broken!).

Authors Title
Edith Zhang, David Blei Unveiling Mode-connectivity of the ELBO Landscape paper
Daniele Bracale, Stefano Favaro, Sandra Fortini, Stefano Peluchetti Infinite-channel deep convolutional Stable neural networks paper
Luong-Ha Nguyen, James-A. Goulet Analytically Tractable Inference in Neural Networks - An Alternative to Backpropagation paper
Tristan Cinquin, Alexander Immer, Max Horn, Vincent Fortuin Pathologies in Priors and Inference for Bayesian Transformers paper
Weichang Yu, Sara Wade, Howard Bondell, Lamiae Azizi Non-stationary Gaussian process discriminant analysis with variable selection for high-dimensional functional data paper
Ginevra Carbone, Luca Bortolussi, Guido Sanguinetti Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks paper
Tianci Liu, Jeffrey Regier An Empirical Comparison of GANs and Normalizing Flows for Density Estimation paper
Miles Martinez, John Pearson Reproducible, incremental representation learning with Rosetta VAE paper
Agustinus Kristiadi, Matthias Hein, Philipp Hennig Being a Bit Frequentist Improves Bayesian Neural Networks paper
Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness paper
Neil Band, Tim G. J. Rudner, Qixuan Feng, Angelos Filos, Zachary Nado, Michael W Dusenberry, Ghassen Jerfel, Dustin Tran, Yarin Gal Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks paper
Vincent Fortuin, Mark Collier, Florian Wenzel, James Urquhart Allingham, Jeremiah Zhe Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou Deep Classifiers with Label Noise Modeling and Distance Awareness paper
Vitaliy Kinakh, Mariia Drozdova, Guillaume Quétant, Tobias Golling, Slava Voloshynovskiy Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN paper
Mingtian Zhang, Peter Noel Hayes, David Barber Generalization Gap in Amortized Inference paper
Kumud Lakara, Akshat Bhandari, Pratinav Seth, Ujjwal Verma Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data paper
Francisca Vasconcelos, Bobby He, Yee Whye Teh Uncertainty Quantification in End-to-End Implicit Neural Representations for Medical Imaging paper
Mariia Drozdova, Vitaliy Kinakh, Guillaume Quetant, Tobias Golling, Slava Voloshynovskiy Generation of data on discontinuous manifolds via continuous stochastic non-invertible networks paper
Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W Dusenberry, Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Zhe Liu, Zelda E Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Yeming Wen, Florian Wenzel, Kevin Patrick Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning paper
Laha Ale, Scott King, Ning Zhang Deep Bayesian Learning for Car Hacking Detection paper
Hui Jin, Pradeep Kr. Banerjee, Guido Montufar Power-law asymptotics of the generalization error for GP regression under power-law priors and targets paper
Masanori Koyama, Kentaro Minami, Takeru Miyato, Yarin Gal Contrastive Representation Learning with Trainable Augmentation Channel paper
Antonios Alexos, Alex James Boyd, Stephan Mandt Structured Stochastic Gradient MCMC: a hybrid VI and MCMC approach paper
Michal Lisicki, Arash Afkanpour, Graham W. Taylor An Empirical Study of Neural Kernel Bandits paper
Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty paper
Aleksei Tiulpin, Matthew B. Blaschko Greedy Bayesian Posterior Approximation with Deep Ensembles paper
Richard Kurle, Tim Januschowski, Jan Gasthaus, Bernie Wang On Symmetries in Variational Bayesian Neural Nets paper
Ben Barrett, Alexander Camuto, Matthew Willetts, Tom Rainforth Certifiably Robust Variational Autoencoders paper
Laya Rafiee, Thomas Fevens Contrastive Generative Adversarial Network for Anomaly Detection paper
Dominik Schnaus, Jongseok Lee, Rudolph Triebel Kronecker-Factored Optimal Curvature paper
Runa Eschenhagen, Erik Daxberger, Philipp Hennig, Agustinus Kristiadi Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning paper
Matias Valdenegro-Toro Exploring the Limits of Epistemic Uncertainty Quantification in Low-Shot Settings paper
Ming Gui, Ziqing Zhao, Tianming Qiu, Hao Shen Laplace Approximation with Diagonalized Hessian for Over-parameterized Neural Networks paper
Thomas M. Sutter, Julia E Vogt Multimodal Relational VAE paper
Maria Perez-Ortiz, Omar Rivasplata, Emilio Parrado-Hernández, Benjamin Guedj, John Shawe-Taylor Progress in Self-Certified Neural Networks paper
Samuel Klein, John Andrew Raine, Tobias Golling, Slava Voloshynovskiy, Sebastian Pina-Otey Funnels: Exact maximum likelihood with dimensionality reduction paper
Melanie Rey, Andriy Mnih Gaussian dropout as an information bottleneck layer paper
Haiwen Huang, Joost van Amersfoort, Yarin Gal Decomposing Representations for Deterministic Uncertainty Estimation paper
Lei Zhao Precision Agriculture Based on Bayesian Neural Network paper
Matthew Willetts, Xenia Miscouridou, Stephen J. Roberts, Christopher C. Holmes Relaxed-Responsibility Hierarchical Discrete VAEs paper
Mariia Vladimirova, Julyan Arbel, Stephane Girard Dependence between Bayesian neural network units paper
Yehao Liu, Matteo Pagliardini, Tatjana Chavdarova, Sebastian U Stich The Peril of Popular Deep Learning Uncertainty Estimation Methods paper
Chelsea Murray, James Urquhart Allingham, Javier Antoran, José Miguel Hernández-Lobato Depth Uncertainty Networks for Active Learning paper
Jannik Wolff, Tassilo Klein, Moin Nabi, Rahul G Krishnan, Shinichi Nakajima Mixture-of-experts VAEs can disregard unimodal variation in surjective multimodal data paper
Albert Qiaochu Jiang, Clare Lyle, Lisa Schut, Yarin Gal Can Network Flatness Explain the Training Speed-Generalisation Connection? paper
Aaqib Parvez Mohammed, Matias Valdenegro-Toro Benchmark for Out-of-Distribution Detection in Deep Reinforcement Learning paper
Stefano Bonasera, Giacomo Acciarini, Jorge Pérez-Hernández, Bernard Benson, Edward Brown, Eric Sutton, Moriba Jah, Christopher Bridges, Atilim Gunes Baydin Dropout and Ensemble Networks for Thermospheric Density Uncertainty Estimation paper
Johanna Rock, Tiago Azevedo, René de Jong, Daniel Ruiz-Muñoz, Partha Maji On Efficient Uncertainty Estimation for Resource-Constrained Mobile Applications paper
Isaiah Brand, Michael Noseworthy, Sebastian Castro, Nicholas Roy Object-Factored Models with Partially Observable State paper
Jiaming Song, Stefano Ermon Likelihood-free Density Ratio Acquisition Functions are not Equivalent to Expected Improvements paper
Gianluigi Silvestri, Emily Fertig, Dave Moore, Luca Ambrogioni Model-embedding flows: Combining the inductive biases of model-free deep learning and explicit probabilistic modeling paper
Dae Heun Koh, Aashwin Mishra, Kazuhiro Terao Evaluating Deep Learning Uncertainty Quantification Methods for Neutrino Physics Applications paper
Hector Javier Hortua Constraining cosmological parameters from N-body simulations with Bayesian Neural Networks paper
Laixi Shi, Peide Huang, Rui Chen Latent Goal Allocation for Multi-Agent Goal-Conditioned Self-Supervised Learning paper
Lipi Gupta, Aashwin Ananda Mishra, Auralee Edelen Reliable Uncertainty Quantification of Deep Learning Models for a Free Electron Laser Scientific Facility paper
Roman Novak, Jascha Sohl-Dickstein, Samuel Stern Schoenholz Fast Finite Width Neural Tangent Kernel paper
Jimmy T.H. Smith, Dieterich Lawson, Scott Linderman Bayesian Inference in Augmented Bow Tie Networks paper
Thang D Bui Biases in variational Bayesian neural networks paper
Lee Zamparo, Marc-Etienne Brunet, Thomas George, Sepideh Kharaghani, Gintare Karolina Dziugaite The Dynamics of Functional Diversity throughout Neural Network Training paper
Kushal Chauhan, Pradeep Shenoy, Manish Gupta, Devarajan Sridharan Robust outlier detection by de-biasing VAE likelihoods paper
Yixiu Zhao, Scott Linderman Revisiting the Structured Variational Autoencoder paper
Max-Heinrich Laves, Malte Tölle, Alexander Schlaefer, Sandy Engelhardt Posterior Temperature Optimization in Variational Inference for Inverse Problems paper
Jongha Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee Adversarial Learning of a Variational Generative Model with Succinct Bottleneck Representation paper
Soufiane Hayou, Bobby He, Gintare Karolina Dziugaite Stochastic Pruning: Fine-Tuning, and PAC-Bayes bound optimization paper
Natalia Evgenievna Khanzhina, Alexey Lapenok, Andrey Filchenkov Towards Robust Object Detection: Bayesian RetinaNet for Homoscedastic Aleatoric Uncertainty Modeling paper
Michael John Hutchinson, Matthias Reisser, Christos Louizos Federated Functional Variational Inference paper
Au Khai Xiang, Alexandre H. Thiery Reflected Hamiltonian Monte Carlo paper
Mayank Kumar Nagda, Charu James, Sophie Burkhardt, Marius Kloft Hierarchical Topic Evaluation: Statistical vs. Neural Models paper
Sepideh Saran, Mahsa Ghanbari, Uwe Ohler An Empirical Analysis of Uncertainty Estimation in Genomics Applications paper
Alexandre Almin, Anh Ngoc Phuong Duong, Léo Lemarié, Ravi Kiran Reducing redundancy in Semantic-KITTI: Study on data augmentations within Active Learning paper
Sankalp Gilda, Neel Bhandari, Wendy Mak, Andrea Panizza Regularizations Are All You Need: Weather Prediction Under Distributional Shift paper
Yashvir Singh Grewal, Thang D Bui Diversity is All You Need to Improve Bayesian Model Averaging paper

Abstract

To deploy deep learning in the wild responsibly, we must know when models are making unsubstantiated guesses. The field of Bayesian Deep Learning (BDL) has been a focal point in the ML community for the development of such tools. Big strides have been made in BDL in recent years, and the field has made an impact outside of the ML community, in areas including astronomy, medical imaging, the physical sciences, and many others. But the field of BDL itself is facing an evaluation crisis: most BDL papers evaluate the uncertainty estimation quality of new methods on MNIST and CIFAR alone, ignoring the needs of the real-world applications that use BDL. Therefore, apart from discussing the latest advances in BDL methodologies, a particular focus of this year’s programme is on the reliability of BDL techniques in downstream tasks. This focus is reflected through invited talks from practitioners in other fields and by working together with the two NeurIPS challenges in BDL — the Approximate Inference in Bayesian Deep Learning Challenge and the Shifts Challenge on Robustness and Uncertainty under Real-World Distributional Shift — advertising work done in applications including autonomous driving, medicine, space, and more. We hope that the mainstream BDL community will adopt real-world benchmarks based on such applications, pushing the field forward beyond MNIST and CIFAR evaluations.

Previous workshops:

  • Our 2020 meetup page is available here;
  • Our 2019 workshop page is available here;
  • Our 2018 workshop page is available here;
  • Our 2017 workshop page is available here;
  • Our 2016 workshop page is available here; videos from the 2016 workshop are available online as well.

Call for papers

This year we will have multiple tracks, offering a self-critical, reflective, or otherwise meta-level assessment of the state of BDL: reliability of BDL techniques in downstream tasks & metrics for uncertainty in real-world applications; non-conventional & position papers; negative results & purely experimental papers; and general submissions.

We invite researchers to submit work for the tracks above in any of the areas below. We additionally invite participants of the Approximate Inference and Shifts challenges to submit work on their observations, intermediate results and improved assessment metrics.

We solicit extended abstract submissions, as well as poster-only submissions. All accepted extended abstracts will also be invited to present a poster at the poster session, and select extended abstracts will be invited to contribute a talk. Posters will be presented at the socials, offering a platform for open discussion.

An extended abstract submission should take the form of a 3-page paper in PDF format using the NeurIPS style file. Author names do not need to be anonymised, and conflicts of interest in assessing submitted contributions will be based on authors' institutions (reviewers will not be involved in the assessment of a submission by authors within the same institution). References may extend as far as needed beyond the 3-page upper limit. Submissions may also extend beyond the 3-page upper limit, but reviewers are not expected to read beyond the first 3 pages. If the research has previously appeared in a journal, workshop, or conference (including the NeurIPS 2021 conference), the workshop submission should extend that previous work. Dual submissions to ICLR 2022, AAAI 2022, and AISTATS 2022 are permitted.

A poster-only submission should take the form of a poster in PDF format (a 1-page PDF of maximum size 5MB in landscape orientation). Attendees will only have regular computer screens to view it in its entirety, so please do not over-crowd your poster. The title should be at the top of the poster in a large font, as this is what will be shown to attendees as they approach your poster. Author names do not need to be anonymised during submission. A lightweight editorial review will be carried out, and only posters of no relevance to the community will be rejected. For poster-only submissions, you are welcome to submit research that has previously appeared in a journal, workshop, or conference (including the NeurIPS 2021 conference and AABI), as the aim of the poster presentation is to provide a platform for discussion and to advertise your work to your colleagues.

Extended abstracts should be submitted by Oct 8, 2021, AoE; submission page is here. Final versions will be posted on the workshop website (and are archival but do not constitute a proceedings). Notification of acceptance will be made before Oct 30, 2021, AoE. Posters should be submitted by Dec 1, 2021, AoE (please submit papers through your account at the NeurIPS website).

Key Dates:

  • Extended abstract submission deadline: Oct 8, 2021, AoE (submission page is here)
  • Acceptance notification: before Oct 30, 2021, AoE
  • Poster submission deadline: Dec 1, 2021, AoE (please submit papers through your account at the NeurIPS website)
  • Workshop: Tuesday, December 14, 2021

Please make sure to register for the NeurIPS workshops to participate in the event.

Topics

  • Uncertainty in deep learning,
  • Applications of Bayesian deep learning,
  • Reliability of BDL techniques in downstream tasks,
  • Probabilistic deep models (such as extensions and applications of Bayesian neural networks),
  • Deep probabilistic models (such as hierarchical Bayesian models and their applications),
  • Generative deep models (such as variational autoencoders),
  • Information theory in deep learning,
  • Deep ensemble uncertainty,
  • NTK and Bayesian modelling,
  • Connections between NNs and GPs,
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularisation with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortised inference),
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimisation for experimental design,
  • Kernel methods in Bayesian deep learning,
  • Implicit inference,
  • Applying non-parametric methods, one-shot learning, and Bayesian deep learning in general.