To deploy deep learning in the wild responsibly, we must know when models are making unsubstantiated guesses. The field of Bayesian Deep Learning (BDL) has been a focal point in the ML community for the development of such tools. Big strides have been made in BDL in recent years, and the field has made an impact outside the ML community, in areas including astronomy, medical imaging, the physical sciences, and many others. But BDL itself is facing an evaluation crisis: most BDL papers evaluate the uncertainty-estimation quality of new methods on MNIST and CIFAR alone, ignoring the needs of the real-world applications that use BDL. Therefore, apart from discussing the latest advances in BDL methodology, a particular focus of this year's programme is the reliability of BDL techniques in downstream tasks. This focus is reflected in invited talks from practitioners in other fields, and in our collaboration with the two NeurIPS challenges in BDL (the Approximate Inference in Bayesian Deep Learning Challenge and the Shifts Challenge on Robustness and Uncertainty under Real-World Distributional Shift), which advertise work done in applications including autonomous driving, medicine, space, and more. We hope that the mainstream BDL community will adopt real-world benchmarks based on such applications, pushing the field beyond MNIST and CIFAR evaluations.

Previous workshops:

  • Our 2020 meetup page is available here;
  • Our 2019 workshop page is available here;
  • Our 2018 workshop page is available here;
  • Our 2017 workshop page is available here;
  • Our 2016 workshop page is available here; videos from the 2016 workshop are available online as well.

Call for papers

This year we will have multiple tracks, offering a self-critical, reflective, or otherwise meta-assessment of the state of BDL: reliability of BDL techniques in downstream tasks & metrics for uncertainty in real-world applications; non-conventional & position papers; negative results & purely experimental papers; and general submissions.

We invite researchers to submit work for the tracks above in any of the areas below. We additionally invite participants of the Approximate Inference and Shifts challenges to submit work on their observations, intermediate results, and improved assessment metrics.

We solicit extended abstract submissions as well as poster-only submissions. All accepted extended abstracts will also be invited to present a poster at the poster session, and selected extended abstracts will be invited to contribute a talk. Posters will be presented at the socials, offering a platform for open discussion.

An extended abstract submission should take the form of a 3-page paper in PDF format using the NeurIPS style file. Author names do not need to be anonymised, and conflicts of interest in assessing submitted contributions will be determined by the authors' institutions (reviewers will not assess submissions by authors from their own institution). References may extend as far as needed beyond the 3-page limit. The main text may also extend beyond the 3-page limit, but reviewers are not expected to read beyond the first 3 pages. If the research has previously appeared in a journal, workshop, or conference (including the NeurIPS 2021 conference), the workshop submission should extend that previous work. Dual submissions to ICLR 2022, AAAI 2022, and AISTATS 2022 are permitted.

A poster-only submission should take the form of a 1-page PDF poster (maximum size 5MB, landscape orientation). Attendees will only have regular computer screens to view it, so please do not overcrowd your poster. Put the title at the top of the poster in a large font, as this is what attendees will see as they approach it. Author names do not need to be anonymised during submission. A lightweight editorial review will be carried out, and only posters of no relevance to the community will be rejected. For poster-only submissions, you are welcome to submit research that has previously appeared in a journal, workshop, or conference (including the NeurIPS 2021 conference and AABI), as the aim of the poster session is to provide a platform for discussion and to advertise your work to your colleagues.

Extended abstracts should be submitted by Oct 8, 2021, AoE; the submission page is here. Final versions will be posted on the workshop website (they will be archived there but do not constitute a formal proceedings). Notification of acceptance will be sent before Oct 30, 2021, AoE. Posters should be submitted by Dec 1, 2021, AoE; the submission page will be provided closer to the date.

Key Dates:

  • Extended abstract submission deadline: Oct 8, 2021, AoE (submission page is here)
  • Acceptance notification: before Oct 30, 2021, AoE
  • Poster submission deadline: Dec 1, 2021, AoE (submission page will be provided closer to the date)
  • Workshop: Tuesday, December 14, 2021

Please make sure to register for the NeurIPS workshops to participate in the event.


Topics:

  • Uncertainty in deep learning,
  • Applications of Bayesian deep learning,
  • Reliability of BDL techniques in downstream tasks,
  • Probabilistic deep models (such as extensions and application of Bayesian neural networks),
  • Deep probabilistic models (such as hierarchical Bayesian models and their applications),
  • Generative deep models (such as variational autoencoders),
  • Information theory in deep learning,
  • Deep ensemble uncertainty,
  • NTK and Bayesian modelling,
  • Connections between NNs and GPs,
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularisation with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortised inference),
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimisation for experimental design,
  • Kernel methods in Bayesian deep learning,
  • Implicit inference,
  • Applying non-parametric methods, one-shot learning, and Bayesian deep learning in general.