Schedule

The start and end times are 11am -- 6pm GMT / 12pm -- 7pm CET / 6am -- 1pm EST / 3am -- 10am PST / 8pm -- 3am JST. Our friends in the Americas are welcome to join the later sessions, and our friends in eastern time zones are welcome to join the earlier sessions.

The schedule interleaves main conference events with talks from our invited speakers, as well as Gather.Town poster presentations to allow for networking and socialising.

Time (GMT) | Time (CET) | Event
11.00 - 11.05 | 12.00 - 12.05 | Welcome and Opening Remarks
11.05 - 11.25 | 12.05 - 12.25 | Invited talk: Mark van der Wilk (Imperial College London), Bayesian Model Selection in Deep Learning
11.30 - 11.50 | 12.30 - 12.50 | Invited talk: Mihaela van der Schaar (University of Cambridge), Bayesian Uncertainty Estimation under Covariate Shift: Application to Cross-population Clinical Prognosis
11.55 - 13.00 | 12.55 - 14.00 | Social + Posters
13.00 - 14.40 | 14.00 - 15.40 | Lunch break (NeurIPS Breiman Lecture: Causal Learning)
14.40 - 15.00 | 15.40 - 16.00 | Invited talk: Daniela Rus (MIT CSAIL), Uncertainty in Transportation
15.05 - 15.25 | 16.05 - 16.25 | Invited talk: David Duvenaud (University of Toronto), Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations
15.30 - 15.50 | 16.30 - 16.50 | Invited talk: Tal Arbel (MILA), Modelling and Propagating Uncertainties in Machine Learning for Medical Images of Patients with Neurological Diseases
15.55 - 16.15 | 16.55 - 17.15 | Invited talk: Zack Chase Lipton (CMU), What are we so uncertain about? Broadening the scope of epistemic uncertainty in the application of machine learning
16.20 - 16.40 | 17.20 - 17.40 | Invited talk: Durk Kingma (Google), On Diffusion-Based Generative Models
16.45 - 16.50 | 17.45 - 17.50 | Closing remarks
16.50 - 18.00 | 17.50 - 19.00 | Social + Posters
18.00 - 19.00 | 19.00 - 20.00 | NeurIPS posters

Accepted Posters

Posters and socials take place in Gather.Town; please see the instructions below. The password to access the space will be shared with registered attendees when the event starts. Posters will be uploaded to this website after the event.

Title | Authors | Poster Location
Uncertainty via Stochastic Gradient Langevin Boosting: Bayesian Gradient Boosted Decision Trees | Andrey Malinin, Liudmila Prokhorenkova, Alexei Ustimenko | A1
Know Where to Drop Your Weights: Towards Faster Uncertainty Estimation | Akshatha Kamath, Dwaraknath Ganeshwar, Matias Valdenegro-Toro | A10
Last Layer Marginal Likelihood for Invariance Learning | Pola Schwobel, Martin Jorgensen, Mark van der Wilk | A11
One Versus All for Deep Neural Network Incertitude (OVNNI) Quantification | Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, Isabelle Bloch | A12
Encoding the Latent Posterior of Bayesian Neural Networks for Uncertainty Quantification | Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, Isabelle Bloch | A13
Identifying Causal-effect Inference Failure Using Uncertainty-aware Models | Andrew Jesson, Soren Mindermann, Uri Shalit, Yarin Gal | A14
End-to-End Semi-Supervised Learning for Differentiable Particle Filters | Hao Wen, Xiongjie Chen, Georgios Papagiannis, Conghui Hu, Yunpeng Li | A15
Neural Empirical Bayes: Source Distribution Estimation and its Applications to Simulation-Based Inference | Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe | A16
Uncertainty in Structured Prediction: Pushing the Scale Limits of Uncertainty | Andrey Malinin, Mark Gales | A2
TyXe: Pyro-Based Bayesian Neural Networks for Pytorch Users in 5 Lines of Code | Hippolyt Ritter, Theofanis Karaletsos | A3
Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference | Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antoran, Jose Miguel Hernandez-Lobato | A4
Sparse Encoding for More-Interpretable Feature-Selecting Representations in Probabilistic (Poisson) Matrix Factorization | Joshua C. Chang, Patrick Fletcher, Jungmin Han, Ted L. Chang, Shashaank Vattikuti, Bart Desmet, Ayah Zirikly, Carson C. Chow | A5
On Signal-to-noise Ratio Issues in Variational Inference for Deep Gaussian Processes | Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth | A6
Rethinking Function-Space Variational Inference in Bayesian Neural Networks | Tim G. J. Rudner, Zonghao Chen, Yarin Gal | A7
Outcome-Driven Reinforcement Learning via Variational Inference | Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine | A8
A Probabilistic Perspective on Pathologies in Behavioural Cloning for Reinforcement Learning | Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal | A9
Self Normalizing Flows | T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forre, Max Welling | B1
Fixing Asymptotic Uncertainty of BNNs with Infinite ReLU Features | Agustinus Kristiadi, Matthias Hein, Philipp Hennig | B10
Deep Kernel Processes | Laurence Aitchison, Sebastian Ober, Adam X. Yang | B11
Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations | Sebastian Farquhar, Lewis Smith, Yarin Gal | B12
Augmented Sliced Wasserstein Distances | Xiongjie Chen, Yongxin Yang, Yunpeng Li | B15
Bayesian Active Learning with Pretrained Language Models | Katerina Margatina, Loic Barrault, Nikos Aletras | B16
ThompsonBALD: Bayesian Batch Active Learning for Deep Learning via Thompson Sampling | Jaeik Jeon, Brooks Paige | B2
Learning under Model Misspecification: Applications to Variational and Ensemble Methods | Andres R. Masegosa | B7
Global Canopy Height Regression from Space-borne LiDAR | Nico Lang, Nikolai Kalischek, John Armston, Konrad Schindler, Ralph Dubayah, Jan Dirk Wegner | B8
Sparse Uncertainty Representation in Deep Learning with Inducing Weights | Hippolyt Ritter, Martin Kukla, Cheng Zhang, Yingzhen Li | B9
Designing Priors for Bayesian Neural Networks | Tim Pearce, Russell Tsuchida, Alexandra Brintrup, Mohamed Zaki, Andy Neely, Andrew Y. K. Foong | C1
Deep Evidential Regression | Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus | C16
Evidential Deep Learning for Guided Molecular Property Prediction and Discovery | Ava P. Soleimany, Alexander Amini, Samuel Goldman, Daniela Rus, Sangeeta N. Bhatia, Connor W. Coley | D1
Depth Uncertainty in Neural Networks | Javier Antoran, James Urquhart Allingham, Jose Miguel Hernandez-Lobato | D10
Decentralized Langevin Dynamics for Bayesian Learning | Anjaly Parayil, He Bai, Jemin George, Prudhvi Gurram | D11
i-DenseNets | Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai | D12
BayesFlow: Scalable Amortized Bayesian Inference with Invertible Networks | Stefan T. Radev, Ullrich Kothe | D16
Wavelet Flow: Fast Training of High Resolution Normalizing Flows | Jason J. Yu, Konstantinos G. Derpanis, Marcus A. Brubaker | D2
General Invertible Transformations for Flow-based Generative Models | Jakub M. Tomczak | D3
SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows | Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling | D4
A Bayesian Perspective on Training Speed and Model Selection | Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk | D6
The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks | Takuo Matsubara, Chris Oates, Francois-Xavier Briol | D7
Bayesian Neural Network Priors Revisited | Vincent Fortuin, Adria Garriga-Alonso, Florian Wenzel, Gunnar Ratsch, Richard Turner, Mark van der Wilk, Laurence Aitchison | D8
Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise | Jannik Schmitt, Stefan Roth | D9
CLUE: A Method for Explaining Uncertainty Estimates | Javier Antoran, Umang Bhatt, Tameem Adel, Adrian Weller, Jose Miguel Hernandez-Lobato | E1
Temporal-hierarchical VAE for Heterogeneous and Missing Data Handling | Daniel Barrejon-Moreno, Pablo M. Olmos, Antonio Artes-Rodriguez | E10
Efficient Low Rank Gaussian Variational Inference for Neural Networks | Marcin B. Tomczak, Siddharth Swaroop, Richard E. Turner | E11
Ensemble Distribution Distillation: Ensemble Uncertainty via a Single Model | Andrey Malinin, Sergey Chervontsev, Ivan Provilkov, Bruno Mlodozeniec, Mark Gales | E12
Towards a Unified Framework for Bayesian Neural Networks in PyTorch | Audrey Flower, Beliz Gokkaya, Sahar Karimi, Jessica Ai, Ousmane Dia, Ehsan Emamjomeh-Zadeh, Ilknur Kaynar Kabul, Erik Meijer, Adly Templeton | E13
Feature Space Singularity for Out-of-Distribution Detection | Haiwen Huang, Zhihan Li, Lulu Wang, Sishuo Chen, Bin Dong, Xinyu Zhou | E14
Hierarchical Gaussian Processes with Wasserstein-2 Kernels | Sebastian G. Popescu, David J. Sharp, James H. Cole, Ben Glocker | E15
Sample-efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining | Austin Tripp, Erik Daxberger, Jose Miguel Hernandez-Lobato | E16
A Comparative Evaluation of Methods for Epistemic Uncertainty Estimation | Lisha Chen, Hanjing Wang, Shiyu Chang, Hui Su, Qiang Ji | E2
Estimating Model Uncertainty of Neural Networks in Sparse Information Form | Jongseok Lee, Matthias Humt, Jianxing Feng, Rudolph Triebel | E3
Simple & Principled Uncertainty Estimation with Single Deep Model via Distance Awareness | Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, Balaji Lakshminarayanan | E4
Global Inducing Point Variational Posteriors for Bayesian Neural Networks and Deep Gaussian Processes | Sebastian W. Ober, Laurence Aitchison | E5
Unpacking Information Bottlenecks | Andreas Kirsch, Clare Lyle, Yarin Gal | E6
Revisiting the Train Loss: An Efficient Performance Estimator for Neural Architecture Search | Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal | E7
Using hamiltorch to Perform HMC over BNNs with Symmetric Splitting | Adam D. Cobb, Brian Jalaian | E8
Cross-Pollinated Deep Ensembles | Alexander Lyzhov, Daria Voronkova, Dmitry Vetrov | E9
Multi-headed Bayesian U-Net | Moritz Fuchs, Simon Kiefhaber, Hendrik Mehrtens, Faraz Zaidi, Camila Gonzalez, Arjan Kuijper, Anirban Mukhopadhyay | F1
Bayesian Active Learning for Wearable and Mobile Health | Gautham Krishna Gudur, Abhijith Ragav, Prahalathan Sundaramoorthy, Venkatesh Umaashankar | F10
Hierarchical Gaussian Process Priors for Bayesian Neural Networks | Theofanis Karaletsos, Thang D. Bui | F11
Bayesian Neural Networks for Acoustic Mosquito Detection | Ivan Kiskin, Adam D. Cobb, Steve Roberts | F12
The Hidden Uncertainty in a Neural Network's Activations | Janis Postels, Hermann Blum, Cesar Cadena, Roland Siegwart, Luc van Gool, Federico Tombari | F13
Mixed-curvature Conditional Prior VAE | Maciej Falkiewicz | F14
Bayesian BERT for Trustful Hate Speech Detection | Kristian Miok, Blaz Skrlj, Daniela Zaharie, Marko Robnik-Sikonja | F15
Uncertainty Quantification for Spectral Virtual Diagnostic | Owen Convery, Lewis Smith, Yarin Gal, Adi Hanuka | F16
Bayesian Multi-task Learning: Fully Differentiable Model Discovery | Gert-Jan Booth | F2
Towards Principled Prior Assumption in Deep Learning | Lassi Meronen, Martin Trapp, Arno Solin | F3
Perfect Density Models Cannot Guarantee Anomaly Detection | Charline Le Lan, Laurent Dinh | F4
Semi-supervised Learning of Galaxy Morphology Using Equivariant Transformer Variational Autoencoders | Mizu Nishikawa-Toomey, Lewis Smith, Yarin Gal | F5
Bayesian Deep Ensembles via the Neural Tangent Kernel | Bobby He, Balaji Lakshminarayanan, Yee Whye Teh | F6
Robustness of Bayesian Neural Networks to Gradient-Based Attacks | Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti | F7
DrugEx2: Drug Molecule De Novo Design by Multi-Objective Reinforcement Learning for Polypharmacology | X. Liu, K. Ye, H.W.T. van Vlijmen, M.T.M. Emmerich, A.P. IJzerman, G.J.P. van Westen | F8
Why Aren't Bootstrapped Neural Networks Better? | Jeremy Nixon, Dustin Tran, Balaji Lakshminarayanan | F9

Call for Participation and Poster Presentations

This year the BDL workshop will take a new form: it will be organised as a NeurIPS European event together with the ELLIS programme on Robustness in ML. The event will be virtual, taking place in Gather.Town (a link will be provided to registered participants), with a schedule and socials that accommodate European timezones, though participants from around the world are welcome to join. No paid registration is required for the NeurIPS Europe meetup on Bayesian Deep Learning, and the event will be open to all. If you wish to attend the talks and participate in Gather.Town, please sign up here: Registration.

Update [28/11]: We have received some questions about the submission process:

  • The socials are intended as a platform for advertising your work to your colleagues
  • So, unlike previous years, this year we also encourage work that was submitted or presented elsewhere
  • The submission process is easy and only requires a poster (i.e. no need to prepare a full paper)
  • The poster deadline has been extended to 6/12


We invite researchers to submit posters for presentation during the socials. Unlike previous years, this year you are welcome to submit research that has previously appeared in a journal, workshop, or conference (including the NeurIPS 2020 conference and AABI), as the poster presentations are intended to be a platform for discussion and for advertising your work to your colleagues.

Submitted posters can be in any of the following areas:

  • Uncertainty in deep learning,
  • applications of Bayesian deep learning,
  • probabilistic deep models (such as extensions and application of Bayesian neural networks),
  • deep probabilistic models (such as hierarchical Bayesian models and their applications),
  • deep generative models (such as variational autoencoders),
  • alternative approaches for uncertainty in deep learning (including deep ensembles and ad hoc tools),
  • analysis to understand tools in Bayesian deep learning,
  • practical approximate inference techniques in Bayesian deep learning,
  • connections between deep learning and Gaussian processes,
  • or any of the topics below.

A submission should take the form of a poster in PDF format (a 1-page PDF of at most 5MB, in landscape orientation). Attendees will only have regular computer screens on which to view it in its entirety, so please do not overcrowd your poster. The title should be at the top of the poster in a large font, as this is what will be shown to attendees as they approach your poster; see the screenshot here. Author names do not need to be anonymised during submission. A lightweight editorial review will be carried out, and only posters of no relevance to the community will be rejected.

Posters should be submitted by Sunday, December 6, 2020 (extended from the original deadline of December 1, 2020); please email poster submissions to bayesiandeeplearning2020@gmail.com. Posters will be posted on this website (they are archival but do not constitute a proceedings). Notification of acceptance will be made within a few days of the deadline.

Key Dates:

  • Poster submission deadline: Sunday, December 6, 2020 (23:59 AoE), extended from the original December 1 deadline (submit by email to bayesiandeeplearning2020@gmail.com)
  • Acceptance notification: within a few days
  • Workshop presentations and talks: Thursday, 10 December, 2020

Abstract

While deep learning has been revolutionary for machine learning, most modern deep learning models cannot represent their uncertainty, nor can they take advantage of the well-studied tools of probability theory. This has started to change following recent developments of tools and techniques combining Bayesian approaches with deep learning. The intersection of the two fields has received great interest from the community over the past few years, with the introduction of new deep learning models that take advantage of Bayesian techniques, as well as Bayesian models that incorporate deep learning elements. In fact, the use of Bayesian techniques in deep learning can be traced back to the 1990s, in seminal works by Radford Neal, David MacKay, and Dayan et al. These gave us tools to reason about deep models' confidence, and achieved state-of-the-art performance on many tasks. However, earlier tools did not adapt when new needs arose (such as scalability to big data), and were consequently forgotten. Such ideas are now being revisited in light of new advances in the field, yielding many exciting new results.
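To make the idea of representing a deep model's uncertainty concrete, below is a minimal sketch (our own illustration, not part of any workshop submission) of Monte Carlo dropout, one common approximate-inference technique in this space: dropout is kept active at prediction time, and several stochastic forward passes are averaged to obtain a predictive mean together with a spread that serves as an uncertainty estimate. The toy architecture and sample count are arbitrary illustrative choices.

    import torch
    import torch.nn as nn

    # Toy regression network with dropout; untrained, for illustration only.
    model = nn.Sequential(
        nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
        nn.Linear(64, 1),
    )

    def mc_dropout_predict(model, x, n_samples=100):
        model.train()  # keep dropout stochastic at prediction time
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        # Predictive mean and per-point spread across the stochastic passes.
        return samples.mean(dim=0), samples.std(dim=0)

    x_test = torch.linspace(-3, 3, 50).unsqueeze(-1)
    mean, std = mc_dropout_predict(model, x_test)  # std acts as an uncertainty proxy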

Previous workshops:

  • Our 2019 workshop page is available here;
  • Our 2018 workshop page is available here;
  • Our 2017 workshop page is available here;
  • Our 2016 workshop page is available here; videos from the 2016 workshop are available online as well.

Topics

  • Uncertainty in deep learning,
  • Applications of Bayesian deep learning,
  • Probabilistic deep models (such as extensions and application of Bayesian neural networks),
  • Deep probabilistic models (such as hierarchical Bayesian models and their applications),
  • Generative deep models (such as variational autoencoders),
  • Information theory in deep learning,
  • Deep ensemble uncertainty,
  • NTK and Bayesian modelling,
  • Connections between NNs and GPs,
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularisation with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortised inference),
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimisation for experimental design,
  • Kernel methods in Bayesian deep learning,
  • Implicit inference,
  • Applying non-parametric methods, one-shot learning, and Bayesian deep learning in general.