Call for papers

We invite researchers to submit work in any of the following areas:

  • applications of Bayesian deep learning,
  • deep generative models,
  • variational inference using neural network recognition models,
  • practical approximate inference techniques in Bayesian neural networks,
  • applications of Bayesian neural networks,
  • information theory in deep learning,
  • or any of the topics below.

A submission should take the form of an extended abstract (3 pages long) in PDF format using the NIPS style. Author names do not need to be anonymised, and references may extend as far as needed beyond the 3-page upper limit. Submissions may also extend beyond the 3-page upper limit, but reviewers are not expected to read beyond the first 3 pages. If the research has previously appeared in a journal, workshop, or conference (including the NIPS 2018 conference), the workshop submission should extend that previous work. Parallel submissions (such as to ICLR) are permitted.

Submissions will be accepted as contributed talks or poster presentations. Extended abstracts should be submitted by Friday 2 November 2018; the submission page is here. Final versions will be posted on the workshop website (they are archival but do not constitute proceedings).

Key Dates:

  • Extended abstract submission deadline: Friday 2 November 2018 (midnight AOE) (submission page is here)
  • Acceptance notification: 16 November 2018
  • Camera ready submission: 30 November 2018
  • Workshop: 7 December 2018

We will do our best to guarantee workshop registration for all accepted workshop submissions; we have multiple workshop tickets reserved for accepted submissions. In addition, several complimentary workshop registrations will be awarded to authors of accepted workshop abstracts (see the Awards section below).

Abstract

While deep learning has been revolutionary for machine learning, most modern deep learning models cannot represent their uncertainty, nor take advantage of the well-studied tools of probability theory. This has started to change following recent developments of tools and techniques combining Bayesian approaches with deep learning. The intersection of the two fields has received great interest from the community over the past few years, with the introduction of new deep learning models that take advantage of Bayesian techniques, as well as Bayesian models that incorporate deep learning elements [1-11]. In fact, the use of Bayesian techniques in deep learning can be traced back to the 1990s, in seminal works by Radford Neal [12], David MacKay [13], and Dayan et al. [14]. These gave us tools to reason about deep models' confidence, and achieved state-of-the-art performance on many tasks. However, earlier tools did not adapt when new needs arose (such as scalability to big data), and were consequently forgotten. Such ideas are now being revisited in light of new advances in the field, yielding many exciting new results.

Building on the success of last year's workshop, this workshop will again examine the advantages and disadvantages of such ideas, and will be a platform to host the recent flourishing of ideas using Bayesian approaches in deep learning and deep learning tools in Bayesian modelling. The program includes a mix of invited talks, contributed talks, and contributed posters, and is composed of five themes: deep generative models, variational inference using neural network recognition models, practical approximate inference techniques in Bayesian neural networks, applications of Bayesian neural networks, and information theory in deep learning. Future directions for the field will be debated in a panel discussion.

This year's main theme is applications of Bayesian deep learning, both within machine learning and beyond it.

Previous workshops:

Our 2017 workshop page is available here, and our 2016 workshop page is available here; videos from the 2016 workshop are also available online.

Topics

  • Applications of Bayesian deep learning,
  • Probabilistic deep models for classification and regression (such as extensions and application of Bayesian neural networks),
  • Generative deep models (such as variational autoencoders),
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularization with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortized inference),
  • Model uncertainty in deep learning (see the example sketch after this list),
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimization for experimental design,
  • Information theory in deep learning,
  • Kernel methods in Bayesian deep learning,
  • Implicit inference,
  • Applications of non-parametric methods, one-shot learning, and Bayesian deep learning in general.
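
As a concrete illustration of the "model uncertainty in deep learning" topic above, the sketch below shows the mechanics of Monte Carlo dropout in the spirit of Gal and Ghahramani [5]: dropout is kept active at test time, and the mean and spread of several stochastic forward passes serve as an approximate predictive distribution. The toy network, its random (untrained) weights, and all sizes are illustrative assumptions, not taken from any workshop submission.

    # Minimal MC dropout sketch (in the spirit of [5]); the network and its
    # weights are untrained stand-ins, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy one-hidden-layer regression network with random stand-in weights.
    W1 = rng.normal(size=(1, 50))
    b1 = np.zeros(50)
    W2 = rng.normal(size=(50, 1))
    b2 = np.zeros(1)

    def forward(x, p_drop=0.5):
        # One stochastic forward pass: dropout stays ON at test time.
        h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop      # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
        return h @ W2 + b2

    x = np.linspace(-3, 3, 100)[:, None]
    samples = np.stack([forward(x) for _ in range(200)])  # T stochastic passes

    mean = samples.mean(axis=0)   # approximate predictive mean
    std = samples.std(axis=0)     # approximate predictive uncertainty

With a trained network, the same stochastic passes yield usable predictive means and uncertainty estimates; here the weights are random, so only the mechanics are shown.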

Awards

Complimentary workshop registration

Several NIPS 2018 complimentary workshop registrations will be awarded to authors of accepted workshop submissions. These will be announced by 16 November 2018. Award recipients will be reimbursed by NIPS for their workshop registration. Further workshop endorsements and travel awards to junior researchers will be updated on the workshop website.

Invited Speakers

This year's theme is the use of deep learning uncertainty in real-world applications, with invited speakers working on a variety of such problems (see the schedule below).

Schedule

8.00 - 8.05 Opening remarks Yarin Gal (Oxford)
8.05 - 8.25 Invited talk Frank Wood (UBC)
8.25 - 8.45 Invited talk Dmitry Vetrov (Samsung AI Centre) (Semi-)Implicit Modeling as New Deep Tool for Approximate Bayesian Inference
8.45 - 9.00 Contributed talk TBD TBD
9.00 - 9.20 Invited talk Debora Marks (Harvard Medical School)
9.20 - 9.40 Invited talk Harri Valpola (Curious AI Company) Estimating uncertainty for model-based reinforcement learning
9.40 - 9.55 Poster spotlights
9.55 - 10.55 Discussion over coffee and poster session
10.55 - 11.15 Invited talk Christian Leibig (Tuebingen) Leveraging (Bayesian) uncertainty information: opportunities and failure modes
11.15 - 11.30 Contributed talk TBD TBD
11.30 - 11.50 Invited talk Balaji Lakshminarayanan (DeepMind) Probabilistic model ensembles for predictive uncertainty estimation
11.50 - 13.20 Lunch
13.20 - 13.40 Invited talk Sergey Levine (Berkeley) The Role of Uncertainty in Reinforcement Learning and Meta-Learning
13.40 - 13.55 Contributed talk TBD TBD
13.55 - 14.10 Invited talk Yashar Hezaveh (Stanford) Mapping the most distant galaxies of the universe with Bayesian neural networks
14.10 - 14.30 Invited talk Tim Genewein (Bosch Center for AI) A Bayesian view on neural network compression
14.30 - 15.30 Discussion over coffee and poster session
15.30 - 15.50 Invited talk David Sontag (MIT)
15.50 - 16.05 Contributed talk TBD TBD
16.05 - 16.25 Invited talk Yarin Gal (Oxford) Bayesian Deep Learning in Self-Driving Cars
16.30 - 17.30 Panel session (panellists: TBC; topic: TBC)
17.30 - 19.00 Poster session

References

  1. Kingma, DP and Welling, M, "Auto-encoding variational Bayes", 2013.
  2. Rezende, D, Mohamed, S, and Wierstra, D, "Stochastic backpropagation and approximate inference in deep generative models", 2014.
  3. Blundell, C, Cornebise, J, Kavukcuoglu, K, and Wierstra, D, "Weight uncertainty in neural network", 2015.
  4. Hernandez-Lobato, JM and Adams, R, "Probabilistic backpropagation for scalable learning of Bayesian neural networks", 2015.
  5. Gal, Y and Ghahramani, Z, "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", 2015.
  6. Gal, Y and Ghahramani, Z, "Bayesian convolutional neural networks with Bernoulli approximate variational inference", 2015.
  7. Kingma, D, Salimans, T, and Welling, M, "Variational dropout and the local reparameterization trick", 2015.
  8. Balan, AK, Rathod, V, Murphy, KP, and Welling, M, "Bayesian dark knowledge", 2015.
  9. Louizos, C and Welling, M, "Structured and efficient variational deep learning with matrix Gaussian posteriors", 2016.
  10. Lawrence, ND and Quinonero-Candela, J, "Local distance preservation in the GP-LVM through back constraints", 2006.
  11. Tran, D, Ranganath, R, and Blei, DM, "The variational Gaussian process", 2015.
  12. Neal, R, "Bayesian learning for neural networks", 1996.
  13. MacKay, D, "A practical Bayesian framework for backpropagation networks", 1992.
  14. Dayan, P, Hinton, G, Neal, R, and Zemel, R, "The Helmholtz machine", 1995.
  15. Wilson, AG, Hu, Z, Salakhutdinov, R, and Xing, EP, "Deep kernel learning", 2016.
  16. Saatchi, Y and Wilson, AG, "Bayesian GAN", 2017.
  17. MacKay, D, "Bayesian methods for adaptive models", PhD thesis, 1992.