Abstract

While deep learning has been revolutionary for machine learning, most modern deep learning models cannot represent their uncertainty, nor can they take advantage of the well-studied tools of probability theory. This has started to change following recent developments of tools and techniques combining Bayesian approaches with deep learning. The intersection of the two fields has received great interest from the community over the past few years, with the introduction of new deep learning models that take advantage of Bayesian techniques, as well as Bayesian models that incorporate deep learning elements [1-11].

In fact, the use of Bayesian techniques in deep learning can be traced back to the 1990s, in seminal works by Radford Neal [12], David MacKay [13], and Dayan et al. [14]. These works gave us tools to reason about deep models' confidence, and achieved state-of-the-art performance on many tasks. However, these earlier tools did not adapt when new needs arose (such as scalability to big data), and were consequently forgotten. Such ideas are now being revisited in light of new advances in the field, yielding many exciting new results.

This workshop will study the advantages and disadvantages of such ideas, and will serve as a platform for the recent flourishing of ideas using Bayesian approaches in deep learning and deep learning tools in Bayesian modelling. The historical context of key developments in the field will be explained in an invited talk, followed by a tribute talk on David MacKay's work. Future directions for the field will be debated in a panel discussion.

Update 02/03/2017:

Our videos from the workshop are now available online.

Schedule

8.30 - 8.55 Invited talk Finale Doshi-Velez (Harvard University) BNNs for RL: A Success Story and Open Questions
8.55 - 9.10 Contributed talk Eric Jang, Shixiang Gu and Ben Poole, Categorical Reparameterization with Gumbel-Softmax; Chris J. Maddison, Andriy Mnih and Yee Whye Teh, The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
9.10 - 9.40 Keynote talk Zoubin Ghahramani (University of Cambridge) History of Bayesian neural networks [slides]
9.40 - 9.55 Poster spotlights
9.55 - 10.55 Discussion over coffee and poster session
10.55 - 11.20 Invited talk David Blei (Columbia University) Deep exponential families
11.20 - 11.35 Contributed talk Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh and Sebastian Vollmer Relativistic Monte Carlo
11.35 - 12.00 Invited talk José Miguel Hernández-Lobato (University of Cambridge) Alpha divergence minimization for Bayesian deep learning
12.00 - 13.30 Lunch
13.30 - 14.00 Plenary talk Ryan Adams (Twitter) A Tribute to David MacKay
14.00 - 14.25 Invited talk Ian Goodfellow (OpenAI) Adversarial Approaches to Bayesian Learning and Bayesian Approaches to Adversarial Robustness
14.25 - 14.40 Contributed talk Dilin Wang, Yihao Feng and Qiang Liu Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Training
14.40 - 15.35 Discussion over coffee and poster session
15.35 - 16.00 Invited talk Shakir Mohamed (DeepMind) Bayesian Agents: Bayesian Reasoning and Deep Learning in Agent-based Systems
16.00 - 17.00 Panel session: Will Bayesian deep learning be the next big thing? Or is Bayesian modelling dead?
Panelists: Max Welling, Ryan Adams, José Miguel Hernández-Lobato, Ian Goodfellow, Shakir Mohamed
Moderator: Neil Lawrence
17.00 - 19.00 Poster session

Accepted Abstracts

Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh and Sebastian Vollmer, Relativistic Monte Carlo [paper]
Ian Osband, Risk versus Uncertainty in Deep Learning: Bayes, Bootstrap and the Dangers of Dropout [paper]
Neal Jean, Michael Xie and Stefano Ermon, Semi-supervised deep kernel learning [paper]
Jakub Tomczak and Max Welling, Improving Variational Auto-Encoder using Householder Flow [paper]
Eric Jang, Shixiang Gu and Ben Poole, Categorical Reparameterization with Gumbel-Softmax [paper]
Jonas Langhabel, Jannik Wolff and Raphael Holca-Lamarre, Learning to Optimise: Using Bayesian Deep Learning for Transfer Learning in Optimisation [paper]
Jordan Burgess, James R. Lloyd, and Zoubin Ghahramani, One-Shot Learning in Discriminative Neural Networks [paper]
Leonard Hasenclever, Stefan Webb, Thibaut Lienart, Sebastian Vollmer, Balaji Lakshminarayanan, Charles Blundell and Yee Whye Teh, Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation [paper]
Kevin Chen, Anthony Gamst and Alden Walker, Knots in random neural networks [paper]
Christian Leibig and Siegfried Wahl, Discriminative Bayesian neural networks know what they do not know [paper]
Wolfgang Roth and Franz Pernkopf, Variational Inference in Neural Networks using an Approximate Closed-Form Objective [paper]
Jos van der Westhuizen and Joan Lasenby, Combining sequential deep learning and variational Bayes for semi-supervised inference [paper]
Daniel Hernandez-Lobato, Thang D. Bui, Yinzhen Li, Jose Miguel Hernandez-Lobato and Richard E. Turner, Importance Weighted Autoencoders with Uncertain Neural Network Parameters [paper]
Thomas N. Kipf and Max Welling, Variational Graph Auto-Encoders [paper]
Dmitry Molchanov, Arseniy Ashuha and Dmitry Vetrov, Dropout-based Automatic Relevance Determination [paper]
Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu and Eric P. Xing, Scalable GP-LSTMs with Semi-Stochastic Gradients [paper]
Eric Nalisnick, Lars Hertel and Padhraic Smyth, Approximate Inference for Deep Latent Gaussian Mixture Models [paper]
Dilin Wang, Yihao Feng and Qiang Liu, Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Training [paper]
Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez and Steffen Udluft, Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks
Kurt Cutajar, Edwin V. Bonilla, Pietro Michiardi and Maurizio Filippone, Accelerating Deep Gaussian Processes Inference with Arc-Cosine Kernels [paper]
Arthur Bražinskas, Serhii Havrylov and Ivan Titov, Embedding Words as Distributions with a Bayesian Skip-gram Model [paper]
Mohammad Emtiyaz Khan and Wu Lin, Variational Inference on Deep Exponential Family by using Variational Inferences on Conjugate Models [paper]
Akash Srivastava and Charles Sutton, Neural Variational Inference for Latent Dirichlet Allocation [paper]
Ajjen Joshi, Soumya Ghosh, Margrit Betke and Hanspeter Pfister, Hierarchical Bayesian Neural Networks for Personalized Classification [paper]
Balaji Lakshminarayanan, Alexander Pritzel and Charles Blundell, Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles [paper]
Jost Tobias Springenberg, Aaron Klein, Stefan Falkner and Frank Hutter, Asynchronous Stochastic Gradient MCMC with Elastic Coupling [paper]
Chris J. Maddison, Andriy Mnih and Yee Whye Teh, The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables [paper]
Ramon Oliveira, Pedro Tabacof and Eduardo Valle, Known Unknowns: Uncertainty Quality in Bayesian Neural Networks [paper]
Mevlana Gemici, Danilo Rezende and Shakir Mohamed, Normalizing Flows on Riemannian Manifolds [paper]
Pavel Myshkov and Simon Julier, Posterior Distribution Analysis for Bayesian Inference in Neural Networks [paper]
Yarin Gal, Riashat Islam and Zoubin Ghahramani, Deep Bayesian Active Learning with Image Data [paper]
Rui Shu, Hung Bui and Mohammad Ghavamzadeh, Bottleneck Conditional Density Estimators [paper]
Stefan Webb and Yee Whye Teh, A Tighter Monte Carlo Objective with Renyi alpha-Divergence Measures [paper]
Aaron Klein, Stefan Falkner, Jost Tobias Springenberg and Frank Hutter, Bayesian Neural Networks for Predicting Learning Curves [paper]
Tuan Anh Le, Atılım Güneş Baydin and Frank Wood, Nested Compiled Inference for Hierarchical Reinforcement Learning [paper]
Robert Loftin and David Roberts, Open Problems for Online Bayesian Inference in Neural Networks [paper]
Dustin Tran, Matt Hoffman, Kevin Murphy, Rif Saurous, Eugene Brevdo, and David Blei, Deep Probabilistic Programming [paper]
Matthew Hoffman, Markov Chain Monte Carlo for Deep Latent Gaussian Models [paper]
Amar Shah and Zoubin Ghahramani, Semi-supervised Active Learning with Deep Probabilistic Generative Models

Topics

  • Probabilistic deep models for classification and regression (such as extensions and applications of Bayesian neural networks),
  • Generative deep models (such as variational autoencoders),
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularization with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortized inference),
  • Model uncertainty in deep learning (see the sketch after this list),
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimization for experimental design,
  • Applying non-parametric methods, one-shot learning, and Bayesian deep learning in general.
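
To make the model-uncertainty topic above concrete, here is a minimal sketch of Monte Carlo dropout in the spirit of [5]: dropout is kept active at test time, and the spread across several stochastic forward passes serves as a rough measure of the model's confidence. This is an illustrative sketch assuming PyTorch; the model, layer sizes, and sample count are hypothetical choices, not taken from any workshop paper.

    # A minimal MC-dropout sketch (in the spirit of [5]); all sizes are illustrative.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(10, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),  # left active at test time for MC sampling
        nn.Linear(64, 1),
    )

    def mc_dropout_predict(model, x, n_samples=100):
        """Return the predictive mean and std over stochastic forward passes."""
        model.train()  # keeps dropout layers sampling; this toy model has no batch norm
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)

    x = torch.randn(5, 10)        # a toy batch of 5 inputs
    mean, std = mc_dropout_predict(model, x)
    print(mean.shape, std.shape)  # torch.Size([5, 1]) for both

Inputs far from the training data tend to produce a larger spread across passes, which is what makes this a cheap proxy for model uncertainty.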

Call for papers

We invite researchers to submit work in any of the following areas:

  • deep generative models,
  • variational inference using neural network recognition models,
  • practical approximate inference techniques in Bayesian neural networks,
  • applications of Bayesian neural networks,
  • information theory in deep learning,
  • or any of the topics listed above.

A submission should take the form of an extended abstract (roughly 2 pages long) in PDF format using the NIPS style. Author names do not need to be anonymised, and references may extend beyond the 2-page limit as needed. If the research has previously appeared in a journal, workshop, or conference (including the NIPS 2016 conference), the workshop submission should extend that previous work.

Submissions will be accepted as contributed talks or poster presentations. Extended abstracts should be submitted by 1 November 2016; the submission page is here. Final versions will be posted on the workshop website (they are archival but do not constitute formal proceedings).

Key Dates:

  • To be considered for an ISBA@NIPS travel award, an extended abstract must be submitted by 7 October 2016
  • Extended abstract submission deadline: 1 November 2016 (submission page is here)
  • Acceptance notification: 16 November 2016
  • Complimentary workshop registration award notification: 16 November 2016
  • Final paper submission: 5 December 2016

The workshop is endorsed by the International Society for Bayesian Analysis (ISBA), which will also provide a Travel Award to a graduate student or a junior researcher.

Travel Awards

Complimentary workshop registration

Several NIPS 2016 complimentary workshop registrations will be awarded to authors of accepted workshop submissions. These will be announced by 16 November 2016. Please register for the workshop early; award recipients will be reimbursed by NIPS for their workshop registration.

Congratulations to the recipients of the complimentary workshop registration: Eric Jang, Valerio Perrone, and Maruan Al-Shedivat.


ISBA@NIPS Travel Award

As part of the ISBA@NIPS 2016 initiative, the ISBA Program Council will grant two special ISBA Travel Awards of at most 700 USD. The organizers of ISBA-endorsed workshops at NIPS will be invited to propose candidates for the competition.

Eligibility:

  • The recipients should be graduate students or junior researchers (up to five years after graduation) who will be presenting at the workshop.
  • The recipients must be ISBA members at the time they receive the award.

To apply:

Submit an extended abstract, and send a current CV, your ISBA membership status, and a brief description (a few sentences) of your research as it relates to the theme of the workshop by 7 October 2016. Applications should be emailed to yg279 -at- cam.ac.uk with the subject "ISBA@NIPS Travel Award application". The winners will be recognized as ISBA@NIPS Travel Award recipients at the Workshops at NIPS and in the ISBA Bulletin.
The ISBA@NIPS Travel Award application deadline is 7 October 2016.

References

  1. Kingma, DP and Welling, M, "Auto-encoding variational Bayes", 2013.
  2. Rezende, D, Mohamed, S, and Wierstra, D, "Stochastic backpropagation and approximate inference in deep generative models", 2014.
  3. Blundell, C, Cornebise, J, Kavukcuoglu, K, and Wierstra, D, "Weight uncertainty in neural networks", 2015.
  4. Hernandez-Lobato, JM and Adams, R, "Probabilistic backpropagation for scalable learning of Bayesian neural networks", 2015.
  5. Gal, Y and Ghahramani, Z, "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", 2015.
  6. Gal, Y and Ghahramani, Z, "Bayesian convolutional neural networks with Bernoulli approximate variational inference", 2015.
  7. Kingma, D, Salimans, T, and Welling, M, "Variational dropout and the local reparameterization trick", 2015.
  8. Balan, AK, Rathod, V, Murphy, KP, and Welling, M, "Bayesian dark knowledge", 2015.
  9. Louizos, C and Welling, M, "Structured and efficient variational deep learning with matrix Gaussian posteriors", 2016.
  10. Lawrence, ND and Quinonero-Candela, J, "Local distance preservation in the GP-LVM through back constraints", 2006.
  11. Tran, D, Ranganath, R, and Blei, DM, "The variational Gaussian process", 2015.
  12. Neal, R, "Bayesian Learning for Neural Networks", 1996.
  13. MacKay, D, "A practical Bayesian framework for backpropagation networks", 1992.
  14. Dayan, P, Hinton, G, Neal, R, and Zemel, R, "The Helmholtz machine", 1995.