Schedule & Accepted Papers

Invited Speakers

This year's theme is the use of deep learning uncertainty in real-world applications; our invited speakers, listed in the schedule below, work on a range of such problems.

Schedule

8.00 - 8.05 Opening remarks: Yarin Gal (Oxford)
8.05 - 8.25 Invited talk: Frank Wood (UBC), "Challenges at the confluence of deep learning and probabilistic programming"
8.25 - 8.45 Invited talk: Dmitry Vetrov (Samsung AI Centre), "(Semi-)Implicit Modeling as New Deep Tool for Approximate Bayesian Inference"
8.45 - 9.00 Contributed talk: Matthias Bauer and Andriy Mnih, "Resampled Priors for Variational Autoencoders"
9.00 - 9.20 Invited talk: Debora Marks (Harvard Medical School), "Generative deep models for challenging biological problems"
9.20 - 9.40 Invited talk: Harri Valpola (Curious AI Company), "Estimating uncertainty for model-based reinforcement learning"
9.40 - 9.55 Poster spotlights
9.55 - 10.55 Discussion over coffee and poster session
10.55 - 11.15 Invited talk: Christian Leibig (Tuebingen), "Leveraging (Bayesian) uncertainty information: opportunities and failure modes"
11.15 - 11.30 Contributed talk: Dan Rosenbaum, Frederic Besse, Fabio Viola, Danilo Rezende and Ali Eslami, "Learning models for visual 3D localization with implicit mapping"
11.30 - 11.50 Invited talk: Balaji Lakshminarayanan (DeepMind), "Probabilistic model ensembles for predictive uncertainty estimation"
11.50 - 13.20 Lunch
13.20 - 13.40 Invited talk: Sergey Levine (Berkeley), "Control as Inference: a Connection Between Reinforcement Learning and Graphical Models"
13.40 - 13.55 Contributed talk: Frank Soboczenski, Michael D. Himes, Molly D. O'Beirne, Simone Zorzan, Atılım Günes Baydin, Adam D. Cobb, Daniel Angerhausen, Giada N. Arney and Shawn D. Domagal-Goldman, "Bayesian Deep Learning for Exoplanet Atmospheric Retrieval"
13.55 - 14.10 Invited talk: Yashar Hezaveh (Stanford), "Mapping the most distant galaxies of the universe with Bayesian neural networks"
14.10 - 14.30 Invited talk: Tim Genewein (Bosch Center for AI), "A Bayesian view on neural network compression"
14.30 - 15.30 Discussion over coffee and poster session
15.30 - 15.50 Invited talk: David Sontag (MIT), "Bayesian deep learning for healthcare"
15.50 - 16.05 Contributed talk: Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals and Yee Whye Teh, "Attentive Neural Processes"
16.05 - 16.25 Invited talk: Yarin Gal (Oxford), "Bayesian Deep Learning in Self-Driving Cars (and more)"
16.30 - 17.30 Panel session. Panellists:
Sergey Levine
Debora Marks
Frank Wood
Yarin Gal
Harri Valpola
Christian Leibig
Yashar Hezaveh
Balaji Lakshminarayanan
David Sontag
Moderator: Neil Lawrence
17.30 - 19.30 Poster session

Accepted Abstracts

Matthias Bauer and Andriy Mnih, "Resampled Priors for Variational Autoencoders" [paper]
Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander Alemi and George Tucker, "On variational lower bounds of mutual information" [paper]
Adrià Garriga-Alonso, Carl E. Rasmussen and Laurence Aitchison, "Deep Convolutional Networks as shallow Gaussian Processes" [paper]
Gintare Karolina Dziugaite, Gabriel Arpino and Daniel Roy, "Towards generalization guarantees for SGD: Data-dependent PAC-Bayes priors" [paper]
Bo Dai, Hanjun Dai, Niao He, Arthur Gretton, Le Song and Dale Schuurmans, "Exponential Family Estimation via Dynamics Embedding" [paper]
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur and Balaji Lakshminarayanan, "Do Deep Generative Models Know What They Don't Know?" [paper]
Ke Li and Jitendra Malik, "Implicit Maximum Likelihood Estimation" [paper]
Emile Mathieu, Tom Rainforth, Siddharth Narayanaswamy and Yee Whye Teh, "Disentangling Disentanglement" [paper]
George Tucker, Dieterich Lawson, Shane Gu and Chris J. Maddison, "Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives" [paper]
Chen Zeno, Itay Golan, Elad Hoffer and Daniel Soudry, "Task Agnostic Continual Learning Using Online Variational Bayes" [paper]
Xu Hu, Pablo Garcia Moreno, Neil Lawrence and Andreas Damianou, "β-BNN: A Rate-Distortion Perspective on Bayesian Neural Networks" [paper]
Mariia Vladimirova, Julyan Arbel and Pablo Mesejo, "Bayesian neural networks become heavier-tailed with depth" [paper]
Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals and Yee Whye Teh, "Attentive Neural Processes" [paper]
Salvator Lombardo, Jun Han, Christopher Schroers and Stephan Mandt, "Video Compression through Deep Bayesian Learning" [paper]
Emily Fertig, Aryan Arbabi and Alex Alemi, "β-VAEs can retain label information even at high compression" [paper]
Hamid Eghbal-Zadeh, Werner Zellinger and Gerhard Widmer, "Mixture Density Generative Adversarial Networks" [paper]
Pengyu Cheng, Chang Liu, Chunyuan Li, Dinghan Shen, Ricardo Henao and Lawrence Carin, "Straight-Through Estimator as Projected Wasserstein Gradient Flow" [paper]
Fabio Viola and Danilo Rezende, "Generalized ELBO with Constrained Optimization, GECO" [paper]
Chao Ma, Yingzhen Li and Jose Miguel Hernandez Lobato, "Variational Implicit Processes" [paper]
Tom Ryder, Dennis Prangle, Andy Golightly and Stephen McGough, "Black-Box Autoregressive Density Estimation for State-Space Models" [paper]
Mingzhang Yin, "Semi-implicit generative model" [paper]
Conor Durkan, George Papamakarios and Iain Murray, "Sequential Neural Methods for Likelihood-free Inference" [paper]
Florian Wenzel, Alexander Buchholz and Stephan Mandt, "Quasi-Monte Carlo Flows" [paper]
Kento Nozawa and Issei Sato, "PAC-Bayes Analysis of Transferred Sentence Vectors" [paper]
Iryna Korshunova, Yarin Gal, Joni Dambre and Arthur Gretton, "Conditional BRUNO: A Deep Recurrent Process for Exchangeable Labelled Data" [paper]
Ranganath Krishnan, Mahesh Subedar and Omesh Tickoo, "BAR: Bayesian Activity Recognition using variational inference" [paper]
Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin and Richard Turner, "Versa: Versatile and Efficient Few-shot Learning" [paper]
Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li and Jinwoo Shin, "Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks" [paper]
Dejiao Zhang, Tianchen Zhao and Laura Balzano, "Information Maximization Auto-Encoding" [paper]
Anusha Lalitha, Shubhanshu Shekhar, Tara Javidi and Farinaz Koushanfar, "Fully Decentralized Federated Learning" [paper]
Maxime Wabartha, Audrey Durand, Vincent François-Lavet and Joelle Pineau, "Sampling diverse neural networks for exploration in reinforcement learning" [paper]
Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency and Ruslan Salakhutdinov, "Learning Multimodal Representations using Factorized Deep Generative Models" [paper]
Samuel Smith, Daniel Duckworth, Semon Rezchikov, Quoc Le and Jascha Sohl-Dickstein, "Stochastic natural gradient descent draws posterior samples in function space" [paper]
Chao Ma, Jose Miguel Hernandez Lobato, Noam Koenigstein, Sebastian Nowozin and Cheng Zhang, "Partial VAE for Hybrid Recommender System" [paper]
Pranav Shyam, Wojciech Jaśkowski and Faustino Gomez, "Model-Based Active Exploration" [paper]
Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Daniel Abolafia, Jeffrey Pennington and Jascha Sohl-Dickstein, "Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes" [paper]
Fabio Viola and Danilo Rezende, "On the properties of high-capacity VAEs" [paper]
Jishnu Mukhoti, Pontus Stenetorp and Yarin Gal, "On the Importance of Strong Baselines in Bayesian Deep Learning" [paper]
Pierre-Alexandre Mattei and Jes Frellsen, "Refit your Encoder when New Data Comes by" [paper]
Xinyu Hu, Paul Szerlip, Theofanis Karaletsos and Rohit Singh, "Applying SVGD to Bayesian Neural Networks for Cyclical Time-Series Prediction and Inference" [paper]
Ananya Kumar, Ali Eslami, Danilo Rezende, Marta Garnelo, Fabio Viola, Edward Lockhart and Murray Shanahan, "Consistent Jumpy Predictions for Videos and Scenes" [paper]
Christian Henning, Johannes von Oswald, Joao Sacramento, Jean-Pascal Pfister and Benjamin F. Grewe, "Approximating the Predictive Distribution via Adversarially-Trained Hypernetworks" [paper]
Ivan Ovinnikov, "Poincaré Wasserstein Autoencoder" [paper]
Shengyang Sun, Guodong Zhang, Jiaxin Shi and Roger Grosse, "Functional Variational Bayesian Neural Networks" [paper]
Da Tang, Dawen Liang and Tony Jebara, "Correlated Variational Auto-Encoders" [paper]
Jiawei He, Yu Gong, Joseph Marino, Greg Mori and Andreas Lehrmann, "Variational Latent Dependency Learning" [paper]
Tim Pearce, "Bayesian Neural Network Ensembles" [paper]
Artur Bekasov and Iain Murray, "Bayesian Adversarial Spheres: Bayesian Inference and Adversarial Examples in a Noiseless Setting" [paper]
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur and Balaji Lakshminarayanan, "Hybrid Models with Deep and Invertible Features" [paper]
Andrey Malinin and Mark Gales, "Prior Networks for Detection of Adversarial Attacks" [paper]
Seb Farquhar and Yarin Gal, "A Unifying View of Bayesian Continual Learning" [paper]
Micha Livne and David Fleet, "TzK Flow - Conditional Generative Model" [paper]
Jason Ramapuram, Alexandros Kalousis, Russ Webb and Maurits Diephuis, "Variational Saccading: Efficient Inference for Large Resolution Images" [paper]
Stanislav Fort, "Machine learning approach to detection and characterization of X-ray cavities in clusters of galaxies" [paper]
Yibo Yang and Paris Perdikaris, "Physics-informed deep generative models" [paper]
Kumar Sricharan and Ashok Srivastava, "Building robust classifiers through generation of confident out of distribution examples" [paper]
Benjamin Bloem-Reddy and Yee Whye Teh, "Neural network models of exchangeable sequences" [paper]
Ghassen Jerfel, Erin Grant, Thomas L. Griffiths and Katherine A. Heller, "Stochastic Gradient-Based Mixture Models for Transfer Modulation in Meta-Learning" [paper]
Kurt Cutajar, Mark Pullin, Andreas Damianou, Neil Lawrence and Javier Gonzalez, "Deep Gaussian Processes for Multi-fidelity Modeling" [paper]
Eric Nalisnick and José Miguel Hérnandez-Lobato, "Automatic Depth Determination for Bayesian ResNets" [paper]
Artem Sobolev and Dmitry Vetrov, "Importance Weighted Hierarchical Variational Inference" [paper]
Kashyap Chitta, Jose M. Alvarez and Adam Lesnikowski, "Deep Probabilistic Ensembles: Approximate Variational Inference through KL Regularization" [paper]
Dan Rosenbaum, Frederic Besse, Fabio Viola, Danilo Rezende and Ali Eslami, "Learning models for visual 3D localization with implicit mapping" [paper]
Dieterich Lawson, George Tucker, Christian Naesseth, Chris Maddison, Ryan Adams and Yee Whye Teh, "Twisted Variational Sequential Monte Carlo" [paper]
Mahdi Karami, Laurent Dinh, Daniel Duckworth, Jascha Sohl-Dickstein and Dale Schuurmans, "Generative Convolutional Flow for Density Estimation" [paper]
Adam Kortylewski, Mario Wieser, Andreas Morel-Forster, Aleksander Wieczorek, Sonali Parbhoo, Volker Roth and Thomas Vetter, "Informed MCMC with Bayesian Neural Networks for Facial Image Analysis" [paper]
Patrick Dallaire and Francois Laviolette, "Bayesian Nonparametric Deep Learning" [paper]
Zihao Zhang, Stefan Zohren and Stephen Roberts, "BDLOB: Bayesian Deep Convolutional Neural Networks for Limit Order Books" [paper]
Pierre-Alexandre Mattei and Jes Frellsen, "missIWAE: Deep Generative Modelling and Imputation of Incomplete Data Sets" [paper]
Danielle Maddix, Yuyang Wang and Alex Smola, "Deep Factors with Gaussian Processes for Forecasting" [paper]
Prasanna Sattigeri, Soumya Ghosh, Abhishek Kumar, Karthikeyan Ramamurthy, Samuel Hoffman, Youssef Drissi and Inkit Padhi, "Probabilistic Mixture of Model-Agnostic Meta-Learners" [paper]
Frank Soboczenski, Michael D. Himes, Molly D. O'Beirne, Simone Zorzan, Atılım Günes Baydin, Adam D. Cobb, Daniel Angerhausen, Giada N. Arney and Shawn D. Domagal-Goldman, "Bayesian Deep Learning for Exoplanet Atmospheric Retrieval" [paper]
Albert Shaw, Bo Dai, Weiyang Liu and Le Song, "Bayesian Meta-network Architecture Learning" [paper]
Marcel Nassar, Xin Wang and Evren Tumer, "Conditional Graph Neural Processes: A Functional Autoencoder Approach" [paper]
Victor Gallego and David Rios, "Stochastic Gradient MCMC with Repulsive Forces" [paper]
Daniel Flam-Shepherd, James Requiema and David Duvenaud, "Characterizing and Warping the Function space of Bayesian Neural Networks" [paper]
Qiang Zhang, Shangsong Liang and Emine Yilmaz, "Variational Self-attention Model for Sentence Representation" [paper]
Weiwei Pan, Melanie Fernandez Pradier, Jiayu Yao, Finale Doshi-Velez and Soumya Ghosh, "Projected BNNs: Avoiding weight-space pathologies by projecting neural network weights" [paper]
Buu Phan, Rick Salay, Krzysztof Czarnecki, Vahdat Abdelzad, Taylor Denouden and Sachin Vernekar, "Calibrating Uncertainties in Object Localization Task" [paper]
Tim Georg Johann Rudner, Vincent Fortuin, Yee Whye Teh and Yarin Gal, "On the Connection between Neural Processes and Gaussian Processes with Deep Kernels" [paper]
Kyle Cranmer, Stefan Gadatsch, Aishik Ghosh, Tobias Golling, Gilles Louppe, David Rousseau, Dalila Salamani and Graeme Stewart, "Deep generative models for fast shower simulation in ATLAS" [paper]
Ziyin Liu, Junxiang Chen, Paul Pu Liang and Masahito Ueda, "Relational Attention Networks via Fully-Connected Conditional Random Fields" [paper]
Adam Foster, Martin Jankowiak, Eli Bingham, Yee Whye Teh, Tom Rainforth and Noah Goodman, "Variational Optimal Experiment Design: Efficient Automation of Adaptive Experiments" [paper]
Natasa Tagasovska and David Lopez-Paz, "Frequentist uncertainty estimates for deep learning" [paper]
Valery Kharitonov, Dmitry Molchanov and Dmitry Vetrov, "Variational Dropout via Empirical Bayes" [paper]
Thang Bui, Cuong Nguyen, Siddharth Swaroop and Richard Turner, "Partitioned Variational Inference for Federated Bayesian Deep Learning" [paper]
Gregory Gundersen, Bianca Dumitrascu, Jordan Ash and Barbara Engelhardt, "End-to-end training of deep probabilistic CCA for joint modeling of paired biomedical observations" [paper]
Beliz Gokkaya, Jessica Ai, Michael Tingley, Yonglong Zhang, Ning Dong, Thomas Jiang, Anitha Kubendran and Arun Kumar, "Bayesian Neural Networks using HackPPL with Application to User Location State Prediction" [paper]
J. Jon Ryu, Young-Han Kim, Yoojin Choi, Mostafa El-Khamy and Jungwon Lee, "Variational Inference via a Joint Latent Variable Model with Common Information Extraction" [paper]
Ziyin Liu, Yao-Hung Hubert Tsai, Makoto Yamada and Ruslan Salakhutdinov, "Semi-Supervised Pairing via Basis-Sharing Wasserstein Matching Auto-Encoder" [paper]
Arunesh Mittal, Paul Sajda and John Paisley, "Deep Bayesian Nonparametric Factor Analysis" [paper]
Sophie Burkhardt, Julia Siekiera and Stefan Kramer, "Semi-Supervised Bayesian Active Learning for Text Classification" [paper]
Tal Kachman, Michal Moshkovitz and Michal Rosen-Zvi, "Novel Uncertainty Framework for Deep Learning Ensembles" [paper]
Xuechen Li and Will Grathwohl, "Training Glow with Constant Memory Cost" [paper]
Martin Jankowiak, "Closed Form Variational Objectives For Bayesian Neural Networks with a Single Hidden Layer" [paper]
Belhal Karimi and Eric Moulines, "MISSO: Minimization by Incremental Stochastic Surrogate for large-scale nonconvex Optimization" [paper]
Pashupati Hegde, Markus Heinonen, Harri Lähdesmäki and Samuel Kaski, "Deep learning with differential Gaussian process flows" [paper]
Ryan Turner, Jane Hung, Jason Yosinski and Yunus Saatci, "Metropolis-Hastings GANs" [paper]
Juhan Bae, Guodong Zhang and Roger Grosse, "Eigenvalue Corrected Noisy Natural Gradient" [paper]
Cusuh Ham, Amit Raj, Vincent Cartillier and Irfan Essa, "Variational Image Inpainting" [paper]
Jaehoon Lee, Lechao Xiao, Jascha Sohl-Dickstein and Jeffrey Pennington, "Gaussian Predictions from Gradient Descent Training of Wide Neural Networks" [paper]
Eugene Golikov and Maksim Kretov, "Embedding-reparameterization trick for manifold-valued latent variables in generative models" [paper]
Fábio Perez, Rémi Lebret and Karl Aberer, "Cluster-Based Active Learning" [paper]
Maksim Kuznetsov, Daniil Polykovskiy, Dmitry Vetrov and Alexander Zhebrak, "Subset-Conditioned Generation Using Variational Autoencoder With A Learnable Tensor-Train Induced Prior" [paper]
Remus Pop and Patric Fulop, "Deep Ensemble Bayesian Active Learning" [paper]
Tuan Anh Le, Hyunjik Kim, Marta Garnelo, Dan Rosenbaum, Jonathan Schwarz and Yee Whye Teh, "Empirical Evaluation of Neural Process Objectives" [paper]
Bin Dai and David Wipf, "Diagnosing and Enhancing Gaussian VAE Models" [paper]
Jiaming Zeng, Adam Lesnikowski and Jose Alvarez, "The Relevance of Bayesian Layer Positioning to Model Uncertainty in Deep Bayesian Active Learning" [paper]
Rui Zhao and Qiang Ji, "An Empirical Evaluation of Bayesian Inference Methods for Bayesian Neural Networks" [paper]
Maithra Raghu, Katy Blumer, Rory Sayres, Ziad Obermeyer, Sendhil Mullainathan and Jon Kleinberg, "Direct Uncertainty Prediction for Medical Second Opinions" [paper]
Ershad Banijamali, Amir-Hossein Karimi and Ali Ghodsi, "Deep Variational Sufficient Dimensionality Reduction" [paper]
Chanwoo Park, Jae Myung Kim, Seok Hyeon Ha and Jungwoo Lee, "A simple method for predictive uncertainty estimation using gradient uncertainty" [paper]
Luca Ambrogioni, Umut Guclu, Yagmur Gucluturk and Marcel van Gerven, "Wasserstein Variational Gradient Descent: From Semi-Discrete Optimal Transport to Ensemble Variational Inference" [paper]
Kumar Sricharan, Kumar Saketh and Ashok Srivastava, "Improving robustness of classifiers by training against live traffic" [paper]
Daniel Flam-Shepherd, Yuxiang Gao and Zhaoyu Guo, "Stick-Breaking Neural Latent Variable Models" [paper]
Lavanya Sita Tekumalla, Priyanka Agrawal and Indrajit Bhattacharya, "Deep Nested Hierarchical Dirichlet Processes" [paper]
Matt Benatan and Edward Pyzer-Knapp, "Practical Considerations for Probabilistic Backpropagation" [paper]
Noah Weber, Janez Starc, Arpit Mittal, Roi Blanco and Lluis Marquez, "Optimizing over a Bayesian Last Layer" [paper]
Mahmoud Elnaggar, Kamin Whitehouse and Cody Fleming, "Bayesian Wireless Channel Prediction for Safety-Critical Connected Autonomous Vehicles" [paper]
Siddhartha Jain, Ge Liu and Jonas Mueller, "Maximizing Overall Diversity to Control Out-of-Distribution Behavior of Deep Ensembles" [paper]
Siddhartha Jain and Nathan Hunt, "Approximate Mutual Information-based Acquisition for General Models in Bayesian Optimization" [paper]
Navneet Madhu Kumar, "Empowerment-driven Exploration using Mutual Information Estimation" [paper]
Yifeng Li and Xiaodan Zhu, "Capsule Restricted Boltzmann Machine" [paper]
Matthew Willetts, Aiden Doherty, Stephen J. Roberts and Christopher C. Holmes, "Semi-unsupervised Learning using Deep Generative Models" [paper]
Danil Kuzin, Olga Isupova and Lyudmila Mihaylova, "Uncertainty propagation in neural networks for sparse coding" [paper]
Jovana Mitrovic, Peter Wirnsberger, Charles Blundell, Dino Sejdinovic and Yee Whye Teh, "Infinitely Deep Infinite-Width Networks" [paper]
Daniel Park, Samuel Smith, Jascha Sohl-Dickstein and Quoc Le, "Optimal SGD Hyperparameters for Fully Connected Networks" [paper]
Sambarta Dasgupta, Kumar Sricharan and Ashok Srivastava, "Finite Rank Deep Kernel Learning" [paper]
Dustin Tran, Mike Dusenberry, Mark van der Wilk and Danijar Hafner, "Bayesian Layers" [paper]
Michael Tschannen, Mario Lucic and Olivier Bachem, "Recent Advances in Autoencoder-Based Representation Learning" [paper]
Mahmoud Hossam, Trung Le, Viet Huynh and Dinh Phung, "Text Generation with Deep Variational GAN" [paper]
Lisha Chen and Qiang Ji, "Kernel Density Network for Quantifying Uncertainty in Face Alignment" [paper]
Alexander Sagel and Martin Gottwald, "Never Mind the Density, Here's the Level Set" [paper]
Yilun Du and Igor Mordatch, "Implicit Generation and Representation Learning with Energy Based Models" [paper]
Brian Trippe, Jonathan Huggins and Tamara Broderick, "Fast Bayesian Inference in GLMs with Low Rank Data Approximations" [paper]
Rita Kuznetsova, Oleg Bakhteev and Alexander Ogaltsov, "Variational learning across domains with triplet information" [paper]

Awards

Complimentary workshop registration

Several NIPS 2018 complimentary workshop registrations will be awarded to authors of accepted workshop submissions. These will be announced by 16 November 2018. Award recipients will be reimbursed by NIPS for their workshop registration. Further workshop endorsements and travel awards to junior researchers will be updated on the workshop website.


Sponsorship Travel Awards

We have secured sponsorship from Google and Qualcomm, together providing 5 travel awards for junior researchers. Each travel award will be given to a selected submission based on reviewer recommendation, and recipients will be announced at the workshop. We are deeply grateful to our sponsors this year.

Call for papers

We invite researchers to submit work in any of the following areas:

  • applications of Bayesian deep learning,
  • deep generative models,
  • variational inference using neural network recognition models,
  • practical approximate inference techniques in Bayesian neural networks,
  • applications of Bayesian neural networks,
  • information theory in deep learning,
  • or any of the topics below.

A submission should take the form of a 3-page extended abstract in PDF format using the NIPS style. Author names do not need to be anonymised, and references may extend beyond the 3-page limit as far as needed. Submissions may also extend beyond the 3-page limit, but reviewers are not expected to read past the first 3 pages. If the research has previously appeared in a journal, workshop, or conference (including the NIPS 2018 conference), the workshop submission should extend that previous work. Parallel submissions (for example, to ICLR) are permitted.

Submissions will be accepted as contributed talks or poster presentations. Extended abstracts should be submitted by Friday 2 November 2018; the submission page is now closed. Final versions will be posted on the workshop website (and are archival, but do not constitute a proceedings).

Key Dates:

  • Extended abstract submission deadline: Friday 2 November 2018 (midnight AOE) (submission page is closed)
  • Acceptance notification: 16 November 2018
  • Camera ready submission: 30 November 2018
  • Workshop: 7 December 2018

We will do our best to guarantee workshop registration for all accepted workshop submissions; we have multiple workshop tickets reserved for accepted submissions. Complimentary registrations and travel awards are described in the Awards section above.

Abstract

While deep learning has been revolutionary for machine learning, most modern deep learning models cannot represent their uncertainty, nor can they take advantage of the well-studied tools of probability theory. This has started to change following recent developments of tools and techniques combining Bayesian approaches with deep learning. The intersection of the two fields has received great interest from the community over the past few years, with the introduction of new deep learning models that take advantage of Bayesian techniques, as well as Bayesian models that incorporate deep learning elements [1-11]. In fact, the use of Bayesian techniques in deep learning can be traced back to the 1990s, in seminal works by Radford Neal [12], David MacKay [13], and Dayan et al. [14]. These gave us tools to reason about deep models' confidence, and achieved state-of-the-art performance on many tasks. However, these early tools did not adapt when new needs arose (such as scalability to big data), and were consequently forgotten. Such ideas are now being revisited in light of new advances in the field, yielding many exciting new results.
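One of the tools revisited in this line of work is dropout-based uncertainty [5]: keeping dropout active at test time and averaging repeated stochastic forward passes yields an approximate predictive mean together with an uncertainty estimate. A minimal NumPy sketch of the idea follows; the toy network, weights, and layer sizes are made up purely for illustration, and in practice the network would of course be trained first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed, made-up weights (illustration only).
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept on at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=200):
    """Predictive mean and spread from repeated stochastic passes."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5, -1.0, 2.0]])
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)  # (1, 1) (1, 1)
```

The point of the sketch is only to show how disagreement across stochastic passes is turned into a per-prediction uncertainty estimate, one of the quantities many of the works above reason about.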

Building on the success of last year's workshop, this workshop will again study the advantages and disadvantages of such ideas, and will be a platform to host the recent flourish of ideas using Bayesian approaches in deep learning and deep learning tools in Bayesian modelling. The program includes a mix of invited talks, contributed talks, and contributed posters, organised around five themes: deep generative models, variational inference using neural network recognition models, practical approximate inference techniques in Bayesian neural networks, applications of Bayesian neural networks, and information theory in deep learning. Future directions for the field will be debated in a panel discussion.

This year's main theme is applications of Bayesian deep learning, both within machine learning and beyond it.

Previous workshops:

Our 2017 workshop page is available here; our 2016 workshop page is available here, and videos from the 2016 workshop are available online as well.

Topics

  • Applications of Bayesian deep learning,
  • Probabilistic deep models for classification and regression (such as extensions and application of Bayesian neural networks),
  • Generative deep models (such as variational autoencoders),
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularization with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortized inference),
  • Model uncertainty in deep learning,
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimization for experimental design,
  • Information theory in deep learning,
  • Kernel methods in Bayesian deep learning,
  • Implicit inference,
  • Applying non-parametric methods, one-shot learning, and Bayesian deep learning in general.

References

  1. Kingma, DP and Welling, M, "Auto-encoding variational Bayes", 2013.
  2. Rezende, D, Mohamed, S, and Wierstra, D, "Stochastic backpropagation and approximate inference in deep generative models", 2014.
  3. Blundell, C, Cornebise, J, Kavukcuoglu, K, and Wierstra, D, "Weight uncertainty in neural network", 2015.
  4. Hernandez-Lobato, JM and Adams, R, "Probabilistic backpropagation for scalable learning of Bayesian neural networks", 2015.
  5. Gal, Y and Ghahramani, Z, "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", 2015.
  6. Gal, Y and Ghahramani, Z, "Bayesian convolutional neural networks with Bernoulli approximate variational inference", 2015.
  7. Kingma, D, Salimans, T, and Welling, M, "Variational dropout and the local reparameterization trick", 2015.
  8. Balan, AK, Rathod, V, Murphy, KP, and Welling, M, "Bayesian dark knowledge", 2015.
  9. Louizos, C and Welling, M, "Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors", 2016.
  10. Lawrence, ND and Quinonero-Candela, J, "Local distance preservation in the GP-LVM through back constraints", 2006.
  11. Tran, D, Ranganath, R, and Blei, DM, "Variational Gaussian Process", 2015.
  12. Neal, R, "Bayesian Learning for Neural Networks", 1996.
  13. MacKay, D, "A practical Bayesian framework for backpropagation networks", 1992.
  14. Dayan, P, Hinton, G, Neal, R, and Zemel, R, "The Helmholtz machine", 1995.
  15. Wilson, AG, Hu, Z, Salakhutdinov, R, and Xing, EP, "Deep Kernel Learning", 2016.
  16. Saatchi, Y and Wilson, AG, "Bayesian GAN", 2017.
  17. MacKay, DJC, "Bayesian Methods for Adaptive Models", PhD thesis, 1992.