Schedule & Accepted Papers

Slides for the invited talks are now online (see the [slides] links below), as are the workshop video recordings.

Schedule

8.00 - 8.05 Opening remarks
8.05 - 8.25 Invited talk: Alexander G. de G. Matthews (DeepMind) [slides], "Gaussian Process Behaviour in Wide Deep Neural Networks"
8.25 - 8.40 Contributed talk: Tim G. J. Rudner, Florian Wenzel and Yarin Gal, "The Natural Neural Tangent Kernel: Neural Network Training Dynamics under Natural Gradient Descent"
8.40 - 9.00 Invited talk: Yingzhen Li (Microsoft Research) [slides], "On estimating epistemic uncertainty" (tentative title)
9.00 - 9.15 Contributed talk: Stanislav Fort, Huiyi Hu and Balaji Lakshminarayanan, "Deep Ensembles: A Loss Landscape Perspective"
9.15 - 9.30 Poster spotlights
9.30 - 10.30 Discussion over coffee and poster session
10.30 - 10.50 Invited talk: Andrew Gordon Wilson (NYU) [slides], "Using Loss Surface Geometry for Scalable Bayesian Deep Learning"
10.50 - 11.05 Contributed talk: Abhishek Kumar and Ben Poole, "On Implicit Regularization in β-VAE"
11.05 - 11.25 Invited talk: Jasper Snoek (Google) [slides], "Uncertainty under distributional shift"
11.25 - 11.40 Contributed talk: Sicong Huang, Alireza Makhzani, Yanshuai Cao and Roger Grosse, "Evaluating Lossy Compression Rates of Deep Generative Models"
11.40 - 13.20 Lunch
13.20 - 13.40 Invited talk: Chelsea Finn (Google Brain / Berkeley / Stanford) [slides], "The Big Problem with Meta-Learning and How Bayesians Can Fix It"
13.40 - 13.55 Contributed talk: Riccardo Moriconi, Marc Peter Deisenroth and Senanayak Sesh Kumar Karri, "High-dimensional Bayesian optimization using low-dimensional feature spaces"
13.55 - 14.15 Invited talk: Roger Grosse (Toronto) [slides], "Functional variational Bayesian neural networks"
14.15 - 14.30 Contributed talk: Sebastian Farquhar, Lewis Smith and Yarin Gal, "Try Depth Instead of Weight Correlations: Mean-field is a Less Restrictive Assumption for Variational Inference in Deep Networks"
14.30 - 15.30 Discussion over coffee and poster session
15.30 - 15.45 Contributed talk: Andrew Ross, Jianzhun Du, Yonadav Shavit and Finale Doshi-Velez, "Controlled Direct Effect Priors for Bayesian Neural Networks"
15.45 - 16.05 Invited talk: Debora Marks (Harvard Medical School / Broad Institute) [slides], "Deep generative models for genetic variation and drug design"
16.05 - 16.20 Contributed talk: Mihaela Rosca, Michael Figurnov, Shakir Mohamed and Andriy Mnih, "Measure Valued Derivatives for Approximate Bayesian Inference"
16.20 - 17.30 Panel Session. Panellists:
Debora Marks
Jasper Snoek
Chelsea Finn
Andrew Gordon Wilson
Yingzhen Li
Roger Grosse
Alexander G. de G. Matthews
Moderator: Finale Doshi-Velez
17.30 - 19.30 Poster session

Accepted Abstracts

We have added all camera-ready submissions sent to us by December 11, 2019. If a paper is not online, please contact the lead author and encourage them to send us the camera-ready version.

Andrew Y. K. Foong, David R. Burt, Yingzhen Li and Richard E. Turner, "Pathologies of Factorised Gaussian and MC Dropout Posteriors in Bayesian Neural Networks" [paper]
Paulo Rauber, Aditya Ramesh and Jürgen Schmidhuber, "Recurrent neural-linear posterior sampling for non-stationary bandits" [paper]
Andreas Look and Melih Kandemir, "Differential Bayesian Neural Nets" [paper]
Matt Benatan and Edward Pyzer-Knapp, "Fully Bayesian Recurrent Neural Networks for Safe Reinforcement Learning" [paper]
Ruiyi Zhang, Changyou Chen, Zhe Gan, Zheng Wen and Lawrence Carin, "Nested-Wasserstein Self-Imitation Learning for Sequence Generation" [paper]
Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon and Jose Miguel Hernandez-Lobato, "Refining the variational posterior through iterative optimization" [paper]
Hlynur Jonsson, Giovanni Cherubini and Evangelos Eleftheriou, "Convergence of DNNs with mutual-information-based regularization" [paper]
Fredrik K. Gustafsson, Martin Danelljan and Thomas B. Schön, "Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision" [paper]
Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud de Kroon and Yarin Gal, "A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks" [paper]
James Brofos, Rui Shu and Roy Lederman, "A Bias-Variance Decomposition for Bayesian Deep Learning" [paper]
Nabeel Seedat and Christopher Kanan, "Towards calibrated and scalable uncertainty representations for neural networks" [paper]
Otmane Sakhi, Stephen Bonner, David Rohde and Flavian Vasile, "Reconsidering analytical variational bounds for output layers of deep networks" [paper]
Micha Livne, Kevin Swersky and David Fleet, "High Mutual Information in Representation Learning with Symmetric Variational Inference" [paper]
Hao Fu, Chunyuan Li, Ke Bai, Jianfeng Gao and Lawrence Carin, "Flexible Text Modeling with Semi-Implicit Latent Representations" [paper]
Suwen Lin, Martin Wistuba, Ambrish Rawat and Nitesh Chawla, "Neural Tree Kernel Learning" [paper]
Ivan Ustyuzhaninov, Ieva Kazlauskaite, Markus Kaiser, Erik Bodin, Carl Henrik Ek and Neill Campbell, "Compositional uncertainty in deep Gaussian processes" [paper]
Adam D Cobb, Atılım Güneş Baydin, Ivan Kiskin, Andrew Markham and Stephen Roberts, "Semi-separable Hamiltonian Monte Carlo for inference in Bayesian neural networks" [paper]
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh and Balaji Lakshminarayanan, "Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality" [paper]
Felix McGregor, Arnu Pretorius, Johan du Preez and Steve Kroon, "Stabilising priors for robust Bayesian deep learning" [paper]
John Moberg, Lennart Svensson, Juliano Pinto and Henk Wymeersch, "Bayesian Linear Regression on Deep Representations" [paper]
Bang An, Xuannan Dong and Changyou Chen, "Repulsive Bayesian Sampling for Diversified Attention Modeling" [paper]
Colin White, Willie Neiswanger and Yash Savani, "Deep Uncertainty Estimation for Model-based Neural Architecture Search" [paper]
Vincent Fortuin and Gunnar Rätsch, "Deep Mean Functions for Meta-Learning in Gaussian Processes" [paper]
Anirudh Suresh and Srivatsan Srinivasan, "Improved Attentive Neural Processes" [paper]
Patrick McClure, Nao Rho, John Lee, Jakub Kaczmarzyk, Charles Zheng, Satrajit Ghosh, Dylan Nielson, Adam Thomas, Peter Bandettini and Francisco Pereira, "Improving 3D Brain Segmentation using a Spike-and-Slab Bayesian Deep Neural Network" [paper]
Tim R. Davidson, Jakub M. Tomczak and Efstratios Gavves, "Increasing Expressivity of a Hyperspherical VAE" [paper]
Gaurush Hiranandani, Sumeet Katariya, Nikhil Rao and Karthik Subbian, "Online Bayesian Learning for E-Commerce Query Reformulation" [paper]
Sanjeev Arora, Simon Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang and Dingli Yu, "On the Power of NTK on Small Data" [paper]
Prithvijit Chakrabarty and Subhransu Maji, "The Spectral Bias of the Deep Image Prior" [paper]
Masha Itkina, Boris Ivanovic, Ransalu Senanayake, Mykel Kochenderfer and Marco Pavone, "Evidential Disambiguation of Latent Multimodality in Conditional Variational Autoencoders" [paper]
Simone Rossi, Sébastien Marmin and Maurizio Filippone, "Efficient Approximate Inference with Walsh-Hadamard Variational Inference" [paper]
Didrik Nielsen and Ole Winther, "PixelCNN as a Single-Layer Flow" [paper]
Changyong Oh, Kamil Adamczewski and Mijung Park, "The Radial and Directional Posteriors for Bayesian Deep Learning" [paper]
William Harvey, Michael Teng and Frank Wood, "Near-Optimal Glimpse Sequences for Training Hard Attention Neural Networks" [paper]
Matias Valdenegro-Toro, "Deep Sub-Ensembles for Fast Uncertainty Estimation in Image Classification" [paper]
Stefano Peluchetti and Stefano Favaro, "Neural SDE - Information propagation through the lens of diffusion processes" [paper]
Riccardo Moriconi, Marc Peter Deisenroth and Senanayak Sesh Kumar Karri, "High-dimensional Bayesian optimization using low-dimensional feature spaces" [paper]
Ali Hebbal, Loic Brevault, Mathieu Balesdent, El-Ghazali Talbi and Nouredine Melab, "Multi-fidelity modeling using DGPs: Improvements and a generalization to varying input space dimensions" [paper]
Wen Yao, Jun Zhang, Qiang Chang, Xiaozhou Zhu and Weien Zhou, "Error Estimation of Sampling-free Uncertainty Propagation in Bayesian Neural Network with Simplified Covariance Matrix" [paper]
Sebastian Farquhar, Lewis Smith and Yarin Gal, "Try Depth Instead of Weight Correlations: Mean-field is a Less Restrictive Assumption for Variational Inference in Deep Networks" [paper]
Tianyu Cui, Pekka Marttinen and Samuel Kaski, "Learning Global Pairwise Interactions with Bayesian Neural Networks" [paper]
Da Tang, Dawen Liang, Nicholas Ruozzi and Tony Jebara, "Learning Correlated Latent Representations with Adaptive Priors" [paper]
Vaclav Smidl, Jan Bim and Tomas Pevny, "Orthogonal Approximation of Marginal Likelihood of Generative Models" [paper]
Augustin Prado, Ravinath Kausik and Lalitha Venkataramanan, "Dual Neural Network Architecture for Determining Epistemic and Aleatoric Uncertainties" [paper]
Chunlin Ji and Haige Shen, "Stochastic Variational Inference via Upper Bound" [paper]
Taylan Cemgil, Sumedh Ghaisas, Krishnamurthy Dvijotham and Pushmeet Kohli, "Learning Perturbation-Invariant Representations with Smooth Encoders" [paper]
Jack Fitzsimons, Sebastian Schmon and Stephen Roberts, "Implicit Priors for Knowledge Sharing in Bayesian Neural Networks" [paper]
Mike Wu, Kristy Choi, Noah Goodman and Stefano Ermon, "Meta-Amortized Variational Inference and Learning" [paper]
He Zhao, Piyush Rai, Lan Du, Wray Buntine, Dinh Phung and Mingyuan Zhou, "A Bayesian Extension to VAEs for Discrete Data" [paper]
Agustinus Kristiadi, Sina Däubener and Asja Fischer, "Uncertainty quantification with compound density networks" [paper]
Niklas Heim, Václav Šmídl and Tomáš Pevný, "Rodent: Relevance determination in ODE" [paper]
Kathrin Grosse, David Pfaff, Michael T. Smith and Michael Backes, "The Limitations of Model Uncertainty in Adversarial Settings" [paper]
Timon Willi, Jonathan Masci, Jürgen Schmidhuber and Christian Osendorfer, "Recurrent Neural Processes" [paper]
Samuel Kessler, Vu Nguyen, Stefan Zohren and Steve Roberts, "Indian Buffet Neural Networks for Continual Learning" [paper]
Mariana Vargas Vieyra, Aurélien Bellet and Pascal Denis, "Probabilistic End-to-End Graph-based Semi-Supervised Learning" [paper]
Apratim Bhattacharyya, Mario Fritz and Bernt Schiele, "“Best of Many” Samples Distribution Matching" [paper]
Apratim Bhattacharyya, Michael Hanselmann, Mario Fritz, Bernt Schiele and Christoph-Nikolas Straehle, "Conditional Flow Variational Autoencoders for Structured Sequence Prediction" [paper]
Rahul Sharma, Abhishek Kumar and Piyush Rai, "Refined $\alpha$-Divergence Variational Inference via Rejection Sampling" [paper]
Luis A. Perez Rey, Vlado Menkovski and Jacobus W. Portegies, "Can VAEs capture topological properties?" [paper]
Geoffroy Dubourg-Felonneau, Omar Darwish, Christopher Parsons, Dàmi Rebergen, John Cassidy, Nirmesh Patel and Harry Clifford, "Deep Bayesian Recurrent Neural Networks for Somatic Variant Calling in Cancer" [paper]
Jonathan Warrell and Mark Gerstein, "Hierarchical PAC-Bayes Bounds via Deep Probabilistic Programming" [paper]
Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak and Danilo J. Rezende, "Information bottleneck through variational glasses" [paper]
Mihaela Rosca, Michael Figurnov, Shakir Mohamed and Andriy Mnih, "Measure Valued Derivatives for Approximate Bayesian Inference" [paper]
Max-Heinrich Laves, Sontje Ihler, Karl-Philipp Kortmann and Tobias Ortmaier, "Well-calibrated Model Uncertainty with Temperature Scaling for Dropout Variational Inference" [paper]
Matthew Willetts, Stephen Roberts and Chris Holmes, "Disentangling to Cluster: Gaussian Mixture Variational Ladder Autoencoders" [paper]
Jiaming Song and Stefano Ermon, "Mutual Information Estimation as Optimization over Density Ratios: A Unifying Perspective" [paper]
Andrew Ross, Jianzhun Du, Yonadav Shavit and Finale Doshi-Velez, "Controlled Direct Effect Priors for Bayesian Neural Networks" [paper]
Yeming Wen, Dustin Tran and Jimmy Ba, "BatchEnsemble: Efficient Ensemble of Deep Neural Networks via Rank-1 Perturbation" [paper]
Homa Fashandi and Darin Graham, "Empirical Studies on Sensitivity to Perturbations and Hyperparameters in Bayesian Neural Networks" [paper]
Ruiqi Gao, Erik Nijkamp, Zhen Xu, Andrew Dai, Diederik Kingma and Ying Nian Wu, "Flow Contrastive Estimation of Energy-Based Model" [paper]
Hooshmand Shokri Razaghi and Liam Paninski, "Filtering Normalizing Flows" [paper]
Joshua Chang, Shashaank Vattikuti and Carson Chow, "Probabilistically-autoencoded horseshoe-disentangled multidomain item-response theory models" [paper]
Tim Xiao, Aidan Gomez and Yarin Gal, "Wat heb je gezegd? Detecting Out-of-Distribution Translations with Variational Transformers" [paper]
Vanessa Boehm, Francois Lanusse and Uros Seljak, "Uncertainty Quantification with Generative Models" [paper]
Giorgio Giannone, Christian Osendorfer and Jonathan Masci, "No Representation without Transformation" [paper]
Gintare Karolina Dziugaite, Waseem Gharbieh, Kyle Hsu and Daniel Roy, "Optimal (PAC-Bayes) priors are data dependent" [paper]
Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo, Angelos Filos and Yarin Gal, "Improving MFVI in Bayesian Neural Networks with Empirical Bayes: a Study with Diabetic Retinopathy Diagnosis" [paper]
Ari Heljakka, Yuxin Hou, Juho Kannala and Arno Solin, "Conditional Image Sampling by Deep Automodulators" [paper]
Joe Davison, Kristen Severson and Soumya Ghosh, "Cross-population Variational Autoencoders" [paper]
Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov and Dmitry Vetrov, "Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning" [paper]
Jeremiah Liu, "Variable Selection with Rigorous Uncertainty Quantification using Bayesian Deep Neural Networks" [paper]
Sicong Huang, Alireza Makhzani, Yanshuai Cao and Roger Grosse, "Evaluating Lossy Compression Rates of Deep Generative Models" [paper]
Dian Ang Yap, Nicholas Roberts and Vinay Prabhu, "Grassmannian Packings in Neural Networks: Learning with Maximal Subspace Packings for Diversity and Anti-Sparsity" [paper]
Evgenii Nikishin, Arsenii Ashukha and Dmitry Vetrov, "Unsupervised Domain Adaptation with Shared Latent Dynamics for Reinforcement Learning" [paper]
Imant Daunhawer, Thomas Sutter and Julia E. Vogt, "Improving Multimodal Generative Models with Disentangled Latent Partitions" [paper]
Sunho Park, Saehoon Kim, Hongming Xu and Tae Hyun Hwang, "Deep Gaussian processes for weakly supervised learning: tumor mutation burden (TMB) prediction" [paper]
Nilesh Ahuja, Ibrahima Ndiour, Trushant Kalyanpur and Omesh Tickoo, "Probabilistic Modeling of Deep Features for Out-of-Distribution and Adversarial Detection" [paper]
Stanislav Fort, Huiyi Hu and Balaji Lakshminarayanan, "Deep Ensembles: A Loss Landscape Perspective" [paper]
Abhishek Kumar and Ben Poole, "On Implicit Regularization in β-VAE" [paper]
Steindor Saemundsson, Katja Hofmann and Marc Deisenroth, "Variational Integrator Networks" [paper]
Mahesh Subedar, Nilesh Ahuja, Ranganath Krishnan, Ibrahima Ndiour and Omesh Tickoo, "Deep Probabilistic Models to Detect Data Poisoning Attacks" [paper]
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Jascha Sohl-Dickstein and Samuel Schoenholz, "Neural Tangents: Easy and Fast Infinite Networks in Python" [paper]
Artyom Gadetsky, Kirill Struminsky, Christopher Robinson, Novi Quadrianto and Dmitry Vetrov, "Low-variance Gradient Estimates for the Plackett-Luce Distribution" [paper]
Jhosimar Arias Figueroa, "Semi-supervised Learning using Deep Generative Models and Auxiliary Tasks" [paper]
Erik Daxberger and José Miguel Hernández-Lobato, "Bayesian VAEs for Unsupervised Anomaly Detection" [paper]
Xavier Gitiaux, Shane Maloney, Anna Jungbluth, Carl Shneider, Atılım Güneş Baydin, Paul J. Wright, Yarin Gal, Michel Deudon, Alfredo Kalaitzis and Andres Munoz-Jaramillo, "Probabilistic Super-Resolution of Solar Magnetograms: Generating Many Explanations and Measuring Uncertainties" [paper]
Tim G. J. Rudner, Florian Wenzel and Yarin Gal, "The Natural Neural Tangent Kernel: Neural Network Training Dynamics under Natural Gradient Descent" [paper]
Waseem Aslam, Tim G. J. Rudner and Yarin Gal, "Tighter Variational Bounds for Deep Gaussian Processes" [paper]
Adrián Csiszárik, Beatrix Benkő and Dániel Varga, "Negative Sampling in Variational Autoencoders" [paper]
Aditya Grover, Dustin Tran, Rui Shu, Ben Poole and Kevin Murphy, "Probing Uncertainty Estimates of Neural Processes" [paper]

Call for papers

We invite researchers to submit work in any of the following areas:

  • Uncertainty in deep learning,
  • probabilistic deep models (such as extensions and application of Bayesian neural networks),
  • deep probabilistic models (such as hierarchical Bayesian models and their applications),
  • deep generative models (such as variational autoencoders),
  • practical approximate inference techniques in Bayesian deep learning,
  • connections between deep learning and Gaussian processes,
  • applications of Bayesian deep learning,
  • or any of the topics below.

A submission should take the form of a 3-page extended abstract in PDF format, using the NeurIPS 2019 style. Author names do not need to be anonymised; conflicts of interest in assessing submissions will be handled on this basis (reviewers will not be asked to assess submissions by authors from their own institution). References may extend as far as needed beyond the 3-page limit, and submissions themselves may also run longer, but reviewers are not expected to read beyond the first 3 pages. If the research has previously appeared in a journal, workshop, or conference (including the NeurIPS 2019 conference), the workshop submission should extend that previous work. Dual submissions to ICLR 2020, AAAI 2020, and AISTATS 2020 are permitted.

Submissions will be accepted as contributed talks or poster presentations. Extended abstracts should be submitted by Friday, September 13, 2019 (the deadline was extended from September 9, 2019); the submission page is here. Final versions will be posted on the workshop website (these are archival but do not constitute formal proceedings). Notification of acceptance will be sent before October 1, 2019.

Key Dates:

  • Extended abstract submission deadline: Friday, September 13, 2019 (23:59 AoE), extended from the original September 9, 2019 deadline (submission page is here)
  • Acceptance notification: before October 1, 2019
  • Camera ready submission: December 1, 2019
  • Workshop: Friday, December 13, 2019

Please make sure to apply for a ticket in the registration lottery, as the workshop has been allocated a limited number of tickets. We will do our best to guarantee workshop registration for all accepted workshop submissions whose authors did not receive a registration through the lottery.

Abstract

While deep learning has been revolutionary for machine learning, most modern deep learning models cannot represent their uncertainty, nor can they take advantage of the well-studied tools of probability theory. This has started to change following recent developments of tools and techniques combining Bayesian approaches with deep learning. The intersection of the two fields has received great interest from the community over the past few years, with the introduction of new deep learning models that take advantage of Bayesian techniques, as well as Bayesian models that incorporate deep learning elements [1-11]. In fact, the use of Bayesian techniques in deep learning can be traced back to the 1990s, in seminal works by Radford Neal [12], David MacKay [13], and Dayan et al. [14]. These gave us tools to reason about deep models' confidence, and achieved state-of-the-art performance on many tasks. However, earlier tools did not adapt when new needs arose (such as scalability to big data), and were consequently forgotten. Such ideas are now being revisited in light of new advances in the field, yielding many exciting new results.
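
To make "representing uncertainty" concrete, the sketch below (our illustration, not code from any of the cited works) shows the flavour of MC dropout [5] in PyTorch: dropout is left active at prediction time, and the spread of repeated stochastic forward passes is read as an estimate of the model's epistemic uncertainty. The architecture, dropout rate, and number of samples are arbitrary placeholder choices.

```python
# Minimal MC dropout sketch (illustrative; uses an untrained toy model).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

x = torch.linspace(-3, 3, 100).unsqueeze(-1)  # test inputs, shape (100, 1)

model.train()  # keep dropout stochastic at prediction time
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])  # (50, 100, 1)

mean = samples.mean(dim=0)  # predictive mean
std = samples.std(dim=0)    # per-input spread, read as epistemic uncertainty
```

A deterministic network returns a single point prediction per input; here the standard deviation across stochastic passes supplies exactly the kind of confidence signal that most standard deep models lack.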

Building on the success of last year's workshop, this workshop will again examine the advantages and disadvantages of such ideas, and will be a platform to host the recent flourish of ideas using Bayesian approaches in deep learning and deep learning tools in Bayesian modelling. The program includes a mix of invited talks, contributed talks, and contributed posters. It is composed of five themes: deep generative models, variational inference using neural network recognition models, practical approximate inference techniques in Bayesian neural networks, applications of Bayesian neural networks, and information theory in deep learning. Future directions for the field will be debated in a panel discussion.
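
As a concrete instance of the "variational inference using neural network recognition models" theme, here is a minimal, self-contained sketch (illustrative only; the layer sizes and dummy data are placeholders) of amortised inference in a VAE [1, 2]: an encoder network maps each input to the parameters of its approximate posterior q(z|x), and the reparameterisation trick yields a differentiable Monte Carlo estimate of the ELBO.

```python
# Minimal amortised-inference (VAE) sketch; single linear encoder/decoder
# and random data stand in for a real model and dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 16)  # outputs mean and log-variance of q(z|x)
dec = nn.Linear(16, 784)      # parameterises the Bernoulli likelihood p(x|z)

x = torch.rand(8, 784)        # dummy batch standing in for real data
mu, logvar = enc(x).chunk(2, dim=-1)

eps = torch.randn_like(mu)           # reparameterisation trick:
z = mu + (0.5 * logvar).exp() * eps  # z ~ q(z|x), differentiable in mu, logvar

recon = torch.sigmoid(dec(z))
log_lik = -F.binary_cross_entropy(recon, x, reduction="sum")
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q || N(0, I))
elbo = log_lik - kl  # maximise this (e.g. with a stochastic gradient optimiser)
```

The encoder amortises inference: a single recognition network serves every data point, rather than separate variational parameters being fitted per example.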

Previous workshops:

  • Our 2018 workshop page is available here.
  • Our 2017 workshop page is available here.
  • Our 2016 workshop page is available here; videos from the 2016 workshop are also available online.

Topics

  • Uncertainty in deep learning,
  • Applications of Bayesian deep learning,
  • Probabilistic deep models (such as extensions and application of Bayesian neural networks),
  • Deep probabilistic models (such as hierarchical Bayesian models and their applications),
  • Generative deep models (such as variational autoencoders),
  • Information theory in deep learning,
  • Deep ensemble uncertainty,
  • NTK and Bayesian modelling,
  • Connections between NNs and GPs,
  • Incorporating explicit prior knowledge in deep learning (such as posterior regularisation with logic rules),
  • Approximate inference for Bayesian deep learning (such as variational Bayes / expectation propagation / etc. in Bayesian neural networks),
  • Scalable MCMC inference in Bayesian deep models,
  • Deep recognition models for variational inference (amortised inference),
  • Bayesian deep reinforcement learning,
  • Deep learning with small data,
  • Deep learning in Bayesian modelling,
  • Probabilistic semi-supervised learning techniques,
  • Active learning and Bayesian optimisation for experimental design,
  • Kernel methods in Bayesian deep learning,
  • Implicit inference,
  • Applying non-parametric methods, one-shot learning, and Bayesian deep learning in general.

References

  1. Kingma, DP and Welling, M, "Auto-encoding variational bayes", 2013.
  2. Rezende, D, Mohamed, S, and Wierstra, D, "Stochastic backpropagation and approximate inference in deep generative models", 2014.
  3. Blundell, C, Cornebise, J, Kavukcuoglu, K, and Wierstra, D, "Weight uncertainty in neural network", 2015.
  4. Hernandez-Lobato, JM and Adams, R, "Probabilistic backpropagation for scalable learning of Bayesian neural networks", 2015.
  5. Gal, Y and Ghahramani, Z, "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", 2015.
  6. Gal, Y and Ghahramani, Z, "Bayesian convolutional neural networks with Bernoulli approximate variational inference", 2015.
  7. Kingma, D, Salimans, T, and Welling, M, "Variational dropout and the local reparameterization trick", 2015.
  8. Balan, AK, Rathod, V, Murphy, KP, and Welling, M, "Bayesian dark knowledge", 2015.
  9. Louizos, C and Welling, M, "Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors", 2016.
  10. Lawrence, ND and Quinonero-Candela, J, "Local distance preservation in the GP-LVM through back constraints", 2006.
  11. Tran, D, Ranganath, R, and Blei, DM, "Variational Gaussian Process", 2015.
  12. Neal, R, "Bayesian Learning for Neural Networks", 1996.
  13. MacKay, D, "A practical Bayesian framework for backpropagation networks", 1992.
  14. Dayan, P, Hinton, G, Neal, R, and Zemel, R, "The Helmholtz machine", 1995.
  15. Wilson, AG, Hu, Z, Salakhutdinov, R, and Xing, EP, "Deep Kernel Learning", 2016.
  16. Saatchi, Y and Wilson, AG, "Bayesian GAN", 2017.
  17. MacKay, DJC, "Bayesian Methods for Adaptive Models", PhD thesis, 1992.