
# Modern Gaussian Processes: Scalable Inference and Novel Applications


## Abstract

Since the celebrated book by Rasmussen and Williams, there have been a considerable number of novel contributions that have extended the applicability of Gaussian processes (GPs) to problems at an unprecedented scale and to new areas where uncertainty quantification is of fundamental importance. This tutorial will expose attendees to recent advances in GP research; describe the current challenges in modeling and inference with GPs; discuss their relationship to neural networks and deep neural networks; and stimulate debate about the role of GP models in solving complex machine-learning tasks.

## Table of Contents

  • [Presenters](#presenters)
  • [Link to Slides](#link-to-slides)
  • [Notebooks](#notebooks)
  • [Acknowledgements](#acknowledgements)
  • [References](#references)

## Presenters

### Edwin V. Bonilla

*(Photo: Edwin V. Bonilla)*

Edwin V. Bonilla received a Master's degree in Artificial Intelligence and a PhD in Informatics from the University of Edinburgh (UK), in 2004 and 2008, respectively.

He worked at the University of Edinburgh as a Research Associate from 2008 to 2009. He then moved to National ICT Australia as a machine-learning researcher, while also serving as an adjunct research fellow at the Australian National University (2010-2014). He worked as a Senior Lecturer at UNSW Sydney from 2014 to 2018, and he now works as a Principal Research Scientist at CSIRO's Data61 (Australia).

Edwin's more recent work and interests are in the areas of generic and efficient inference for models with Gaussian process priors and general likelihoods; deep Gaussian processes; probabilistic methods for network-structure discovery; doubly stochastic Poisson process models; graph neural networks; and inference in implicit probabilistic models.

### Maurizio Filippone

*(Photo: Maurizio Filippone)*

Maurizio Filippone received a Master's degree in Physics and a PhD in Computer Science from the University of Genova, Italy, in 2004 and 2008, respectively.

In 2007, he was a Research Scholar with George Mason University, Fairfax, VA. From 2008 to 2011, he was a Research Associate with the University of Sheffield, U.K. (2008-2009), with the University of Glasgow, U.K. (2010), and with University College London, U.K. (2011). From 2011 to 2015, he was a Lecturer at the University of Glasgow, U.K., and he is currently AXA Chair of Computational Statistics and Associate Professor at EURECOM, Sophia Antipolis, France.

His current research interests include the development of tractable and scalable Bayesian inference techniques for nonparametric statistical models with applications in environmental and life sciences.

## Link to Slides

## Notebooks

  • Notebook: Sampling from the GP prior (a minimal sketch of the idea follows below)
  • Notebook: GP regression (also sketched below)
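
For readers browsing without the notebooks, here is a minimal sketch of what sampling from a GP prior involves. The squared-exponential kernel and the lengthscale, variance, and jitter values are illustrative assumptions, not necessarily the settings used in the notebook:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

# Inputs at which we evaluate the prior
x = np.linspace(-5, 5, 200)
K = rbf_kernel(x, x)

# Draw f ~ N(0, K) via a Cholesky factor; a small jitter keeps K positive definite
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))
prior_samples = L @ np.random.randn(len(x), 5)  # five independent draws, one per column
```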
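
And a similarly hedged sketch of GP regression using the standard posterior equations (Rasmussen and Williams, Ch. 2); the toy data, kernel hyperparameters, and noise level below are made up for illustration:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Same squared-exponential kernel as in the sketch above
    return variance * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale ** 2)

# Toy training set: noisy observations of a sine function
X = np.array([-4.0, -2.5, 0.0, 1.0, 3.5])
y = np.sin(X) + 0.1 * np.random.randn(len(X))
noise_var = 0.1 ** 2

Xs = np.linspace(-5, 5, 200)  # test inputs

Kxx = rbf_kernel(X, X) + noise_var * np.eye(len(X))  # train covariance + noise
Kxs = rbf_kernel(X, Xs)                              # train/test cross-covariance
Kss = rbf_kernel(Xs, Xs)                             # test covariance

# Posterior mean and covariance, solved via Cholesky for numerical stability
L = np.linalg.cholesky(Kxx)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
post_mean = Kxs.T @ alpha
V = np.linalg.solve(L, Kxs)
post_cov = Kss - V.T @ V
post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))
```

The cubic cost of the Cholesky factorization in the number of training points is precisely what the scalable-inference references below aim to avoid.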

## Acknowledgements

We would like to thank Simone Rossi and Jonas Wacker for their help in preparing the Jupyter notebooks.

## References

### Bayesian Deep Nets and Deep Gaussian Processes

  • A. G. de G. Matthews, J. Hron, M. Rowland, R. E. Turner, and Z. Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018.

  • K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone. Random feature expansions for deep Gaussian processes. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 884--893, International Convention Centre, Sydney, Australia, Aug. 2017. PMLR.

  • Y. Gal and Z. Ghahramani. Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pages 1050--1059. JMLR.org, 2016.

  • D. K. Duvenaud, O. Rippel, R. P. Adams, and Z. Ghahramani. Avoiding pathologies in very deep networks. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014, volume 33 of JMLR Workshop and Conference Proceedings, pages 202--210. JMLR.org, 2014.

  • R. M. Neal. Bayesian Learning for Neural Networks (Lecture Notes in Statistics). Springer, 1 edition, Aug. 1996.

### Inference for Deep Gaussian Processes
  • H. Salimbeni and M. Deisenroth. Doubly Stochastic Variational Inference for Deep Gaussian Processes. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4588--4599. Curran Associates, Inc., 2017.

  • M. D. Hoffman. Learning deep latent Gaussian models with Markov chain Monte Carlo. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1510--1519, International Convention Centre, Sydney, Australia, Aug. 2017. PMLR.

  • M. Havasi, J. M. Hernández-Lobato, and J. J. Murillo-Fuentes. Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7506--7516. Curran Associates, Inc., 2018.

  • K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone. Random feature expansions for deep Gaussian processes. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 884--893, International Convention Centre, Sydney, Australia, Aug. 2017. PMLR.

  • T. D. Bui, D. Hernández-Lobato, J. M. Hernández-Lobato, Y. Li, and R. E. Turner. Deep Gaussian Processes for Regression using Approximate Expectation Propagation. In M.-F. Balcan and K. Q. Weinberger, editors, Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48, pages 1472--1481. JMLR.org, 2016.

  • A. C. Damianou and N. D. Lawrence. Deep Gaussian Processes. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2013, Scottsdale, AZ, USA, April 29 - May 1, 2013, volume 31 of JMLR Proceedings, pages 207--215. JMLR.org, 2013.

  • J. Hensman and N. D. Lawrence. Nested Variational Compression in Deep Gaussian Processes. arXiv preprint arXiv:1412.1370, Dec. 2014.

### Convolutional Nets and Gaussian Processes
  • V. Kumar, V. Singh, P. K. Srijith, and A. Damianou. Deep Gaussian Processes with Convolutional Kernels. arXiv preprint arXiv:1806.01655, June 2018.

  • M. van der Wilk, C. E. Rasmussen, and J. Hensman. Convolutional Gaussian Processes. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 2849--2858. Curran Associates, Inc., 2017.

  • J. Bradshaw, A. G. de G. Matthews, and Z. Ghahramani. Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks. arXiv preprint arXiv:1707.02476, July 2017.

  • R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth. Manifold Gaussian Processes for regression. In 2016 International Joint Conference on Neural Networks, IJCNN 2016, Vancouver, BC, Canada, July 24-29, 2016, pages 3338--3345, 2016.

  • A. G. Wilson, Z. Hu, R. R. Salakhutdinov, and E. P. Xing. Stochastic Variational Deep Kernel Learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2586--2594. Curran Associates, Inc., 2016.

### Bayesian Convolutional Nets
  • G.-L. Tran, E. V. Bonilla, J. Cunningham, P. Michiardi, and M. Filippone. Calibrating Deep Convolutional Gaussian Processes. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), volume 89 of Proceedings of Machine Learning Research, pages 1554--1563. PMLR, 16--18 Apr 2019.

  • F. Laumann, K. Shridhar, and A. L. Maurin. Bayesian Convolutional Neural Networks. arXiv preprint arXiv:1806.05978, June 2018.

  • A. Garriga-Alonso, C. E. Rasmussen, and L. Aitchison. Deep Convolutional Networks as shallow Gaussian Processes. In International Conference on Learning Representations, 2019.

  • Y. Gal and Z. Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. In 4th International Conference on Learning Representations (ICLR) workshop track, 2016.

### Calibration of Bayesian Convolutional Nets
  • G.-L. Tran, E. V. Bonilla, J. Cunningham, P. Michiardi, and M. Filippone. Calibrating Deep Convolutional Gaussian Processes. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), volume 89 of Proceedings of Machine Learning Research, pages 1554--1563. PMLR, 16--18 Apr 2019.

  • B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6402--6413. Curran Associates, Inc., 2017.

  • A. Niculescu-Mizil and R. Caruana. Predicting Good Probabilities with Supervised Learning. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 625--632, New York, NY, USA, 2005. ACM.

  • C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On Calibration of Modern Neural Networks. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321--1330, International Convention Centre, Sydney, Australia, Aug. 2017. PMLR.

### Random Feature Expansions for Shallow and Deep Gaussian Processes
  • K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone. Random feature expansions for deep Gaussian processes. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 884--893, International Convention Centre, Sydney, Australia, Aug. 2017. PMLR.

  • F. X. Yu, A. T. Suresh, K. M. Choromanski, D. N. Holtmann-Rice, and S. Kumar. Orthogonal Random Features. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1975--1983. Curran Associates, Inc., 2016.

  • Q. Le, T. Sarlos, and A. Smola. Fastfood - Approximating Kernel Expansions in Loglinear Time. In 30th International Conference on Machine Learning (ICML), 2013.

  • Y. Gal and Z. Ghahramani. Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pages 1050--1059. JMLR.org, 2016.

  • A. Rahimi and B. Recht. Random Features for Large-Scale Kernel Machines. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1177--1184. Curran Associates, Inc., 2008.

### Variational Inference
  • D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), Apr. 2014.

  • A. Graves. Practical Variational Inference for Neural Networks. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2348--2356. Curran Associates, Inc., 2011.

### Variational Inference for Gaussian Process Models
  • T. V. Nguyen and E. V. Bonilla. Automated variational inference for Gaussian process models. In Advances in Neural Information Processing Systems. 2014.

  • J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence, 2013.

  • A. Dezfouli and E. V. Bonilla. Scalable inference for Gaussian process models with black-box likelihoods. In Advances in Neural Information Processing Systems. 2015.

  • E. V. Bonilla, K. Krauth, and A. Dezfouli. Generic Inference in Latent Gaussian Process Models. arXiv preprint arXiv:1609.00577, Sep. 2016.

  • M. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, 2009.

  • K. Krauth, E. V. Bonilla, K. Cutajar, and M. Filippone. AutoGP: Exploring the capabilities and limitations of Gaussian process models. In Uncertainty in Artificial Intelligence, 2017.

### Unsupervised learning with Deep Gaussian Processes
  • R. Domingues, P. Michiardi, J. Zouaoui, and M. Filippone. Deep Gaussian process autoencoders for novelty detection. Machine Learning, 107(8-10):1363--1383, 2018.

  • Z. Dai, A. Damianou, J. González, and N. Lawrence. Variational Auto-encoded Deep Gaussian Processes, Feb. 2016.

  • N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783--1816, 2005.

  • K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. Style-based inverse kinematics. In ACM SIGGRAPH 2004 Papers, SIGGRAPH '04, pages 522--531, New York, NY, USA, 2004. ACM.

### Multi-task Learning with Gaussian Processes
  • K. M. A. Chai, C. K. Williams, S. Klanke, and S. Vijayakumar. Multi-task Gaussian process learning of robot inverse dynamics. In Advances in Neural Information Processing Systems, pages 265--272, 2008.

  • E. V. Bonilla, K. M. A. Chai, and C. K. I. Williams. Multi-task Gaussian process prediction. In Advances in Neural Information Processing Systems, 2008.

  • M. A. Álvarez and N. D. Lawrence. Computationally efficient convolved multiple output Gaussian processes. Journal of Machine Learning Research, 12(5):1459--1500, 2011.

  • A. G. Wilson, D. A. Knowles, and Z. Ghahramani. Gaussian process regression networks. In International Conference on Machine Learning, 2012.

  • P. Boyle. Gaussian Processes for Regression and Optimisation. PhD thesis, Victoria University of Wellington, 2007.

### Bayesian Optimization
  • D. R. Jones. A Taxonomy of Global Optimization Methods Based on Response Surfaces. Journal of Global Optimization, 21(4):345--383, 2001.

  • J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951--2959, 2012.

  • W. Chu and Z. Ghahramani. Preference learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 137--144. ACM, 2005.

  • K. Swersky, J. Snoek, and R. P. Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004--2012, 2013.

### Other GP and DGP Models
  • R. Domingues, P. Michiardi, J. Zouaoui, and M. Filippone. Deep Gaussian process autoencoders for novelty detection. Machine Learning, 107(8-10):1363--1383, 2018.

  • M. van der Wilk, C. E. Rasmussen, and J. Hensman. Convolutional Gaussian Processes. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 2849--2858. Curran Associates, Inc., 2017.

  • M. Lorenzi, M. Filippone, G. B. Frisoni, D. C. Alexander, and S. Ourselin. Probabilistic disease progression modeling to characterize diagnostic uncertainty: staging and prediction in Alzheimer's disease. NeuroImage, 2017 (to appear).

  • Z. Dai, A. Damianou, J. González, and N. Lawrence. Variational Auto-encoded Deep Gaussian Processes, Feb. 2016.

  • K. Blomqvist, S. Kaski, and M. Heinonen. Deep convolutional Gaussian processes. arXiv preprint arXiv:1810.03052, 2018.

  • P. Galliani, A. Dezfouli, E. Bonilla, and N. Quadrianto. Gray-box inference for structured Gaussian process models. In A. Singh and J. Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 353--361, Fort Lauderdale, FL, USA, 20--22 Apr 2017. PMLR.

  • S. Linderman and R. Adams. Discovering latent network structure in point process data. In International Conference on Machine Learning, pages 1413--1421, 2014.

  • A. Dezfouli, E. Bonilla, and R. Nock. Variational network inference: Strong and stable with concrete support. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1204--1213, Stockholmsmässan, Stockholm Sweden, 10--15 Jul 2018. PMLR.

  • M. Kuss and C. E. Rasmussen. Gaussian processes in reinforcement learning. In Advances in Neural Information Processing Systems, pages 751--758, 2004.

  • Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, pages 201--208. ACM, 2005.

  • M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465--472, 2011.

  • J. Martin and B. Englot. Recursive sparse pseudo-input Gaussian process SARSA. arXiv preprint arXiv:1811.07201, 2018.

  • R. P. Adams, I. Murray, and D. J. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 9--16. ACM, 2009.

  • C. Lloyd, T. Gunter, M. A. Osborne, and S. J. Roberts. Variational Inference for Gaussian Process Modulated Poisson Processes. In International Conference on Machine Learning, 2015.

  • S. John and J. Hensman. Large-scale Cox process inference using variational Fourier features. In International Conference on Machine Learning, 2018.

  • V. Aglietti, T. Damoulas, and E. Bonilla. Efficient inference in multi-task Cox process models. In Artificial Intelligence and Statistics, 2018.
