Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning

It is desirable to combine the expressive power of deep learning with the Gaussian Process (GP) in a single expressive Bayesian learning model. Deep kernel learning showed success by using a deep network for feature extraction and a GP as the function model. Recently, it was suggested that, despite training with the marginal likelihood, the deterministic nature of the feature extractor might lead to overfitting, and that replacing it with a Bayesian network seemed to cure the problem. Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP remains zero-mean. Motivated by the inducing points in sparse GPs, the hyperdata also play the role of function supports, but they are hyperparameters rather than random variables. Following our previous moment-matching approach, we approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood, which depends on the hyperdata implicitly through the kernel. We show equivalence with deep kernel learning in the limit of dense hyperdata in the latent space. However, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate the expressive power gained from the depth of the hierarchy by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning. We also address the non-Gaussian aspects of our model and a way of upgrading to full Bayesian inference.
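To make the abstract's mechanism concrete, below is a minimal, self-contained Python sketch (ours, for illustration; not the authors' implementation) of the simplest instance: a two-layer composition in which both the intermediate GP and the exposed zero-mean GP use squared-exponential (SE) kernels. The intermediate GP is conditioned on hyperdata, the marginal prior is moment-matched to a GP with a closed-form effective kernel, and the hyperdata are then learned, empirical Bayes style, by maximizing the approximate log marginal likelihood. All names here (z, u, ell_in, ell_out, noise) are our own choices, not symbols from the paper.

import numpy as np
from scipy.optimize import minimize

def se_kernel(A, B, ell):
    # Squared-exponential kernel matrix between row sets A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / ell**2)

def inner_moments(X, z, u, ell_in):
    # Mean and covariance of the intermediate GP conditioned on hyperdata
    # (z, u), which act like inducing inputs/outputs but are treated as
    # hyperparameters rather than random variables.
    Kzz = se_kernel(z, z, ell_in) + 1e-8 * np.eye(len(z))
    Kxz = se_kernel(X, z, ell_in)
    A = np.linalg.solve(Kzz, Kxz.T).T            # K_xz K_zz^{-1}
    return A @ u, se_kernel(X, X, ell_in) - A @ Kxz.T

def effective_kernel(X, z, u, ell_in, ell_out):
    # Moment-matched covariance of g(f(x)) for zero-mean g with SE kernel:
    # with w = f(x) - f(x') ~ N(m, s2), a Gaussian integral gives
    #   E[exp(-w^2 / (2 ell_out^2))]
    #     = (1 + s2/ell_out^2)^{-1/2} exp(-m^2 / (2 (ell_out^2 + s2))).
    mean, cov = inner_moments(X, z, u, ell_in)
    m = mean[:, None] - mean[None, :]
    v = np.diag(cov)
    s2 = np.clip(v[:, None] + v[None, :] - 2.0 * cov, 0.0, None)
    return np.exp(-0.5 * m**2 / (ell_out**2 + s2)) / np.sqrt(1.0 + s2 / ell_out**2)

def neg_log_marglik(theta, X, y, n_hyper):
    # Approximate negative log marginal likelihood (constant term dropped);
    # the hyperdata enter only implicitly, through the effective kernel.
    z = theta[:n_hyper].reshape(-1, 1)
    u = theta[n_hyper:2 * n_hyper]
    ell_in, ell_out, noise = np.exp(theta[2 * n_hyper:])
    K = effective_kernel(X, z, u, ell_in, ell_out) + (noise + 1e-6) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

# Toy 1-D demo: learn 5 hyperdata points and the kernel hyperparameters.
rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 40)[:, None]
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(40)
n_hyper = 5
theta0 = np.concatenate([np.linspace(-3.0, 3.0, n_hyper),   # hyperdata inputs z
                         rng.standard_normal(n_hyper),      # hyperdata outputs u
                         np.log([1.0, 1.0, 0.01])])         # ell_in, ell_out, noise
res = minimize(neg_log_marglik, theta0, args=(X, y, n_hyper), method="L-BFGS-B")
print("approximate log marginal likelihood:", -res.fun)

The learned (z, u) play the role the abstract assigns to hyperdata: function supports for the intermediate GP that are optimized rather than integrated out. As the hyperdata densely cover the latent space, the intermediate layer becomes effectively deterministic, which is the deep-kernel-learning limit mentioned above.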


Bibliographic Details
Main Authors: Chi-Ken Lu, Patrick Shafto
Format: article
Language: EN
Published: MDPI AG 2021
Subjects:
Q
Online Access: https://doaj.org/article/5d1c5ce622554acebb7962393986c328
id oai:doaj.org-article:5d1c5ce622554acebb7962393986c328
record_format dspace
spelling oai:doaj.org-article:5d1c5ce622554acebb7962393986c328 2021-11-25T17:29:12Z
title Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning
doi 10.3390/e23111387
issn 1099-4300
publishDate 2021-10-01
url https://www.mdpi.com/1099-4300/23/11/1387
url https://doaj.org/article/5d1c5ce622554acebb7962393986c328
toc https://doaj.org/toc/1099-4300
source Entropy, Vol 23, Iss 11, p 1387 (2021)
institution DOAJ
collection DOAJ
language EN
topic deep Gaussian process
approximate inference
deep kernel learning
Bayesian learning
moment matching
inducing points
Science
Q
Astrophysics
QB460-466
Physics
QC1-999
description It is desirable to combine the expressive power of deep learning with the Gaussian Process (GP) in a single expressive Bayesian learning model. Deep kernel learning showed success by using a deep network for feature extraction and a GP as the function model. Recently, it was suggested that, despite training with the marginal likelihood, the deterministic nature of the feature extractor might lead to overfitting, and that replacing it with a Bayesian network seemed to cure the problem. Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP remains zero-mean. Motivated by the inducing points in sparse GPs, the hyperdata also play the role of function supports, but they are hyperparameters rather than random variables. Following our previous moment-matching approach, we approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood, which depends on the hyperdata implicitly through the kernel. We show equivalence with deep kernel learning in the limit of dense hyperdata in the latent space. However, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate the expressive power gained from the depth of the hierarchy by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning. We also address the non-Gaussian aspects of our model and a way of upgrading to full Bayesian inference.
format article
author Chi-Ken Lu
Patrick Shafto
title Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/5d1c5ce622554acebb7962393986c328
work_keys_str_mv AT chikenlu conditionaldeepgaussianprocessesempiricalbayeshyperdatalearning
AT patrickshafto conditionaldeepgaussianprocessesempiricalbayeshyperdatalearning
_version_ 1718412285754474496