Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning

Deep Gaussian Processes (DGPs) were proposed as an expressive Bayesian model capable of mathematically grounded uncertainty estimation. The expressivity of DGPs results not only from their compositional character but also from the propagation of distributions through the hierarchy. Recently, it was pointed out that the hierarchical structure of a DGP is well suited to modeling multi-fidelity regression, in which one is given sparse, high-precision observations alongside plentiful low-fidelity observations. We propose the conditional DGP model, in which the latent GPs are directly supported by the fixed lower-fidelity data. Moment matching is then applied to approximate the marginal prior of the conditional DGP with a GP. The resulting effective kernels are implicit functions of the lower-fidelity data, manifesting the expressivity contributed by distribution propagation within the hierarchy. The hyperparameters are learned by optimizing the approximate marginal likelihood. Experiments on synthetic and high-dimensional data show performance comparable to other multi-fidelity regression methods, variational inference, and multi-output GPs. We conclude that, given the low-fidelity data and the hierarchical DGP structure, the effective kernel encodes the inductive bias for the true function while allowing compositional freedom.
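
The method outlined above reduces to three steps that can be illustrated numerically: fit a GP to the plentiful low-fidelity data, moment-match the outer layer's kernel against that posterior to obtain the effective kernel, and run ordinary GP regression on the sparse high-fidelity points with it. The following is a minimal NumPy sketch of this idea, not the authors' implementation: it assumes RBF kernels at both layers (for which the moment-matched expectation has a closed form), the toy functions and all hyperparameter values are illustrative, and the marginal-likelihood optimization of hyperparameters mentioned in the abstract is omitted.

import numpy as np

def rbf(A, B, ls, var):
    # Standard RBF kernel: var * exp(-|a - b|^2 / (2 ls^2)).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X_lo, y_lo, X, ls, var, noise=1e-4):
    # Posterior mean/covariance of the inner GP g at X, conditioned on
    # the fixed low-fidelity observations (X_lo, y_lo).
    K = rbf(X_lo, X_lo, ls, var) + noise * np.eye(len(X_lo))
    Ks = rbf(X_lo, X, ls, var)
    L = np.linalg.cholesky(K)
    mu = Ks.T @ np.linalg.solve(L.T, np.linalg.solve(L, y_lo))
    V = np.linalg.solve(L, Ks)
    return mu, rbf(X, X, ls, var) - V.T @ V

def effective_kernel(mu, cov, ls, var):
    # Moment matching: for jointly Gaussian u = g(x), u' = g(x'), the
    # expected RBF E[exp(-(u - u')^2 / (2 ls^2))] has a closed form in
    # terms of m = E[u - u'] and v = Var[u - u'].
    m = mu[:, None] - mu[None, :]
    s = np.diag(cov)
    v = s[:, None] + s[None, :] - 2 * cov
    return var * ls / np.sqrt(ls**2 + v) * np.exp(-0.5 * m**2 / (ls**2 + v))

# Toy two-fidelity problem: dense cheap signal, six expensive points.
rng = np.random.default_rng(0)
X_lo = np.linspace(0, 1, 40)[:, None]
y_lo = np.sin(8 * X_lo[:, 0])
X_hi = rng.uniform(0, 1, (6, 1))
y_hi = np.sin(8 * X_hi[:, 0]) ** 2

# Effective kernel on the high-fidelity inputs plus a prediction grid;
# it is an implicit function of the low-fidelity data, as the abstract notes.
X_grid = np.linspace(0, 1, 100)[:, None]
mu, cov = gp_posterior(X_lo, y_lo, np.vstack([X_hi, X_grid]), ls=0.2, var=1.0)
K_eff = effective_kernel(mu, cov, ls=0.5, var=1.0)

# Ordinary GP regression on the sparse high-fidelity data with K_eff.
n = len(X_hi)
K = K_eff[:n, :n] + 1e-4 * np.eye(n)
pred = K_eff[n:, :n] @ np.linalg.solve(K, y_hi)  # posterior mean on X_grid

The point of the construction is visible in K_eff: replacing a stationary kernel with one computed from the low-fidelity posterior lets six high-fidelity points inherit structure the inner layer has already learned.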

Bibliographic Details
Main Authors: Chi-Ken Lu, Patrick Shafto
Format: Article
Language: English
Published: MDPI AG, 2021
Subjects: multi-fidelity regression; Deep Gaussian Process; approximate inference; moment matching; kernel composition; neural network; Science (Q); Astrophysics (QB460-466); Physics (QC1-999)
Online Access: https://doaj.org/article/7279de62091e4c17b2e769ba1c8ad513
Journal: Entropy, Vol 23, Iss 11, p 1545 (2021)
DOI: 10.3390/e23111545
ISSN: 1099-4300
Full Text: https://www.mdpi.com/1099-4300/23/11/1545