Sampling the Variational Posterior with Local Refinement

Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
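The abstract describes the method only at a high level. As a rough illustration of the general idea, the sketch below is a minimal, hypothetical example rather than the authors' algorithm: samples are first drawn from a coarse Gaussian approximation of a bimodal toy posterior, and each sample is then refined by fitting a small local Gaussian around it via stochastic gradient ascent on the ELBO, so the refined samples recover modes that the coarse approximation misses. The toy target, all names, and all hyperparameters are assumptions made for this example.

```python
# Minimal, hypothetical sketch (not the paper's implementation): coarse sampling
# followed by local ELBO-based refinement on a 1-D bimodal toy target.
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target p(z): equal-weight mixture of N(-2, 0.4^2) and N(2, 0.4^2).
MODES = np.array([-2.0, 2.0])
SCALE = 0.4

def log_p(z):
    logs = -0.5 * ((z - MODES) / SCALE) ** 2 - np.log(SCALE * np.sqrt(2 * np.pi))
    return np.logaddexp(logs[0], logs[1]) - np.log(2.0)

def grad_log_p(z):
    logs = -0.5 * ((z - MODES) / SCALE) ** 2
    w = np.exp(logs - logs.max())
    w /= w.sum()                      # responsibilities of the two components
    return np.sum(w * (-(z - MODES) / SCALE ** 2))

def elbo(mu, sigma, n=2000):
    """Monte Carlo estimate of E_q[log p(z)] + H[q] for q = N(mu, sigma^2)."""
    z = mu + sigma * rng.standard_normal(n)
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)
    return np.mean([log_p(zi) for zi in z]) + entropy

# Coarse initial approximation q0: a single broad Gaussian covering both modes.
mu0, sigma0 = 0.0, 2.5

def refine(z0, steps=200, lr=0.05, mc=8):
    """Fit a local Gaussian, initialised at the coarse sample z0, by stochastic
    gradient ascent on the ELBO using the reparameterisation trick."""
    mu, log_sigma = float(z0), np.log(0.3)
    for _ in range(steps):
        sigma = np.exp(log_sigma)
        eps = rng.standard_normal(mc)
        g = np.array([grad_log_p(mu + sigma * e) for e in eps])
        mu += lr * g.mean()
        log_sigma += lr * ((g * sigma * eps).mean() + 1.0)  # +1 from the entropy term
    return mu, np.exp(log_sigma)

coarse = mu0 + sigma0 * rng.standard_normal(8)       # samples from q0
local_qs = [refine(z0) for z0 in coarse]             # locally refined Gaussians
refined = np.array([m + s * rng.standard_normal() for m, s in local_qs])

print("coarse samples :", np.round(coarse, 2))
print("refined samples:", np.round(refined, 2))
print(f"ELBO of coarse q0        : {elbo(mu0, sigma0):6.2f}")
print(f"mean ELBO of refined q's : {np.mean([elbo(m, s) for m, s in local_qs]):6.2f}")
```

On this toy problem the locally refined Gaussians reach a noticeably higher ELBO than the coarse approximation, loosely mirroring the paper's claim that refinement improves the bound; the actual refinement procedure and its theoretical guarantee are given in the linked article.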


Bibliographic Details
Main Authors: Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato
Format: Article
Language: EN
Published: MDPI AG, 2021
Subjects: Bayesian inference, variational inference, deep neural networks, contextual bandits, Science (Q), Astrophysics (QB460-466), Physics (QC1-999)
Online Access: https://doaj.org/article/cefaef5f24ee49009bbd835ef6ef3eca
DOI: 10.3390/e23111475
ISSN: 1099-4300
Publication Date: 2021-11-01
Article URL: https://www.mdpi.com/1099-4300/23/11/1475
Published in: Entropy, Vol. 23, Iss. 11, Art. 1475 (2021)