Conditional Variational Autoencoder for Learned Image Reconstruction

Learned image reconstruction techniques using deep neural networks have recently gained popularity and have delivered promising empirical results. However, most approaches focus on one single recovery for each observation, and thus neglect information uncertainty. In this work, we develop a novel computational framework that approximates the posterior distribution of the unknown image at each query observation. The proposed framework is very flexible: it handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets. Once the network is trained using the conditional variational autoencoder loss, it provides a computationally efficient sampler for the approximate posterior distribution via feed-forward propagation, and the summarizing statistics of the generated samples are used for both point-estimation and uncertainty quantification. We illustrate the proposed framework with extensive numerical experiments on positron emission tomography (with both moderate and low-count levels) showing that the framework generates high-quality samples when compared with state-of-the-art methods.
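The sampling-and-summarizing workflow described in the abstract can be sketched in a few lines. This is a minimal illustrative stand-in, not the paper's network: the trained CVAE decoder is replaced by a hypothetical `posterior_sampler` that perturbs the observation with Gaussian latent noise, purely to show how repeated feed-forward draws yield a point estimate (sample mean) and an uncertainty map (sample standard deviation).

```python
import random
import statistics

def posterior_sampler(y, n_samples=1000, seed=0):
    """Stand-in for a trained CVAE decoder: conditioned on an observation y,
    each feed-forward pass with fresh latent noise z produces one sample from
    the approximate posterior. The toy 'decoder' here is simply y + z."""
    rng = random.Random(seed)
    return [[yi + rng.gauss(0.0, 0.1) for yi in y] for _ in range(n_samples)]

def summarize(samples):
    """Pixelwise summary statistics of the generated samples: the sample mean
    is the point estimate, the sample standard deviation the uncertainty map."""
    pixels = list(zip(*samples))            # regroup samples by pixel
    mean = [statistics.fmean(p) for p in pixels]
    std = [statistics.stdev(p) for p in pixels]
    return mean, std

y = [0.2, 0.5, 0.9]              # toy 3-pixel "observation"
samples = posterior_sampler(y)   # cheap: only feed-forward passes, no MCMC
point_estimate, uncertainty = summarize(samples)
```

The point is the cost profile the abstract highlights: after training, posterior sampling is just repeated feed-forward evaluation, so point estimation and uncertainty quantification come from the same batch of samples.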

Bibliographic Details
Main Authors: Chen Zhang, Riccardo Barbano, Bangti Jin
Format: Article
Language: English
Published: MDPI AG, 2021
Subjects: conditional variational autoencoder; uncertainty quantification; deep learning; image reconstruction
Online Access: https://doaj.org/article/09695f2192ce4215aa4579534d43eb9d
DOI: 10.3390/computation9110114
ISSN: 2079-3197
Published in: Computation, Vol 9, Iss 11, p 114 (2021)
Full Text: https://www.mdpi.com/2079-3197/9/11/114