On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks
In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum.
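The abstract describes using an untrained network as an image prior: rather than learning from data, one fits the network's weights directly to the measurements of a single linear inverse problem. The sketch below is a minimal illustration of this idea, not the authors' exact setup: it fits a deep-decoder-style network G_θ to compressive measurements y = Ax by minimizing ||A·vec(G_θ(z)) − y||² over the weights θ, with the random input z held fixed. The function names (`deep_decoder`, `fit_untrained_prior`), the specific layer pattern, and the `channels` tuple encoding the width/depth hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact architecture) of an
# untrained deep-decoder-style prior for a linear inverse problem y = A x.
import torch
import torch.nn as nn

def deep_decoder(channels=(64, 64, 64)):
    """Deep-decoder-style untrained prior: 1x1 convolutions, bilinear
    upsampling, ReLU, and per-channel normalization. `channels` encodes
    both width and depth, the architectural hyperparameters of interest."""
    layers = []
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        layers += [nn.Conv2d(c_in, c_out, kernel_size=1),
                   nn.Upsample(scale_factor=2, mode="bilinear"),
                   nn.ReLU(),
                   nn.BatchNorm2d(c_out)]
    layers += [nn.Conv2d(channels[-1], 1, kernel_size=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

def fit_untrained_prior(A, y, channels=(64, 64, 64), steps=2000, lr=1e-2,
                        out_size=64):
    """Minimize ||A vec(G_theta(z)) - y||^2 over the weights theta only;
    the random seed tensor z stays fixed (the 'untrained prior' idea)."""
    depth = len(channels) - 1                  # number of upsampling stages
    in_size = out_size // 2 ** depth           # each stage doubles resolution
    z = torch.randn(1, channels[0], in_size, in_size)
    net = deep_decoder(channels)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = A @ net(z).reshape(-1) - y  # data-fit term only: no
        loss = (residual ** 2).sum()           # learned prior, no training data
        loss.backward()
        opt.step()
    return net(z).detach().reshape(-1)

# Usage: compressive sensing with an i.i.d. Gaussian measurement matrix.
n, m = 64 * 64, 1000                           # ambient dim, #measurements
A = torch.randn(m, n) / m ** 0.5
x_true = torch.rand(n)                         # placeholder ground truth
y = A @ x_true
x_hat = fit_untrained_prior(A, y, channels=(64, 64, 64))
```

Exposing `channels` as an argument reflects the paper's theme: the best width and depth depend on the measurement model (inpainting vs. denoising vs. compressive sensing) and the signal type. A stand-in tuning loop is sketched after the record fields below.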
Saved in:
Main Authors: | Yang Sun, Hangdong Zhao, Jonathan Scarlett |
---|---|
Format: | article |
Language: | EN |
Published: | MDPI AG, 2021 |
Subjects: | linear inverse problems; untrained neural networks; compressive sensing; deep decoder; architecture design; hyperparameters |
Online Access: | https://doaj.org/article/b575e333a26d494baf46747b142feca2 |
id | oai:doaj.org-article:b575e333a26d494baf46747b142feca2 |
---|---|
record_format | dspace |
spelling | DOI: 10.3390/e23111481; ISSN: 1099-4300; record updated 2021-11-25T17:30:00Z; published 2021-11-01; full text: https://www.mdpi.com/1099-4300/23/11/1481; journal TOC: https://doaj.org/toc/1099-4300; citation: Entropy, Vol 23, Iss 11, p 1481 (2021); the remainder of this field duplicates the title, abstract, authors, publisher, and keywords listed below |
institution | DOAJ |
collection | DOAJ |
language | EN |
topic | linear inverse problems; untrained neural networks; compressive sensing; deep decoder; architecture design; hyperparameters; Science (Q); Astrophysics (QB460-466); Physics (QC1-999) |
description | In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum. |
format | article |
author | Yang Sun; Hangdong Zhao; Jonathan Scarlett |
author_facet | Yang Sun; Hangdong Zhao; Jonathan Scarlett |
author_sort | Yang Sun |
title | On Architecture Selection for Linear Inverse Problems with Untrained Neural Networks |
title_short / title_full / title_fullStr / title_full_unstemmed | identical to title |
title_sort | on architecture selection for linear inverse problems with untrained neural networks |
publisher | MDPI AG |
publishDate | 2021 |
url | https://doaj.org/article/b575e333a26d494baf46747b142feca2 |
work_keys_str_mv | AT yangsun onarchitectureselectionforlinearinverseproblemswithuntrainedneuralnetworks; AT hangdongzhao onarchitectureselectionforlinearinverseproblemswithuntrainedneuralnetworks; AT jonathanscarlett onarchitectureselectionforlinearinverseproblemswithuntrainedneuralnetworks |
_version_ | 1718412306698731520 |
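The description field mentions two practical algorithms for tuning architectural hyperparameters, but the record does not reproduce them. As a generic stand-in (plain random search, explicitly not the paper's method), the sketch below searches over the width and depth of `fit_untrained_prior` from the earlier block; the search space and the use of ground truth for scoring are illustrative assumptions.

```python
# Generic random-search stand-in for architecture tuning (NOT the paper's
# two algorithms). Assumes `fit_untrained_prior` from the sketch above.
import random
import torch

def random_search(A, y, x_true, trials=10, seed=0):
    """Try random (depth, width) settings and keep the best reconstruction.
    Scoring against x_true is illustrative only; real tuning would score
    on a held-out validation set of signals from the target task."""
    rng = random.Random(seed)
    best_mse, best_channels = float("inf"), None
    for _ in range(trials):
        depth = rng.choice([1, 2, 3])          # hypothetical search space
        width = rng.choice([32, 64, 128])
        channels = (width,) * (depth + 1)      # constant-width decoder
        x_hat = fit_untrained_prior(A, y, channels=channels, steps=500)
        mse = ((x_hat - x_true) ** 2).mean().item()
        if mse < best_mse:
            best_mse, best_channels = mse, channels
    return best_channels, best_mse
```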