Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks
Canatar et al. propose a predictive theory of generalization in kernel regression applicable to real data. This theory explains various generalization phenomena observed in wide neural networks, which admit a kernel limit and generalize well despite being overparameterized.
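For readers unfamiliar with the setting the abstract refers to, below is a minimal sketch of kernel ridge regression and of the kernel eigenspectrum that spectral-bias arguments rest on. It is not the authors' code and not their exact theory; the RBF kernel, toy target function, ridge parameter, and all function names here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    # Gaussian (RBF) kernel matrix between two 1-D input sets (illustrative choice).
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * length_scale ** 2))

def kernel_ridge_predict(X_train, y_train, X_test, ridge=1e-3):
    # Standard kernel ridge regression: f(x) = k(x, X) (K + ridge*I)^{-1} y.
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X_train = rng.uniform(-np.pi, np.pi, size=50)
y_train = np.sin(3 * X_train)                      # toy target function
X_test = np.linspace(-np.pi, np.pi, 200)
y_pred = kernel_ridge_predict(X_train, y_train, X_test)
print("test MSE:", np.mean((y_pred - np.sin(3 * X_test)) ** 2))

# Empirical kernel eigenvalues: spectral-bias analyses decompose the target onto
# these eigenmodes; modes with larger eigenvalues are learned from fewer samples.
eigvals = np.sort(np.linalg.eigvalsh(rbf_kernel(X_train, X_train) / len(X_train)))[::-1]
print("top kernel eigenvalues:", eigvals[:5])
```

On this toy problem the eigenvalue printout is only meant to exhibit the kind of spectrum that a mode-wise decomposition of generalization error refers to, not to reproduce any result from the paper.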
Main authors: | Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan
---|---
Format: | article
Language: | EN
Published: | Nature Portfolio, 2021
Subjects: | Science; Q
Online access: | https://doaj.org/article/3fb570c6ce05419290b8cc1eebe16977
id | oai:doaj.org-article:3fb570c6ce05419290b8cc1eebe16977
---|---
record_format | dspace
doi | 10.1038/s41467-021-23103-1
issn | 2041-1723
source | Nature Communications, Vol 12, Iss 1, Pp 1-12 (2021)
published | 2021-05-01
fulltext | https://doi.org/10.1038/s41467-021-23103-1
journal_toc | https://doaj.org/toc/2041-1723
institution | DOAJ
collection | DOAJ
language | EN
topic | Science; Q
description | Canatar et al. propose a predictive theory of generalization in kernel regression applicable to real data. This theory explains various generalization phenomena observed in wide neural networks, which admit a kernel limit and generalize well despite being overparameterized.
format | article
author | Abdulkadir Canatar; Blake Bordelon; Cengiz Pehlevan
author_sort | Abdulkadir Canatar
title | Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks
publisher | Nature Portfolio
publishDate | 2021
url | https://doaj.org/article/3fb570c6ce05419290b8cc1eebe16977