Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks
Canatar et al. propose a predictive theory of generalization in kernel regression applicable to real data. This theory explains various generalization phenomena observed in wide neural networks, which admit a kernel limit and generalize well despite being overparameterized.
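The article's subject, kernel regression, can be illustrated with a minimal sketch. This is not the authors' spectral theory, just plain kernel ridge regression with an assumed Gaussian (RBF) kernel and hand-picked length scale and ridge, the setting in which their generalization analysis applies:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=0.2):
    # Gaussian (RBF) kernel matrix between two sets of 1-D inputs
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * length_scale ** 2))

def kernel_ridge_fit(X_train, y_train, ridge=1e-3):
    # Solve (K + ridge*I) alpha = y for the dual coefficients
    K = rbf_kernel(X_train, X_train)
    return np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)

def kernel_ridge_predict(X_train, alpha, X_test):
    # Predictions are kernel-weighted sums of the dual coefficients
    return rbf_kernel(X_test, X_train) @ alpha

# Toy target: learn sin(2*pi*x) from 40 noisy-free samples
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, 40)
y_train = np.sin(2 * np.pi * X_train)
alpha = kernel_ridge_fit(X_train, y_train)

X_test = np.linspace(-1, 1, 200)
y_pred = kernel_ridge_predict(X_train, alpha, X_test)
test_mse = np.mean((y_pred - np.sin(2 * np.pi * X_test)) ** 2)
```

The generalization error `test_mse` of this predictor, as a function of training-set size, is the quantity the paper's theory predicts from the kernel's eigenspectrum and its alignment with the target function.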
| Main Authors: | Abdulkadir Canatar, Blake Bordelon, Cengiz Pehlevan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2021 |
| Online Access: | https://doaj.org/article/3fb570c6ce05419290b8cc1eebe16977 |
Similar Items
- Deconvoluting kernel density estimation and regression for locally differentially private data
  by: Farhad Farokhi
  Published: (2020)
- Grieving as Limit Situation of Memory: Gadamer, Beamer, and Moules on the Infinite Task Posed by the Dead
  by: Theodore George
  Published: (2017)
- The Ancient Egyptian Second Infinitive? ‘iw + subject + r + infinitive’ Interpreted Through the Biblical Infinitive Absolute and the Polish Second Infinitive
  by: Mariusz Izydor Prokopowicz
  Published: (2014)
- Explaining Myanmar’s Policy of Non-Alignment: An Analytic Eclecticism Approach
  by: Sint Sint Myat
  Published: (2021)
- The role of task-related learned representations in explaining asymmetries in task switching.
  by: Ayla Barutchu, et al.
  Published: (2013)