Complexity control by gradient descent in deep networks
Understanding the mechanisms underlying the successes of deep networks remains a challenge. Here, the authors demonstrate an implicit regularization in the training of deep networks, showing that complexity control during training is hidden within the optimization technique of gradient descent.
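The implicit-regularization effect summarized in the abstract can be illustrated with a toy case that is not from the paper itself: on an underdetermined linear least-squares problem, gradient descent started from zero converges to the minimum-norm interpolating solution, even though no explicit regularizer is ever added. A minimal sketch (problem sizes, step size, and iteration count are illustrative choices, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))   # underdetermined: 20 parameters, 5 data points
b = rng.standard_normal(5)

# Plain gradient descent on the unregularized loss ||Ax - b||^2 / 2,
# initialized at zero so the iterate stays in the row space of A.
x = np.zeros(20)
lr = 0.01                          # below 2 / lambda_max(A^T A) for this A
for _ in range(50_000):
    x -= lr * A.T @ (A @ x - b)

# The minimum-norm interpolating solution, via the pseudoinverse.
x_min_norm = np.linalg.pinv(A) @ b

print(np.allclose(x, x_min_norm, atol=1e-6))
```

With zero initialization the iterates never leave the row space of `A`, so among the infinitely many interpolating solutions, gradient descent singles out the one of minimum norm; this is the linear analogue of the complexity control the article establishes for deep networks.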
Saved in:
Main Authors: Tomaso Poggio, Qianli Liao, Andrzej Banburski
Format: Article
Language: English
Published: Nature Portfolio, 2020
Online Access: https://doaj.org/article/a50e459c37be4efd87800b05dd1d5ce3
Similar Items
- Correspondence between neuroevolution and gradient descent
  by: Stephen Whitelam, et al.
  Published: (2021)
- Gradient-Descent-like Ghost Imaging
  by: Wen-Kai Yu, et al.
  Published: (2021)
- Harbor Aquaculture Area Extraction Aided with an Integration-Enhanced Gradient Descent Algorithm
  by: Yafeng Zhong, et al.
  Published: (2021)
- Hyper-parameter optimization for support vector machines using stochastic gradient descent and dual coordinate descent
  by: Wei Jiang, et al.
  Published: (2020)
- A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization
  by: Giulio Franzese, et al.
  Published: (2021)