Complexity control by gradient descent in deep networks
Understanding the underlying mechanisms behind the successes of deep networks remains a challenge. Here, the authors demonstrate an implicit regularization in the training of deep networks, showing that complexity control during training is hidden within the optimization technique of gradient descent...
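A minimal sketch of this kind of implicit regularization (a standard textbook example, not the paper's own construction): on an underdetermined least-squares problem, plain gradient descent started from zero converges to the minimum-norm interpolating solution, even though no explicit penalty term appears in the loss.

```python
import numpy as np

# Underdetermined linear regression: more parameters (d) than data points (n),
# so infinitely many weight vectors fit the data exactly.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent on 0.5 * ||Xw - y||^2, zero initialization.
w = np.zeros(d)
lr = 1e-3
for _ in range(10_000):
    w -= lr * X.T @ (X @ w - y)

# Among all interpolating solutions, gradient descent picks the one of
# minimum Euclidean norm (the pseudoinverse solution) -- an implicit
# complexity control, with no regularizer in the objective.
w_min_norm = np.linalg.pinv(X) @ y
print(np.allclose(w, w_min_norm, atol=1e-6))  # True
```

The effect follows because the updates never leave the row space of `X`, and the minimum-norm interpolant is the unique solution in that subspace.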
Main Authors: Tomaso Poggio, Qianli Liao, Andrzej Banburski
Format: article
Language: English
Published: Nature Portfolio, 2020
Online Access: https://doaj.org/article/a50e459c37be4efd87800b05dd1d5ce3
Similar documents
- Correspondence between neuroevolution and gradient descent
  by: Stephen Whitelam, et al.
  Published: (2021)
- Gradient-Descent-like Ghost Imaging
  by: Wen-Kai Yu, et al.
  Published: (2021)
- Harbor Aquaculture Area Extraction Aided with an Integration-Enhanced Gradient Descent Algorithm
  by: Yafeng Zhong, et al.
  Published: (2021)
- Hyper-parameter optimization for support vector machines using stochastic gradient descent and dual coordinate descent
  by: Wei Jiang, et al.
  Published: (2020)
- A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization
  by: Giulio Franzese, et al.
  Published: (2021)