Complexity control by gradient descent in deep networks
Understanding the mechanisms underlying the successes of deep networks remains a challenge. Here, the authors demonstrate an implicit regularization in the training of deep networks, showing that the control of complexity during training is hidden within the optimization technique of gradient descent…
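The implicit regularization described in the abstract can be illustrated with a standard toy setup (this is an illustrative sketch, not the paper's experiment): running plain gradient descent on an exponential loss over linearly separable data. The weight norm grows without bound, yet the weight *direction* converges to the maximum-margin separator, so complexity is controlled without any explicit penalty term. The data points and learning rate below are arbitrary choices for the demonstration.

```python
import numpy as np

# Two linearly separable points; the max-margin direction is (1,1)/sqrt(2).
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(20000):
    margins = y * (X @ w)                 # per-sample margins y_i <w, x_i>
    grad = -(y * np.exp(-margins)) @ X    # gradient of sum_i exp(-y_i <w, x_i>)
    w -= lr * grad

# The norm of w diverges (slowly), but the normalized weights converge
# to the max-margin direction -- gradient descent's implicit bias.
direction = w / np.linalg.norm(w)
print(direction)  # ~ [0.7071, 0.7071]
```

No weight decay or norm constraint appears anywhere in the loop; the margin-maximizing behavior emerges from the dynamics of gradient descent itself, which is the flavor of complexity control the article analyzes for deep networks.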
| Main Authors: | Tomaso Poggio, Qianli Liao, Andrzej Banburski |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Nature Portfolio, 2020 |
| Online Access: | https://doaj.org/article/a50e459c37be4efd87800b05dd1d5ce3 |
Similar Items
- Correspondence between neuroevolution and gradient descent
  by: Stephen Whitelam, et al.
  Published: (2021)
- Gradient-Descent-like Ghost Imaging
  by: Wen-Kai Yu, et al.
  Published: (2021)
- Harbor Aquaculture Area Extraction Aided with an Integration-Enhanced Gradient Descent Algorithm
  by: Yafeng Zhong, et al.
  Published: (2021)
- Hyper-parameter optimization for support vector machines using stochastic gradient descent and dual coordinate descent
  by: Wei Jiang, et al.
  Published: (2020)
- A Scalable Bayesian Sampling Method Based on Stochastic Gradient Descent Isotropization
  by: Giulio Franzese, et al.
  Published: (2021)