Correspondence between neuroevolution and gradient descent
Gradient-based and non-gradient-based methods for training neural networks are usually considered to be fundamentally different. The authors derive, and illustrate numerically, an analytic equivalence between the dynamics of neural network training under conditioned stochastic mutations, and under g...
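To illustrate the idea summarized in the abstract, the following minimal sketch compares greedy conditioned mutations (accept a random weight perturbation only if it lowers the loss) with explicit gradient descent on a toy quadratic loss. This is not the paper's derivation; the loss, hyperparameters, and function names are illustrative assumptions.

```python
import random

def loss(theta):
    # Toy quadratic loss with its minimum at theta = 2 (illustrative only).
    return (theta - 2.0) ** 2

def neuroevolution(theta, sigma=0.01, steps=5000, seed=0):
    # Conditioned stochastic mutation: propose a Gaussian perturbation of the
    # parameter and keep it only if it reduces the loss.
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = theta + rng.gauss(0.0, sigma)
        if loss(candidate) < loss(theta):
            theta = candidate
    return theta

def gradient_descent(theta, lr=0.01, steps=5000):
    # Standard gradient descent using the explicit gradient of the toy loss.
    for _ in range(steps):
        grad = 2.0 * (theta - 2.0)
        theta -= lr * grad
    return theta

print(neuroevolution(-1.0))    # approaches the minimum at theta = 2
print(gradient_descent(-1.0))  # approaches the minimum at theta = 2
```

With a small mutation scale, both trajectories settle at the same minimum, which is the qualitative behavior the paper makes precise analytically.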
Main authors: Stephen Whitelam, Viktor Selin, Sang-Won Park, Isaac Tamblyn
Format: article
Language: EN
Published: Nature Portfolio, 2021
Online access: https://doaj.org/article/2228d1b435c34f58901cee411ded17c8
Similar items
- Complexity control by gradient descent in deep networks
  by: Tomaso Poggio, et al.
  Published: (2020)
- Gradient-Descent-like Ghost Imaging
  by: Wen-Kai Yu, et al.
  Published: (2021)
- Hyper-parameter optimization for support vector machines using stochastic gradient descent and dual coordinate descent
  by: Wei Jiang, et al.
  Published: (2020)
- Ensemble Neuroevolution-Based Approach for Multivariate Time Series Anomaly Detection
  by: Kamil Faber, et al.
  Published: (2021)
- Harbor Aquaculture Area Extraction Aided with an Integration-Enhanced Gradient Descent Algorithm
  by: Yafeng Zhong, et al.
  Published: (2021)