Correspondence between neuroevolution and gradient descent

Gradient-based and non-gradient-based methods for training neural networks are usually considered to be fundamentally different. The authors derive, and illustrate numerically, an analytic equivalence between the dynamics of neural network training under conditioned stochastic mutations, and under gradient descent.


Bibliographic Details
Main Authors: Stephen Whitelam, Viktor Selin, Sang-Won Park, Isaac Tamblyn
Format: Article
Language: EN
Published: Nature Portfolio 2021
Subjects:
Q
Online Access: https://doaj.org/article/2228d1b435c34f58901cee411ded17c8
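As a toy illustration of the correspondence described in the abstract, the sketch below compares a simple evolutionary update, Gaussian weight mutations kept only when they do not increase the loss (a greedy stand-in for the conditioned stochastic mutations the paper analyzes; the greedy rule is an assumption here, not the authors' exact scheme), with ordinary gradient descent on a one-dimensional quadratic loss. Both dynamics drive the weight toward the same minimum.

```python
import random

def loss(w):
    # simple quadratic loss with minimum at w = 2.0
    return (w - 2.0) ** 2

def grad(w):
    # analytic gradient of the quadratic loss
    return 2.0 * (w - 2.0)

def gradient_descent(w0, lr=0.05, steps=200):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def neuroevolution(w0, sigma=0.05, steps=2000, seed=0):
    # greedy conditioned mutation: propose a Gaussian perturbation
    # of scale sigma and keep it only if the loss does not increase
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        trial = w + rng.gauss(0.0, sigma)
        if loss(trial) <= loss(w):
            w = trial
    return w

w_gd = gradient_descent(5.0)
w_ne = neuroevolution(5.0)
print(w_gd, w_ne)  # both approach the minimum at 2.0
```

This is illustrative only: the paper's analytic result concerns the averaged dynamics of mutation-based training for small mutation scale, whereas this sketch just shows the two procedures converging to the same minimizer on a toy loss.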