A solution to the learning dilemma for recurrent networks of spiking neurons

Bellec et al. present a mathematically founded approximation for gradient descent training of recurrent neural networks without backwards propagation in time. This enables biologically plausible training of spike-based neural network models with working memory and supports on-chip training of neuromorphic hardware.

Bibliographic Details
Main Authors: Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass
Format: Article
Language: EN
Published: Nature Portfolio 2020
Subjects: Q
Online Access: https://doaj.org/article/7910940bc2a3480f8457777933723618
Description
Summary: Bellec et al. present a mathematically founded approximation for gradient descent training of recurrent neural networks without backwards propagation in time. This enables biologically plausible training of spike-based neural network models with working memory and supports on-chip training of neuromorphic hardware.