Random synaptic feedback weights support error backpropagation for deep learning
Multi-layered neural architectures that implement learning require elaborate, biologically implausible mechanisms for the symmetric backpropagation of errors. Here the authors propose a simple resolution to this problem of blame assignment that works even when feedback uses random synaptic weights.
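The mechanism sketched in the abstract, often called feedback alignment, replaces the transposed forward weights used by exact backpropagation with a fixed random feedback matrix when propagating the output error to the hidden layer. Below is a minimal NumPy sketch of that idea; the network sizes, learning rate, toy linear-regression task, and names (W1, W2, B) are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal feedback-alignment sketch: hidden-layer errors are carried by a
# FIXED random matrix B instead of W2.T, as exact backprop would require.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 1

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights

T = rng.normal(0, 1.0, (n_out, n_in))    # hypothetical linear target map
lr = 0.05

for step in range(2000):
    x = rng.normal(0, 1.0, (n_in, 1))
    y_target = T @ x

    h = np.tanh(W1 @ x)                  # forward pass
    y = W2 @ h

    e = y - y_target                     # output error
    # Feedback alignment: propagate e through B, not W2.T
    delta_h = (B @ e) * (1.0 - h**2)     # tanh derivative

    W2 -= lr * e @ h.T                   # gradient-style updates
    W1 -= lr * delta_h @ x.T

    if step % 500 == 0:
        print(f"step {step:4d}  loss {(e**2).sum():.4f}")
```

The paper's central observation is that learning still succeeds in this regime: the forward weights adapt so that the random feedback B comes to deliver useful error signals, which a run of this sketch illustrates through a decreasing loss.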
Main Authors: Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, Colin J. Akerman
Format: Article
Language: English
Published: Nature Portfolio, 2016
Online Access: https://doaj.org/article/7c2d8359409f40dc8a188044a42e7c68
Similar Items
- Statistical Mechanics of Deep Linear Neural Networks: The Backpropagating Kernel Renormalization
  by: Qianyi Li, et al.
  Published: (2021)
- Feedback Model to Support Designers of Blended Learning Courses
  by: Hans G. K. Hummel
  Published: (2006)
- Event-based backpropagation can compute exact gradients for spiking neural networks
  by: Timo C. Wunderlich, et al.
  Published: (2021)
- Modulation of sensory prediction error in Purkinje cells during visual feedback manipulations
  by: Martha L. Streng, et al.
  Published: (2018)
- The influence of synaptic weight distribution on neuronal population dynamics
  by: Ramakrishnan Iyer, et al.
  Published: (2013)