Random synaptic feedback weights support error backpropagation for deep learning


Bibliographic Details
Main Authors: Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, Colin J. Akerman
Format: Article
Language: EN
Published: Nature Portfolio, 2016
Subjects: Q
Online Access: https://doaj.org/article/7c2d8359409f40dc8a188044a42e7c68
Description
Summary: Multi-layered neural architectures that implement learning require elaborate mechanisms for symmetric backpropagation of errors, which are biologically implausible. Here the authors propose a simple resolution to this problem of blame assignment that works even when feedback is carried by random synaptic weights.
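
The summary describes the feedback-alignment idea: instead of propagating output errors backward through the transpose of the forward weights, as standard backpropagation requires, errors are delivered to earlier layers through a fixed random matrix. A minimal NumPy sketch of that idea follows; the toy task, layer sizes, and learning rate are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (hypothetical): fit a random linear-nonlinear target.
n_in, n_hidden, n_out = 30, 20, 10
X = rng.standard_normal((200, n_in))
T = np.tanh(X @ rng.standard_normal((n_in, n_out)))

# Forward weights are learned; B is a fixed random feedback matrix.
# Standard backprop would compute the hidden error as e @ W2.T;
# feedback alignment replaces W2.T with B, which is never updated.
W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))
B = rng.standard_normal((n_out, n_hidden))

lr = 0.05
for step in range(1000):
    h = np.tanh(X @ W1)            # hidden activity
    y = h @ W2                     # linear readout
    e = y - T                      # output error

    # Random feedback pathway: the error reaches the hidden layer via B.
    dh = (e @ B) * (1.0 - h**2)    # tanh derivative

    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

print("final MSE:", float((e**2).mean()))
```

In a run of this sketch the loss falls even though B carries no information about W2; the paper's observation is that the forward weights gradually come into alignment with the random feedback, so B ends up delivering useful error signals.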