Random synaptic feedback weights support error backpropagation for deep learning
Saved in:
Main authors: Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, Colin J. Akerman
Format: article
Language: EN
Published: Nature Portfolio, 2016
Subjects:
Online access: https://doaj.org/article/7c2d8359409f40dc8a188044a42e7c68
Summary: Multi-layered neural architectures that implement learning require elaborate mechanisms for symmetric backpropagation of errors that are biologically implausible. Here the authors propose a simple resolution to this problem of blame assignment that works even with feedback using random synaptic weights.
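The summary describes the paper's core idea, often called feedback alignment: during learning, the error signal is carried backward through a fixed random matrix rather than through the transpose of the forward weights, as exact backpropagation would require. Below is a minimal NumPy sketch of that idea on a toy regression task. The network size, task, and learning rate are illustrative assumptions, not details taken from this record or the paper.

```python
# Minimal sketch of feedback alignment: the backward pass uses a fixed
# random matrix B in place of W2.T. All dimensions and hyperparameters
# here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn a random linear target function.
n_in, n_hidden, n_out, n_samples = 10, 20, 5, 200
X = rng.standard_normal((n_samples, n_in))
T = X @ rng.standard_normal((n_in, n_out))  # targets

# Forward weights (learned) and a fixed random feedback matrix B.
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_out)) * 0.1
B = rng.standard_normal((n_out, n_hidden))  # never updated

lr = 0.01
for epoch in range(201):
    # Forward pass: one tanh hidden layer, linear output.
    h = np.tanh(X @ W1)
    y = h @ W2

    # Output-layer error.
    e = y - T

    # Feedback alignment step: propagate the error through the fixed
    # random matrix B instead of W2.T.
    delta_h = (e @ B) * (1.0 - h**2)  # tanh derivative

    # Gradient-style weight updates.
    W2 -= lr * h.T @ e / n_samples
    W1 -= lr * X.T @ delta_h / n_samples

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  loss {0.5 * np.mean(e**2):.4f}")
```

For comparison, exact backpropagation would compute the hidden-layer error as `(e @ W2.T) * (1.0 - h**2)`, which requires the backward pathway to mirror the forward weights. The paper's observation is that learning can still succeed when that transpose is replaced by the fixed random matrix `B`, which removes the biologically implausible weight-symmetry requirement.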