Learning, memory, and the role of neural network architecture.

The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes.


Bibliographic Details
Main Authors: Ann M Hermundstad, Kevin S Brown, Danielle S Bassett, Jean M Carlson
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2011
Subjects: Biology (General); QH301-705.5
Online Access: https://doaj.org/article/ee8ce64ee4c044f4a62b0be947a4809d
id oai:doaj.org-article:ee8ce64ee4c044f4a62b0be947a4809d
record_format dspace
spelling oai:doaj.org-article:ee8ce64ee4c044f4a62b0be947a4809d
last_indexed 2021-11-18T05:50:27Z
title Learning, memory, and the role of neural network architecture.
issn 1553-734X
1553-7358
doi 10.1371/journal.pcbi.1002063
publishDate 2011-06-01
url https://doaj.org/article/ee8ce64ee4c044f4a62b0be947a4809d
fulltext https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/21738455/pdf/?tool=EBI
toc https://doaj.org/toc/1553-734X
https://doaj.org/toc/1553-7358
citation PLoS Computational Biology, Vol 7, Iss 6, p e1002063 (2011)
institution DOAJ
collection DOAJ
language EN
topic Biology (General)
QH301-705.5
spellingShingle Biology (General)
QH301-705.5
Ann M Hermundstad
Kevin S Brown
Danielle S Bassett
Jean M Carlson
Learning, memory, and the role of neural network architecture.
description The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
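The parallel-versus-layered contrast and the curvature analysis described above can be made concrete with a small numerical experiment. The following is a minimal sketch, not the authors' code: the network sizes, target function, training procedure, and random-perturbation curvature proxy are all illustrative assumptions, chosen only to mirror a wide, shallow ("parallel") network against a narrow, deep ("layered") one on a supervised function-approximation task.

# Sketch (assumptions throughout): compare a wide single-hidden-layer net
# with a deep narrow net on function approximation, then probe local error
# landscape curvature via small random weight perturbations.
import numpy as np

rng = np.random.default_rng(0)

def init_net(layer_sizes):
    """Random weights for a fully connected tanh network."""
    return [rng.normal(0, 1 / np.sqrt(m), size=(m, n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    a = x
    for w in weights[:-1]:
        a = np.tanh(a @ w)
    return a @ weights[-1]          # linear output layer

def mse(weights, x, y):
    return float(np.mean((forward(weights, x) - y) ** 2))

def numeric_grad(weights, x, y, eps=1e-5):
    """Central-difference gradient of the error in weight space."""
    grads = []
    for w in weights:
        g = np.zeros_like(w)
        for idx in np.ndindex(w.shape):
            old = w[idx]
            w[idx] = old + eps; e_plus = mse(weights, x, y)
            w[idx] = old - eps; e_minus = mse(weights, x, y)
            w[idx] = old
            g[idx] = (e_plus - e_minus) / (2 * eps)
        grads.append(g)
    return grads

def train(weights, x, y, lr=0.05, steps=500):
    for _ in range(steps):
        grads = numeric_grad(weights, x, y)
        for w, g in zip(weights, grads):
            w -= lr * g
    return weights

# Target function to approximate (an illustrative assumption).
x = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(np.pi * x)

parallel = init_net([1, 12, 1])        # one wide hidden layer
layered  = init_net([1, 4, 4, 4, 1])   # several narrow hidden layers

for name, net in [("parallel", parallel), ("layered", layered)]:
    train(net, x, y)
    base = mse(net, x, y)
    # Crude curvature probe: error increase under small random weight
    # perturbations; sharper (more curved) minima show larger increases.
    bumps = []
    for _ in range(20):
        bumped = [w + 1e-3 * rng.normal(size=w.shape) for w in net]
        bumps.append(mse(bumped, x, y) - base)
    print(f"{name}: error={base:.4f}, curvature proxy={np.mean(bumps):.2e}")

The perturbation probe stands in for a Hessian-based curvature characterization: to second order, the mean error increase under small isotropic weight displacements is proportional to the trace of the local Hessian, so sharper minima score higher on this proxy.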
format article
author Ann M Hermundstad
Kevin S Brown
Danielle S Bassett
Jean M Carlson
author_facet Ann M Hermundstad
Kevin S Brown
Danielle S Bassett
Jean M Carlson
author_sort Ann M Hermundstad
title Learning, memory, and the role of neural network architecture.
title_short Learning, memory, and the role of neural network architecture.
title_full Learning, memory, and the role of neural network architecture.
title_fullStr Learning, memory, and the role of neural network architecture.
title_full_unstemmed Learning, memory, and the role of neural network architecture.
title_sort learning, memory, and the role of neural network architecture.
publisher Public Library of Science (PLoS)
publishDate 2011
url https://doaj.org/article/ee8ce64ee4c044f4a62b0be947a4809d
work_keys_str_mv AT annmhermundstad learningmemoryandtheroleofneuralnetworkarchitecture
AT kevinsbrown learningmemoryandtheroleofneuralnetworkarchitecture
AT daniellesbassett learningmemoryandtheroleofneuralnetworkarchitecture
AT jeanmcarlson learningmemoryandtheroleofneuralnetworkarchitecture
_version_ 1718424826506379264