Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Abstract:
In recent years, artificial neural networks have achieved performance close to, or better than, that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparisons between different neural networks and human performance, deepening our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that artificial grammars can be learnt by human subjects after little exposure, and often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can "learn" (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar's complexity level. Moreover, as in visual processing, where feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, the difference in performance between the architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures. This supports the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks are hypothesized to capture the dynamics involved in implicit learning.
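To make the comparison described in the abstract concrete, the sketch below illustrates the general experimental logic: train a feedforward and a recurrent classifier, via error back-propagation, to distinguish grammatical from ungrammatical strings drawn from a regular grammar. It is a minimal illustration only, assuming the classic Reber grammar and arbitrary model sizes and training settings; the paper's actual grammars, architectures, and hyperparameters are not specified in this record.

# A minimal sketch (not the authors' code) comparing a feedforward and a
# recurrent classifier on a Reber-style artificial grammar learning task.
# The grammar, model sizes and training regime are illustrative assumptions.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

# Classic Reber grammar: strings are 'B' + <path through this graph> + 'E';
# state 5 is the accepting state.
TRANSITIONS = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', 5)],
    4: [('P', 3), ('V', 5)],
}
ALPHABET = 'BTPSXVE'
SYM2IDX = {s: i for i, s in enumerate(ALPHABET)}
MAX_LEN = 20

def grammatical():
    """Sample one grammatical string of length <= MAX_LEN."""
    while True:
        state, out = 0, ['B']
        while state != 5:
            sym, state = random.choice(TRANSITIONS[state])
            out.append(sym)
        out.append('E')
        if len(out) <= MAX_LEN:
            return ''.join(out)

def ungrammatical():
    """Corrupt one inner symbol; this almost always breaks grammaticality."""
    s = list(grammatical())
    i = random.randrange(1, len(s) - 1)
    s[i] = random.choice([c for c in ALPHABET if c != s[i]])
    return ''.join(s)

def encode(s):
    """One-hot encode a string, right-padded with zeros to MAX_LEN."""
    x = torch.zeros(MAX_LEN, len(ALPHABET))
    for i, c in enumerate(s):
        x[i, SYM2IDX[c]] = 1.0
    return x

def make_batch(n):
    """Half grammatical (label 1), half ungrammatical (label 0)."""
    strings = ([grammatical() for _ in range(n // 2)]
               + [ungrammatical() for _ in range(n // 2)])
    y = torch.tensor([1.0] * (n // 2) + [0.0] * (n // 2))
    return torch.stack([encode(s) for s in strings]), y

class Feedforward(nn.Module):
    """Sees the whole padded string at once: no temporal structure."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(MAX_LEN * len(ALPHABET), 64),
            nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Recurrent(nn.Module):
    """Reads symbols one at a time; classifies from the final hidden state."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(len(ALPHABET), 64, batch_first=True)
        self.out = nn.Linear(64, 1)
    def forward(self, x):
        _, h = self.rnn(x)              # h: (num_layers, batch, hidden)
        return self.out(h[-1]).squeeze(-1)

def train_and_test(model, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        x, y = make_batch(64)
        opt.zero_grad()
        loss_fn(model(x), y).backward()  # error back-propagation
        opt.step()
    x, y = make_batch(1000)
    with torch.no_grad():
        return ((model(x) > 0).float() == y).float().mean().item()

print(f"feedforward accuracy: {train_and_test(Feedforward()):.2f}")
print(f"recurrent accuracy:   {train_and_test(Recurrent()):.2f}")

Flattening the padded string gives the feedforward network simultaneous access to all positions, whereas the recurrent network must integrate symbols sequentially; that sequential integration is the property the abstract links to explicit, conscious processing.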

Bibliographic Details
Main Authors: Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen
Format: Article
Language: English (EN)
Published: Nature Portfolio, 2020
Published in: Scientific Reports, Vol. 10, Iss. 1, pp. 1-15 (2020)
DOI: 10.1038/s41598-020-79127-y
ISSN: 2045-2322
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/c2575fa6d2e64527acdf99f9febf6d42
https://doi.org/10.1038/s41598-020-79127-y