Parallel model-based and model-free reinforcement learning for card sorting performance

Abstract The Wisconsin Card Sorting Test (WCST) is considered a gold standard for the assessment of cognitive flexibility. On the WCST, repeating a sorting category following negative feedback is typically treated as indicating reduced cognitive flexibility. Therefore, such responses are referred to as ‘perseveration’ errors. Recent research suggests that the propensity for perseveration errors is modulated by response demands: they occur less frequently when committing them repeats the previously executed response. Here, we propose parallel reinforcement-learning models of card sorting performance, which assume that card sorting performance results from model-free reinforcement learning at the level of responses occurring in parallel with model-based reinforcement learning at the level of categories. We compared parallel reinforcement-learning models with purely model-based reinforcement learning and with the state-of-the-art attentional-updating model. We analyzed data from 375 participants who completed a computerized WCST. Parallel reinforcement-learning models showed the best predictive accuracy for the majority of participants. Only parallel reinforcement-learning models accounted for the modulation of perseveration propensity by response demands. In conclusion, parallel reinforcement-learning models provide a new theoretical perspective on card sorting and offer a suitable framework for discerning individual differences in latent processes that subserve behavioral flexibility.
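The parallel-updating idea summarized in the abstract can be sketched in a few lines: delta-rule value updates run at two levels at once, one over the WCST sorting categories (model-based level) and one over the key-card responses (model-free level), and the two value sets are combined when choosing a response. The parameter names, learning rates, weighting rule, and card-to-category mapping below are illustrative assumptions for a minimal sketch, not the authors' fitted model.

```python
import numpy as np

# Hypothetical sketch of parallel value updating on the WCST.
# Category-level ("model-based") values for the 3 sorting rules
# (color, shape, number) and response-level ("model-free") values
# for the 4 key cards. All parameters are made up for illustration.
n_categories, n_responses = 3, 4
q_cat = np.zeros(n_categories)    # category values
q_resp = np.zeros(n_responses)    # response (key-card) values

alpha_cat, alpha_resp = 0.3, 0.3  # learning rates per level
w = 0.7                           # weight on category-level values
beta = 5.0                        # softmax inverse temperature

def softmax(x, beta):
    """Numerically stable softmax over combined values."""
    z = beta * (x - x.max())
    p = np.exp(z)
    return p / p.sum()

def update(chosen_cat, chosen_resp, reward):
    """Delta-rule updates at both levels after one trial's feedback."""
    q_cat[chosen_cat] += alpha_cat * (reward - q_cat[chosen_cat])
    q_resp[chosen_resp] += alpha_resp * (reward - q_resp[chosen_resp])

# Example trial: sorting by category 0 via key card 2 is rewarded.
update(chosen_cat=0, chosen_resp=2, reward=1.0)

# Response probabilities combine both value systems. The mapping of
# key cards to categories changes trial by trial on the WCST; this
# particular mapping is invented for the example.
resp_to_cat = np.array([1, 2, 0, 1])
combined = w * q_cat[resp_to_cat] + (1 - w) * q_resp
p_resp = softmax(combined, beta)
```

Because the reinforced response (key card 2) here also maps to the reinforced category, both value systems push choice probability toward that card; on trials where response and category demands dissociate, the two levels pull in different directions, which is what lets such a model capture the response-demand modulation of perseveration described above.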


Saved in:
Bibliographic Details
Main Authors: Alexander Steinke, Florian Lange, Bruno Kopp
Format: article
Language: EN
Published: Nature Portfolio 2020
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/d202d99dd7e14776a906d2ed6ffe2d31
Published in: Scientific Reports, Vol 10, Iss 1, Pp 1-18 (2020)
ISSN: 2045-2322
DOI: https://doi.org/10.1038/s41598-020-72407-7