Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning

The brain dynamically arbitrates between model-based and model-free reinforcement learning (RL). Here, the authors show that participants tended to increase model-based control in response to increasing task complexity, but resorted to model-free control when both uncertainty and task complexity were high.

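As a rough illustration of the arbitration pattern described above, the Python sketch below blends model-based and model-free value estimates with a weight that rises with task complexity but falls back toward model-free control when state-space uncertainty is also high. The function names, the sigmoid weighting, and the coefficients are assumptions made purely for illustration; they are not the computational model reported in the article.

```python
import numpy as np

def mb_weight(uncertainty, complexity):
    """Hypothetical weight given to model-based (MB) control.

    Qualitatively mirrors the finding summarized above: complexity alone
    pushes control toward MB, but high uncertainty combined with high
    complexity pushes it back toward model-free (MF). The functional form
    and coefficients are illustrative assumptions, not the authors'
    fitted arbitration model.
    """
    drive = complexity                          # complexity favours MB control
    penalty = 2.0 * uncertainty * complexity    # ...unless uncertainty is also high
    return 1.0 / (1.0 + np.exp(-(drive - penalty)))  # squash to [0, 1]

def blended_value(q_mb, q_mf, uncertainty, complexity):
    """Mix MB and MF action values with the arbitration weight."""
    w = mb_weight(uncertainty, complexity)
    return w * q_mb + (1.0 - w) * q_mf

if __name__ == "__main__":
    print(mb_weight(uncertainty=0.1, complexity=1.0))  # ~0.69: leans model-based
    print(mb_weight(uncertainty=1.0, complexity=1.0))  # ~0.27: falls back to model-free
```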

Saved in:
Bibliographic Details
Main Authors: Dongjae Kim, Geon Yeong Park, John P. O'Doherty, Sang Wan Lee
Format: Article
Language: English
Published: Nature Portfolio, 2019
Subjects: Science (Q)
Online Access: https://doaj.org/article/a966c6e9a9d94f03aa0fe7722aa4c2e6
DOI: 10.1038/s41467-019-13632-1
ISSN: 2041-1723
Published in: Nature Communications, Vol 10, Iss 1, Pp 1-14 (2019)