Modeling the violation of reward maximization and invariance in reinforcement schedules.
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys.
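The abstract contrasts the monkeys' behavior with standard temporal-difference learning, which predicts invariance across schedule length. The sketch below is a minimal, generic TD(0) illustration of that baseline prediction, not the authors' modified model; the state encoding, the reward of 1.0 on the final trial, and all parameter values are assumptions made for the example.

```python
import random
from collections import defaultdict

# Tabular TD(0) value learning over the states of a cued reward schedule.
# A state is (schedule_length, trials_remaining_before_reward); only the
# final trial of a schedule delivers reward. Parameters are illustrative.
ALPHA = 0.1   # learning rate (assumed)
GAMMA = 0.9   # discount factor (assumed)

V = defaultdict(float)  # state-value estimates, initialized to 0

def run_schedule(length):
    """One pass through a correctly performed schedule of `length` trials."""
    for remaining in range(length - 1, -1, -1):   # length-1, ..., 1, 0
        s = (length, remaining)
        if remaining == 0:                 # rewarded trial ends the schedule
            r, v_next = 1.0, 0.0
        else:                              # unrewarded trial, schedule continues
            r, v_next = 0.0, V[(length, remaining - 1)]
        V[s] += ALPHA * (r + GAMMA * v_next - V[s])   # TD(0) update

random.seed(0)
for _ in range(5000):                      # interleave schedules of length 1-3
    run_schedule(random.choice([1, 2, 3]))

# States equally distant from reward converge to the same value regardless of
# schedule length, e.g. one trial to go in a 2- or 3-trial schedule:
print(round(V[(2, 1)], 3), round(V[(3, 1)], 3))   # both approach GAMMA = 0.9
```

Because the learned values, and any error rates derived from them, coincide for states equally distant from reward, this baseline predicts no schedule length effect; the behavioral finding reported above violates exactly that invariance.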
Saved in:
| Main Authors: | Giancarlo La Camera, Barry J Richmond |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Public Library of Science (PLoS), 2008 |
| Subjects: | Biology (General); QH301-705.5 |
| Online Access: | https://doaj.org/article/1e9787227f524e8fb2c0dc048b7efad3 |
| id | oai:doaj.org-article:1e9787227f524e8fb2c0dc048b7efad3 |
|---|---|
| record_format | dspace |
| last_indexed | 2021-11-25T05:41:11Z |
| ISSN | 1553-734X, 1553-7358 |
| DOI | 10.1371/journal.pcbi.1000131 |
| date | 2008-08-01 |
| full_text | https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/18688266/?tool=EBI |
| journal_toc | https://doaj.org/toc/1553-734X, https://doaj.org/toc/1553-7358 |
| source | PLoS Computational Biology, Vol 4, Iss 8, p e1000131 (2008) |
| institution | DOAJ |
| collection | DOAJ |
| language | EN |
| topic | Biology (General); QH301-705.5 |
| format | article |
| author | Giancarlo La Camera; Barry J Richmond |
| title | Modeling the violation of reward maximization and invariance in reinforcement schedules. |
| publisher | Public Library of Science (PLoS) |
| publishDate | 2008 |
| url | https://doaj.org/article/1e9787227f524e8fb2c0dc048b7efad3 |
| _version_ | 1718414503845036032 |