Temporal-difference reinforcement learning with distributed representations.
Temporal-difference (TD) algorithms have been proposed as models of reinforcement learning (RL). We examine two issues of distributed representation in these TD algorithms: distributed representations of belief and distributed discounting factors. Distributed representation of belief allows the beli...
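As background to the abstract above, the core TD(0) update and the idea of maintaining several value functions in parallel, one per discount factor, can be sketched as follows. This is a minimal illustration, not the authors' code: the toy states, rewards, and gamma values are assumptions chosen for the example.

```python
# Illustrative sketch (not the authors' implementation): tabular TD(0)
# learning run in parallel for several discount factors, echoing the
# abstract's "distributed discounting factors". States, rewards, and
# the gamma values below are assumptions for the example.

gammas = [0.5, 0.9, 0.99]               # one value function per discount factor
n_states = 3
alpha = 0.1                             # learning rate
V = [[0.0] * n_states for _ in gammas]  # V[i][s]: value of state s under gammas[i]

# A toy circular task: (state, reward, next_state); reward arrives on the last step.
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 0)]

for _ in range(2000):
    for s, r, s_next in episode:
        for i, g in enumerate(gammas):
            # TD error, computed separately for each discount factor
            td_error = r + g * V[i][s_next] - V[i][s]
            V[i][s] += alpha * td_error

for i, g in enumerate(gammas):
    print(f"gamma={g}: " + ", ".join(f"V({s})={V[i][s]:.3f}" for s in range(n_states)))
```

Because all value functions see the same experience, the only difference between them is how steeply future reward is discounted: the gamma=0.99 agent values every state more than the gamma=0.5 agent does.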
Saved in:

| Main authors: | Zeb Kurth-Nelson, A David Redish |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Public Library of Science (PLoS), 2009 |
| Subjects: | |
| Online access: | https://doaj.org/article/10b71edf81334d619f75d3ba97df1661 |
Similar Items

- SSCC TD: a serial and simultaneous configural-cue compound stimuli representation for temporal difference learning.
  by: Esther Mondragón, et al.
  Published: (2014)
- The Mechanical Representation of Temporal Delays
  by: Raz Leib, et al.
  Published: (2017)
- Rapid face adaptation distributes representation in inferior-temporal cortex across time and neuronal dimensions
  by: Abdol-Hossein Vahabie, et al.
  Published: (2017)
- Temporal ordering of cancer microarray data through a reinforcement learning based approach.
  by: Gabriela Czibula, et al.
  Published: (2013)
- Representational changes of latent strategies in rat medial prefrontal cortex precede changes in behaviour
  by: Nathaniel James Powell, et al.
  Published: (2016)