Visual behavior modelling for robotic theory of mind
Abstract: Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability with which we would like to endow robots. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information or assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal, non-symbolic robotic experiment in which an observer must visualize the future plans of an actor robot based only on an image depicting the actor's initial scene. We found that an AI observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.
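The abstract describes an image-in, image-out scheme: the observer network receives a single image of the actor robot's initial scene and renders an image visualizing the actor's predicted future behavior. As a rough illustration only, the sketch below shows what such an observer could look like as a convolutional encoder-decoder. The class name `ObserverNet`, the layer sizes, the 64x64 resolution, and the pixel-wise MSE objective are all assumptions made for this sketch, not the authors' published architecture.

```python
# Minimal sketch of the image-to-image "visual behavior modelling" idea:
# input  = one RGB image of the actor's initial scene,
# output = one RGB image depicting the actor's predicted future behavior.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class ObserverNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the 3-channel initial-scene image (each conv halves H and W).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to a full-resolution predicted image in [0, 1].
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, initial_scene):
        return self.decoder(self.encoder(initial_scene))

model = ObserverNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # a pixel-wise loss; the paper's actual objective may differ

# Dummy batch standing in for training pairs: the first frame of an episode
# as input, a frame depicting the actor's completed behavior as the target.
initial = torch.rand(8, 3, 64, 64)
future = torch.rand(8, 3, 64, 64)

optimizer.zero_grad()
loss = loss_fn(model(initial), future)
loss.backward()
optimizer.step()
```

Under this framing, training pairs would come from recorded episodes of the actor robot, and no symbolic labels of the activity are ever given to the observer, which is the point the abstract emphasizes.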
Main Authors: Boyuan Chen, Carl Vondrick, Hod Lipson
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/cd7a91c276d741a38635cc2a85203743
id: oai:doaj.org-article:cd7a91c276d741a38635cc2a85203743 (last updated 2021-12-02T14:12:47Z)
record_format: dspace
DOI: 10.1038/s41598-020-77918-x (https://doi.org/10.1038/s41598-020-77918-x)
ISSN: 2045-2322 (journal TOC: https://doaj.org/toc/2045-2322)
published: 2021-01-01
source: Scientific Reports, Vol 11, Iss 1, Pp 1-14 (2021)
institution: DOAJ
collection: DOAJ
language: EN
topic: Medicine (R); Science (Q)
format: article
author: Boyuan Chen; Carl Vondrick; Hod Lipson
author_sort: Boyuan Chen
title: Visual behavior modelling for robotic theory of mind
publisher: Nature Portfolio
publishDate: 2021
url: https://doaj.org/article/cd7a91c276d741a38635cc2a85203743