Can an android's posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?

Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The mapping method from facial expressions to the basic emotions is widely used in research on robot emotional expressions. This method claims that there are specific facial muscle activation patterns for each emotional expression and that people perceive these emotions by reading these patterns. However, recent research on human behavior reveals that some emotional expressions, such as the emotion "intense", are difficult to judge as positive or negative from the facial expression alone. Nevertheless, it has not been investigated whether robots can also express ambiguous facial expressions with no clear valence, or whether the addition of body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android can be perceived more clearly by viewers when body postures and movements are added. We conducted three experiments as online surveys among North American residents with 94, 114, and 114 participants, respectively. In Experiment 1, by calculating the entropy of valence judgments, we found that the facial expression "intense" was difficult to judge as positive or negative when participants were shown only the face. In Experiments 2 and 3, using ANOVA, we confirmed that participants were better at judging the facial valence when they were shown the whole body of the android, even though the facial expression was the same as in Experiment 1. These results suggest that facial and body expressions of robots should be designed jointly to achieve better communication with humans. To achieve smoother cooperative human-robot interaction, such as education by robots, emotional expression conveyed through a combination of both the face and the body of the robot is necessary to convey the robot's intentions or desires to humans.
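The abstract does not specify how the entropy in Experiment 1 was computed; a plausible reading, sketched here purely as an illustration with hypothetical counts (not the paper's data), is Shannon entropy over the distribution of participants' positive/negative valence judgments: a near 50/50 split yields entropy near 1 bit (maximally ambiguous), while a lopsided split yields a lower value.

```python
import math

def judgment_entropy(counts):
    """Shannon entropy (in bits) of a distribution of categorical judgments.

    counts: list of response counts per category, e.g. [positive, negative].
    Returns 0.0 for a unanimous split and log2(k) for a uniform k-way split.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical splits: an ambiguous face divides judgments roughly 50/50
# (entropy near 1 bit); adding body posture tilts judgments one way
# (entropy drops well below 1 bit).
face_only = judgment_entropy([47, 47])
face_and_body = judgment_entropy([80, 14])
```

Under this reading, comparing the entropy of each expression's judgment distribution against the 1-bit maximum gives a simple ambiguity score for a binary valence task.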


Saved in:
Bibliographic Details
Main Authors: Satoshi Yagi, Yoshihiro Nakata, Yutaka Nakamura, Hiroshi Ishiguro
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/e656c40edfaf49a085f066d43e5e6f01
Journal: PLoS ONE, Vol 16, Iss 8, p e0254905 (2021)
DOI: https://doi.org/10.1371/journal.pone.0254905
ISSN: 1932-6203