Intelligent autonomous agents and trust in virtual reality
Intelligent autonomous agents (IAA) are proliferating and rapidly evolving due to the exponential growth in computational power and recent advances in, for instance, artificial intelligence research. Ranging from chatbots and personal virtual assistants to medical decision-aiding and self-driving or self-piloting systems, IAA are increasingly integrated into many aspects of daily life, whether users are aware of them or not.
Saved in:
Main Authors: Ningyuan Sun, Jean Botev
Format: article
Language: EN
Published: Elsevier, 2021
Subjects: Trust modeling; Virtual reality; Agent interaction; Collaboration; Electronic computers. Computer science (QA75.5-76.95); Psychology (BF1-990)
Online Access: https://doaj.org/article/a5ae8f0338454d4f96a615c53d002bc4
id |
oai:doaj.org-article:a5ae8f0338454d4f96a615c53d002bc4 |
record_format |
dspace |
spelling |
oai:doaj.org-article:a5ae8f0338454d4f96a615c53d002bc4 (2021-12-01T05:04:59Z); Intelligent autonomous agents and trust in virtual reality; ISSN 2451-9588; DOI 10.1016/j.chbr.2021.100146; https://doaj.org/article/a5ae8f0338454d4f96a615c53d002bc4; published 2021-08-01; full text: http://www.sciencedirect.com/science/article/pii/S2451958821000944; journal TOC: https://doaj.org/toc/2451-9588; authors: Ningyuan Sun, Jean Botev; publisher: Elsevier; format: article; subjects: Trust modeling, Virtual reality, Agent interaction, Collaboration, Electronic computers. Computer science (QA75.5-76.95), Psychology (BF1-990); language: EN; source: Computers in Human Behavior Reports, Vol 4, Pp 100146 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Trust modeling; Virtual reality; Agent interaction; Collaboration; Electronic computers. Computer science (QA75.5-76.95); Psychology (BF1-990) |
spellingShingle |
Trust modeling; Virtual reality; Agent interaction; Collaboration; Electronic computers. Computer science (QA75.5-76.95); Psychology (BF1-990); Ningyuan Sun; Jean Botev; Intelligent autonomous agents and trust in virtual reality |
description |
Intelligent autonomous agents (IAA) are proliferating and rapidly evolving due to the exponential growth in computational power and recent advances in, for instance, artificial intelligence research. Ranging from chatbots and personal virtual assistants to medical decision-aiding and self-driving or self-piloting systems, IAA are increasingly integrated into many aspects of daily life, whether users are aware of them or not. Despite this technological development, many people remain skeptical of such agents, while others place excessive confidence in them. Establishing an appropriate level of trust is therefore crucial to the successful deployment of IAA in everyday contexts. Virtual Reality (VR) is another domain where IAA play a significant role, and its experiential, immersive character in particular allows for new ways of interacting with agents and of tackling trust-related issues. In this article, we provide an overview of the numerous factors involved in establishing trust between users and IAA, spanning scientific disciplines as diverse as psychology, philosophy, sociology, computer science, and economics. Focusing on VR, we discuss the different types and definitions of trust and identify foundational factors classified into three interrelated dimensions: Human-Technology, Human-System, and Interpersonal. Based on this taxonomy, we identify open issues and outline a research agenda toward facilitating the study of trustful interaction and collaboration between users and IAA in VR settings. |
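The three-dimensional taxonomy named in the description (Human-Technology, Human-System, and Interpersonal) is conceptual, but a minimal sketch can illustrate how such a classification might be encoded when instrumenting a VR trust study. All factor names, the 0-1 scoring scale, and the naive mean aggregation below are assumptions made for illustration only; they are not taken from the article.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class TrustProfile:
    """Hypothetical trust-factor scores for one user/agent pairing, grouped by dimension."""
    human_technology: dict[str, float] = field(default_factory=dict)  # e.g. reliability, transparency
    human_system: dict[str, float] = field(default_factory=dict)      # e.g. immersion, embodiment
    interpersonal: dict[str, float] = field(default_factory=dict)     # e.g. benevolence, integrity

    def aggregate(self) -> float:
        """Naive aggregate: unweighted mean of all factor scores on a 0-1 scale."""
        scores = [v
                  for dim in (self.human_technology, self.human_system, self.interpersonal)
                  for v in dim.values()]
        return mean(scores) if scores else 0.0

# Example usage with made-up factor ratings.
profile = TrustProfile(
    human_technology={"reliability": 0.8, "transparency": 0.6},
    human_system={"immersion": 0.9},
    interpersonal={"benevolence": 0.7, "integrity": 0.75},
)
print(f"aggregate trust score: {profile.aggregate():.2f}")
```

In an actual study, the choice of factors and any weighting across the three dimensions would follow from the research design rather than a flat mean.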
format |
article |
author |
Ningyuan Sun; Jean Botev |
author_facet |
Ningyuan Sun; Jean Botev |
author_sort |
Ningyuan Sun |
title |
Intelligent autonomous agents and trust in virtual reality |
title_short |
Intelligent autonomous agents and trust in virtual reality |
title_full |
Intelligent autonomous agents and trust in virtual reality |
title_fullStr |
Intelligent autonomous agents and trust in virtual reality |
title_full_unstemmed |
Intelligent autonomous agents and trust in virtual reality |
title_sort |
intelligent autonomous agents and trust in virtual reality |
publisher |
Elsevier |
publishDate |
2021 |
url |
https://doaj.org/article/a5ae8f0338454d4f96a615c53d002bc4 |
work_keys_str_mv |
AT ningyuansun intelligentautonomousagentsandtrustinvirtualreality AT jeanbotev intelligentautonomousagentsandtrustinvirtualreality |
_version_ |
1718405562555695104 |