In search of a Goldilocks zone for credible AI
Abstract: If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness. Situated in between these two extremes is an ideal 'Goldilocks' zone of credibility. But what will keep trust in this zone? We hypothesise that this role ultimately falls to the social cognition mechanisms which adaptively regulate conformity between humans. This novel hypothesis predicts that human-like functional biases in conformity should occur during interactions with AI. We examined multiple tests of this prediction using a collaborative remembering paradigm, where participants viewed household scenes for 30 s vs. 2 min, then saw 2-alternative forced-choice decisions about scene content originating either from AI or human sources. We manipulated the credibility of different sources (Experiment 1) and, from a single source, the estimated likelihood (Experiment 2) and objective accuracy (Experiment 3) of specific decisions. As predicted, each manipulation produced functional biases for AI sources mirroring those found for human sources. Participants conformed more to higher-credibility sources, and to higher-likelihood or more objectively accurate decisions, becoming increasingly sensitive to source accuracy when their own capability was reduced. These findings support the hypothesised role of social cognition in regulating AI's influence, raising important implications and new directions for research on human–AI interaction.
Main Authors: Kevin Allan, Nir Oren, Jacqui Hutchison, Douglas Martin
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine; Science
Online Access: https://doaj.org/article/8b690a2115c34615bbf54f43afd6666b
| Field | Value |
|---|---|
| id | oai:doaj.org-article:8b690a2115c34615bbf54f43afd6666b |
| record_format | dspace |
| doi | 10.1038/s41598-021-93109-8 |
| issn | 2045-2322 |
| date | 2021-07-01 |
| source | Scientific Reports, Vol 11, Iss 1, Pp 1-13 (2021) |
| fulltext | https://doi.org/10.1038/s41598-021-93109-8 |
| institution | DOAJ |
| collection | DOAJ |
| language | EN |
| topic | Medicine (R); Science (Q) |
| format | article |
| author | Kevin Allan; Nir Oren; Jacqui Hutchison; Douglas Martin |
| author_sort | Kevin Allan |
| title | In search of a Goldilocks zone for credible AI |
| publisher | Nature Portfolio |
| publishDate | 2021 |
| url | https://doaj.org/article/8b690a2115c34615bbf54f43afd6666b |
| _version_ | 1718378167689805824 |