“I don’t Think These Devices are Very Culturally Sensitive.”—Impact of Automated Speech Recognition Errors on African Americans
Automated speech recognition (ASR) converts spoken language into text and is used across a variety of applications to assist us in everyday life, from powering virtual assistants and natural language conversations to enabling dictation services. While recent work suggests that there are racial disparities in the performance of ASR systems for speakers of African American Vernacular English, little is known about the psychological and experiential effects of these failures. This paper provides a detailed examination of the behavioral and psychological consequences of ASR voice errors and the difficulty African American users have in getting their intents recognized. The results demonstrate that ASR failures have a detrimental impact on African American users. Specifically, African Americans feel othered when using technology powered by ASR: errors surface thoughts about identity, namely about race and geographic location, leaving them feeling that the technology was not made for them. As a result, African Americans accommodate their speech to have better success with the technology. We incorporate insights and lessons learned from sociolinguistics in our suggestions for linguistically responsive ways to build more inclusive voice systems that consider African American users’ needs, attitudes, and speech patterns. Our findings suggest that a diary study can enable researchers to best understand the experiences and needs of communities who are often misunderstood by ASR. We argue that this methodological framework could enable researchers concerned with fairness in AI to better capture the needs of all speakers who are traditionally misheard by voice-activated, artificially intelligent (voice-AI) digital systems.
Main Authors: Zion Mengesha, Courtney Heldreth, Michal Lahav, Juliana Sublewski, Elyse Tuennerman
Format: article
Language: EN
Published: Frontiers Media S.A., 2021
Subjects: fair machine learning; natural language processing; speech to text; African American Vernacular English; sociolinguistics; social psychology
Online Access: https://doaj.org/article/608b5ff418ae43a2b343d07afbe00777
id
oai:doaj.org-article:608b5ff418ae43a2b343d07afbe00777
record_format
dspace
spelling
Record ID: oai:doaj.org-article:608b5ff418ae43a2b343d07afbe00777
Record updated: 2021-12-01T09:24:41Z
Title: “I don’t Think These Devices are Very Culturally Sensitive.”—Impact of Automated Speech Recognition Errors on African Americans
ISSN: 2624-8212
DOI: 10.3389/frai.2021.725911
Published online: 2021-11-01
Full text: https://www.frontiersin.org/articles/10.3389/frai.2021.725911/full
DOAJ record: https://doaj.org/article/608b5ff418ae43a2b343d07afbe00777
Journal TOC: https://doaj.org/toc/2624-8212
Abstract: Automated speech recognition (ASR) converts spoken language into text and is used across a variety of applications to assist us in everyday life, from powering virtual assistants and natural language conversations to enabling dictation services. While recent work suggests that there are racial disparities in the performance of ASR systems for speakers of African American Vernacular English, little is known about the psychological and experiential effects of these failures. This paper provides a detailed examination of the behavioral and psychological consequences of ASR voice errors and the difficulty African American users have in getting their intents recognized. The results demonstrate that ASR failures have a detrimental impact on African American users. Specifically, African Americans feel othered when using technology powered by ASR: errors surface thoughts about identity, namely about race and geographic location, leaving them feeling that the technology was not made for them. As a result, African Americans accommodate their speech to have better success with the technology. We incorporate insights and lessons learned from sociolinguistics in our suggestions for linguistically responsive ways to build more inclusive voice systems that consider African American users’ needs, attitudes, and speech patterns. Our findings suggest that a diary study can enable researchers to best understand the experiences and needs of communities who are often misunderstood by ASR. We argue that this methodological framework could enable researchers concerned with fairness in AI to better capture the needs of all speakers who are traditionally misheard by voice-activated, artificially intelligent (voice-AI) digital systems.
Authors: Zion Mengesha, Courtney Heldreth, Michal Lahav, Juliana Sublewski, Elyse Tuennerman
Publisher: Frontiers Media S.A.
Subjects: fair machine learning; natural language processing; speech to text; African American Vernacular English; sociolinguistics; social psychology; Electronic computers. Computer science (QA75.5-76.95)
Language: EN
Source: Frontiers in Artificial Intelligence, Vol 4 (2021)
institution
DOAJ
collection
DOAJ
language
EN
topic
fair machine learning; natural language processing; speech to text; African American Vernacular English; sociolinguistics; social psychology; Electronic computers. Computer science (QA75.5-76.95)
description
Automated speech recognition (ASR) converts spoken language into text and is used across a variety of applications to assist us in everyday life, from powering virtual assistants and natural language conversations to enabling dictation services. While recent work suggests that there are racial disparities in the performance of ASR systems for speakers of African American Vernacular English, little is known about the psychological and experiential effects of these failures. This paper provides a detailed examination of the behavioral and psychological consequences of ASR voice errors and the difficulty African American users have in getting their intents recognized. The results demonstrate that ASR failures have a detrimental impact on African American users. Specifically, African Americans feel othered when using technology powered by ASR: errors surface thoughts about identity, namely about race and geographic location, leaving them feeling that the technology was not made for them. As a result, African Americans accommodate their speech to have better success with the technology. We incorporate insights and lessons learned from sociolinguistics in our suggestions for linguistically responsive ways to build more inclusive voice systems that consider African American users’ needs, attitudes, and speech patterns. Our findings suggest that a diary study can enable researchers to best understand the experiences and needs of communities who are often misunderstood by ASR. We argue that this methodological framework could enable researchers concerned with fairness in AI to better capture the needs of all speakers who are traditionally misheard by voice-activated, artificially intelligent (voice-AI) digital systems.
format
article
author
Zion Mengesha, Courtney Heldreth, Michal Lahav, Juliana Sublewski, Elyse Tuennerman
author_sort
Zion Mengesha
title
“I don’t Think These Devices are Very Culturally Sensitive.”—Impact of Automated Speech Recognition Errors on African Americans
publisher
Frontiers Media S.A.
publishDate
2021
url
https://doaj.org/article/608b5ff418ae43a2b343d07afbe00777