Rapid online assessment of reading ability
Abstract: An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced choice, time-limited lexical decision task (LDT), self-delivered through the web browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
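The record describes the task only at a high level: a time-limited, two-alternative forced choice lexical decision task, self-delivered through the web browser, in which each trial asks whether a letter string is a real word or a pseudoword. As a rough illustration of what such a browser-delivered trial involves, the sketch below implements one timed trial in TypeScript; the key mapping (right arrow = "word"), the 2-second response deadline, and the element id `stimulus` are illustrative assumptions, not details taken from the ROAR implementation.

```typescript
// Minimal sketch of one time-limited lexical decision trial in the browser.
// Assumptions (not from the paper): right arrow = "word", left arrow = "pseudoword",
// a 2-second response deadline, and a page element with id="stimulus".

interface Trial {
  stimulus: string; // letter string shown to the participant
  isWord: boolean;  // ground truth: real word vs. pseudoword
}

interface TrialResult {
  stimulus: string;
  correct: boolean;
  responseTimeMs: number | null; // null if the deadline elapsed with no response
}

function runTrial(trial: Trial, deadlineMs = 2000): Promise<TrialResult> {
  return new Promise((resolve) => {
    const display = document.getElementById("stimulus")!;
    display.textContent = trial.stimulus;
    const start = performance.now();

    const finish = (correct: boolean, rt: number | null) => {
      window.removeEventListener("keydown", onKey);
      clearTimeout(timer);
      display.textContent = "";
      resolve({ stimulus: trial.stimulus, correct, responseTimeMs: rt });
    };

    const onKey = (e: KeyboardEvent) => {
      if (e.key !== "ArrowRight" && e.key !== "ArrowLeft") return;
      const saidWord = e.key === "ArrowRight";
      finish(saidWord === trial.isWord, performance.now() - start);
    };

    // A missed deadline counts as incorrect, mirroring a time-limited design.
    const timer = setTimeout(() => finish(false, null), deadlineMs);
    window.addEventListener("keydown", onKey);
  });
}
```

A full run would simply loop over the item list, collect each `TrialResult`, and aggregate the responses into a reading-ability estimate at the end.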
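The abstract reports an observed correlation of r = 0.91 with the Woodcock-Johnson Letter Word Identification test and a disattenuated correlation of r = 0.94. Disattenuation here presumably refers to Spearman's classical correction, which divides the observed correlation by the geometric mean of the two measures' reliabilities; the sketch below shows that arithmetic. The LDT reliability of 0.97 is taken from the abstract, while the reliability assumed for the standardized test is an illustrative placeholder, not a value reported in this record.

```typescript
// Spearman's correction for attenuation:
//   r_true = r_observed / sqrt(reliability_x * reliability_y)

function disattenuatedCorrelation(
  rObserved: number,
  reliabilityX: number,
  reliabilityY: number
): number {
  return rObserved / Math.sqrt(reliabilityX * reliabilityY);
}

const rObserved = 0.91;      // LDT vs. Woodcock-Johnson Letter Word ID (from the abstract)
const reliabilityLDT = 0.97; // LDT reliability (from the abstract)
const reliabilityWJ = 0.96;  // ASSUMPTION: illustrative reliability for the standardized test

// With these inputs the corrected correlation comes out near the reported 0.94.
console.log(disattenuatedCorrelation(rObserved, reliabilityLDT, reliabilityWJ).toFixed(2));
```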
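The abstract also notes that the word/pseudoword list was optimized with item response theory to yield a 76-trial short form. The record does not say which IRT model was fit, so the sketch below assumes a two-parameter logistic (2PL) model purely for illustration: it computes each item's Fisher information at a target ability level and keeps the most informative items. The item bank, its parameters, and the choice of a single target ability are all hypothetical.

```typescript
// Hypothetical 2PL item parameters: a = discrimination, b = difficulty.
interface Item {
  stimulus: string;
  a: number;
  b: number;
}

// 2PL probability of a correct response at ability theta.
function pCorrect(item: Item, theta: number): number {
  return 1 / (1 + Math.exp(-item.a * (theta - item.b)));
}

// Fisher information of a 2PL item at theta: a^2 * p * (1 - p).
function itemInformation(item: Item, theta: number): number {
  const p = pCorrect(item, theta);
  return item.a * item.a * p * (1 - p);
}

// Keep the n items that are most informative at the target ability level.
function selectShortForm(items: Item[], theta: number, n: number): Item[] {
  return [...items]
    .sort((x, y) => itemInformation(y, theta) - itemInformation(x, theta))
    .slice(0, n);
}

// Example: pick a 76-item short form centered on an average reader (theta = 0).
// The item bank and its parameters are placeholders, not the ROAR item bank.
const bank: Item[] = [
  { stimulus: "market", a: 1.4, b: -0.8 },
  { stimulus: "plin", a: 1.1, b: 0.3 },
  // ... remaining calibrated words and pseudowords
];
const shortForm = selectShortForm(bank, 0, 76);
```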
Main Authors: Jason D. Yeatman, Kenny An Tang, Patrick M. Donnelly, Maya Yablonski, Mahalakshmi Ramamurthy, Iliana I. Karipidis, Sendy Caffarra, Megumi E. Takada, Klint Kanopka, Michal Ben-Shachar, Benjamin W. Domingue
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/3c80a6cb29f246dfb9077b617f531e13
id: oai:doaj.org-article:3c80a6cb29f246dfb9077b617f531e13
record_format: dspace
spelling: oai:doaj.org-article:3c80a6cb29f246dfb9077b617f531e13 (2021-12-02T16:30:47Z)
  Title: Rapid online assessment of reading ability
  DOI: 10.1038/s41598-021-85907-x
  ISSN: 2045-2322
  Published: 2021-03-01, Nature Portfolio
  Online access: https://doi.org/10.1038/s41598-021-85907-x ; https://doaj.org/article/3c80a6cb29f246dfb9077b617f531e13 ; https://doaj.org/toc/2045-2322
  Source: Scientific Reports, Vol 11, Iss 1, Pp 1-11 (2021)
institution: DOAJ
collection: DOAJ
language: EN
topic: Medicine (R); Science (Q)
spellingShingle: Medicine (R); Science (Q); Jason D. Yeatman; Kenny An Tang; Patrick M. Donnelly; Maya Yablonski; Mahalakshmi Ramamurthy; Iliana I. Karipidis; Sendy Caffarra; Megumi E. Takada; Klint Kanopka; Michal Ben-Shachar; Benjamin W. Domingue; Rapid online assessment of reading ability
description: Abstract: An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced choice, time-limited lexical decision task (LDT), self-delivered through the web browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
format: article
author: Jason D. Yeatman; Kenny An Tang; Patrick M. Donnelly; Maya Yablonski; Mahalakshmi Ramamurthy; Iliana I. Karipidis; Sendy Caffarra; Megumi E. Takada; Klint Kanopka; Michal Ben-Shachar; Benjamin W. Domingue
author_facet: Jason D. Yeatman; Kenny An Tang; Patrick M. Donnelly; Maya Yablonski; Mahalakshmi Ramamurthy; Iliana I. Karipidis; Sendy Caffarra; Megumi E. Takada; Klint Kanopka; Michal Ben-Shachar; Benjamin W. Domingue
author_sort: Jason D. Yeatman
title: Rapid online assessment of reading ability
title_short: Rapid online assessment of reading ability
title_full: Rapid online assessment of reading ability
title_fullStr: Rapid online assessment of reading ability
title_full_unstemmed: Rapid online assessment of reading ability
title_sort: rapid online assessment of reading ability
publisher: Nature Portfolio
publishDate: 2021
url: https://doaj.org/article/3c80a6cb29f246dfb9077b617f531e13
work_keys_str_mv: AT jasondyeatman rapidonlineassessmentofreadingability; AT kennyantang rapidonlineassessmentofreadingability; AT patrickmdonnelly rapidonlineassessmentofreadingability; AT mayayablonski rapidonlineassessmentofreadingability; AT mahalakshmiramamurthy rapidonlineassessmentofreadingability; AT ilianaikaripidis rapidonlineassessmentofreadingability; AT sendycaffarra rapidonlineassessmentofreadingability; AT megumietakada rapidonlineassessmentofreadingability; AT klintkanopka rapidonlineassessmentofreadingability; AT michalbenshachar rapidonlineassessmentofreadingability; AT benjaminwdomingue rapidonlineassessmentofreadingability
_version_: 1718383864735334400