Recalibrating expectations about effect size: A multi-method survey of effect sizes in the ABCD study.
Effect sizes are commonly interpreted using heuristics established by Cohen (e.g., small: r = .1, medium: r = .3, large: r = .5), despite mounting evidence that these guidelines are miscalibrated to the effects typically found in psychological research. This study's aims were to 1) describe the distribution of effect sizes across multiple instruments, 2) consider factors qualifying the effect size distribution, and 3) identify examples as benchmarks for various effect sizes. For aim one, effect size distributions were illustrated from a large, diverse sample of 9/10-year-old children by conducting Pearson's correlations among 161 variables representing constructs from all questionnaires and tasks in the Adolescent Brain and Cognitive Development (ABCD) Study® baseline data. To achieve aim two, factors qualifying this distribution were tested by comparing effect size distributions across several modifications of the aim-one analyses: comparisons among different types of variables, analyses using statistical significance thresholds, and analyses using several covariate strategies. In the aim-one analyses, the median in-sample effect size was .03, and the values at the first and third quartiles were .01 and .07. In the aim-two analyses, effects were smaller for associations across instruments, content domains, and reporters, as well as when covarying for sociodemographic factors, and larger when thresholding for statistical significance. In analyses intended to mimic conditions used in "real-world" analysis of ABCD data, the median in-sample effect size was .05, and the values at the first and third quartiles were .03 and .09. To achieve aim three, examples of varying effect sizes are reported from the ABCD dataset as benchmarks for future work. In summary, this report finds that empirically determined effect sizes from a notably large dataset are smaller than would be expected based on existing heuristics.
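The aim-one procedure lends itself to a compact illustration. The sketch below shows one way to compute all pairwise Pearson correlations among a set of variables and summarize the distribution of absolute effect sizes at the quartiles, with an optional significance threshold as in the thresholded analyses. It is a minimal sketch only: the DataFrame `abcd_vars`, its column names, and the simulated data are hypothetical stand-ins, not actual ABCD variables or the authors' analysis code.

```python
# Minimal sketch: pairwise Pearson correlations and the distribution of |r|.
# `abcd_vars` is a hypothetical stand-in for the 161 ABCD baseline variables.
import numpy as np
import pandas as pd
from scipy import stats


def effect_size_distribution(df, alpha=None):
    """Return the 25th/50th/75th percentiles of |r| across all unique variable pairs.

    If `alpha` is given, keep only correlations whose two-sided p-value is below
    `alpha`, mimicking the significance-thresholded analyses described above.
    """
    cols = list(df.columns)
    effects = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            pair = df[[cols[i], cols[j]]].dropna()
            if len(pair) < 3:
                continue  # pearsonr needs at least 3 complete observations
            r, p = stats.pearsonr(pair.iloc[:, 0], pair.iloc[:, 1])
            if alpha is None or p < alpha:
                effects.append(abs(r))
    return pd.Series(effects).quantile([0.25, 0.50, 0.75])


# Example with simulated data standing in for the real dataset.
rng = np.random.default_rng(0)
abcd_vars = pd.DataFrame(rng.normal(size=(1000, 20)),
                         columns=[f"var_{k}" for k in range(20)])
print(effect_size_distribution(abcd_vars))             # unthresholded distribution
print(effect_size_distribution(abcd_vars, alpha=0.05))  # significance-thresholded
```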
Saved in: DOAJ
Main authors: Max M Owens, Alexandra Potter, Courtland S Hyatt, Matthew Albaugh, Wesley K Thompson, Terry Jernigan, Dekang Yuan, Sage Hahn, Nicholas Allgaier, Hugh Garavan
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2021
Journal: PLoS ONE, Vol 16, Iss 9, p e0257535 (2021)
ISSN: 1932-6203
Subjects: Medicine; Science
DOI: https://doi.org/10.1371/journal.pone.0257535
Online access: https://doaj.org/article/ade5146395a54c36ac789bb92c229451