A critical meta-analysis of lens model studies in human judgment and decision-making.

Achieving accurate judgment ('judgmental achievement') is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges with linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased estimates (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparing the results of the psychometric meta-analysis with those of a traditional meta-analysis (which corrected only for sampling error) indicated that artifact correction leads to a) higher values of the lens model components, b) reduced heterogeneity between studies, and c) increased success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping.
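
For readers unfamiliar with the framework, judgmental achievement in a lens model study is typically decomposed with Tucker's lens model equation, r_a = G * R_s * R_e + C * sqrt(1 - R_s^2) * sqrt(1 - R_e^2), where G is the matching index, R_s the judge's consistency, R_e the environmental predictability, and C the correlation between the residuals of the two linear models. The following minimal Python sketch only illustrates that standard decomposition; the function name and the example component values are hypothetical and are not taken from this study.

import numpy as np

def lens_model_achievement(G, R_s, R_e, C=0.0):
    # Tucker's lens model equation: achievement r_a from the matching index G,
    # the judge's consistency R_s, the environmental predictability R_e,
    # and the correlation C between the residuals of the two linear models.
    return G * R_s * R_e + C * np.sqrt(1.0 - R_s**2) * np.sqrt(1.0 - R_e**2)

# Purely illustrative component values (not estimates from the meta-analysis):
print(lens_model_achievement(G=0.80, R_s=0.85, R_e=0.75, C=0.10))  # about 0.54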

Bibliographic Details
Main Authors: Esther Kaufmann, Ulf-Dietrich Reips, Werner W Wittmann
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2013
Subjects:
Medicine (R)
Science (Q)
Online Access: https://doaj.org/article/49b260990b36444fa627bc232a45dca2
id oai:doaj.org-article:49b260990b36444fa627bc232a45dca2
record_format dspace
spelling oai:doaj.org-article:49b260990b36444fa627bc232a45dca2 (2021-11-18T08:39:35Z). A critical meta-analysis of lens model studies in human judgment and decision-making. ISSN: 1932-6203. DOI: 10.1371/journal.pone.0083528. DOAJ record: https://doaj.org/article/49b260990b36444fa627bc232a45dca2. Published: 2013-01-01. Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/24391781/?tool=EBI. Journal TOC: https://doaj.org/toc/1932-6203. Authors: Esther Kaufmann, Ulf-Dietrich Reips, Werner W Wittmann. Publisher: Public Library of Science (PLoS). Subjects: Medicine (R), Science (Q). Language: EN. Source: PLoS ONE, Vol 8, Iss 12, p e83528 (2013). Abstract: see the description field below.
institution DOAJ
collection DOAJ
language EN
topic Medicine
R
Science
Q
description Achieving accurate judgment ('judgmental achievement') is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges with linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased estimates (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparing the results of the psychometric meta-analysis with those of a traditional meta-analysis (which corrected only for sampling error) indicated that artifact correction leads to a) higher values of the lens model components, b) reduced heterogeneity between studies, and c) increased success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping.
format article
author Esther Kaufmann
Ulf-Dietrich Reips
Werner W Wittmann
title A critical meta-analysis of lens model studies in human judgment and decision-making.
publisher Public Library of Science (PLoS)
publishDate 2013
url https://doaj.org/article/49b260990b36444fa627bc232a45dca2
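
The psychometric corrections referred to in the description field are, in the Hunter and Schmidt tradition, corrections of observed correlations for unreliability (measurement error) and for artificial dichotomization. The sketch below shows these two standard textbook corrections only to illustrate the general idea; the function names and the numbers are hypothetical, and the authors' actual correction procedure and their trim-and-fill extension are not reproduced here.

from math import sqrt
from scipy.stats import norm

def disattenuate(r_obs, rel_x, rel_y):
    # Classic correction for attenuation: divide the observed correlation
    # by the square root of the product of the two reliabilities.
    return r_obs / sqrt(rel_x * rel_y)

def undichotomize(r_pb, p):
    # Approximate biserial conversion: corrects a point-biserial correlation
    # for artificial dichotomization of one variable at proportion p.
    z = norm.ppf(p)                      # cut point on the standard normal
    return r_pb * sqrt(p * (1.0 - p)) / norm.pdf(z)

# Hypothetical example values, for illustration only:
r = 0.40
r = disattenuate(r, rel_x=0.80, rel_y=0.70)   # about 0.53
r = undichotomize(r, p=0.50)                  # about 0.67 after a median split
print(r)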