Decision-making in research tasks with sequential testing.

Background: In a recent controversial essay published by JPA Ioannidis in PLoS Medicine, it has been argued that in some research fields, most of the published findings are false. Based on theoretical reasoning, it can be shown that small effect sizes, error-prone tests, low priors of the tested hypotheses, and biases in the evaluation and publication of research findings increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research, in which single tests are used to evaluate independent hypotheses.

Methodology/Principal Findings: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios, in which research tasks are solved sequentially, i.e., subsequent tests can be chosen depending on previous results. We investigate simple sequential testing as well as scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from the computer simulations indicate that, for the tasks analyzed in this study, the fraction of false findings among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations.

Conclusions/Significance: For the research tasks studied here, findings tend to become more reliable over time. We also find that performance in the experimental settings where not all performed tests could be published was surprisingly inefficient. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
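The decline described in the abstract can be illustrated with a small back-of-the-envelope calculation that is not taken from the article itself: if a hypothesis is true with prior probability pi and is evaluated with a test of power 1-beta and false-positive rate alpha, the fraction of false findings among the positive findings is alpha(1-pi) / (alpha(1-pi) + (1-beta)pi); when a positive result raises the prior used in the next round of testing, this fraction shrinks. The Python sketch below is a minimal illustration of that mechanism only, not the authors' simulation; the parameter values (prior = 0.1, alpha = 0.05, power = 0.8) are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch (not the authors' simulation): fraction of false positives
# among positive findings, and how it falls when a positive result from one
# round raises the prior used in the next round.
# Assumed illustrative parameters: prior = 0.1, alpha = 0.05, power = 0.8.

def false_positive_fraction(prior, alpha=0.05, power=0.8):
    """P(hypothesis false | test positive) for a single test."""
    true_pos = power * prior          # true hypotheses that test positive
    false_pos = alpha * (1.0 - prior) # false hypotheses that test positive
    return false_pos / (true_pos + false_pos)

def updated_prior(prior, alpha=0.05, power=0.8):
    """Posterior probability that the hypothesis is true after one positive result."""
    true_pos = power * prior
    false_pos = alpha * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

prior = 0.1
for test_round in range(1, 5):
    print(f"round {test_round}: P(false | positive) = "
          f"{false_positive_fraction(prior):.3f}")
    prior = updated_prior(prior)  # a positive result raises the prior for the next round
```

With these assumed numbers the fraction of false positives drops from about 0.36 in the first round to a few percent after two or three rounds, which is the qualitative pattern the abstract attributes to sequential testing with informative tests.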


Bibliographic Details
Main Authors: Thomas Pfeiffer, David G Rand, Anna Dreber
Format: article
Language: English
Published: Public Library of Science (PLoS), 2009
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/3cf5a45dd1b547d4854b0cbe9c64ff4f
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0004607
Journal: PLoS ONE, Vol 4, Iss 2, p e4607 (2009)
Full Text: https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/19240797/?tool=EBI
Source: Directory of Open Access Journals (DOAJ)