An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
Background: Blinding reviewers to applicant identity has been proposed to reduce bias in peer review. Methods: This experimental test used 1200 NIH grant applications: 400 from Black investigators, 400 matched applications from White investigators, and 400 randomly selected applications from White investigators. Applications were reviewed by mail in standard and redacted formats. Results: Redaction reduced, but did not eliminate, reviewers' ability to correctly guess features of identity. The primary, preregistered analysis hypothesized a differential effect of redaction according to investigator race in the matched applications; a set of secondary analyses (not preregistered) used the randomly selected applications from White scientists and tested the same interaction. Both analyses revealed similar effects: standard-format applications from White investigators scored better than those from Black investigators, and redaction cut the size of the difference by about half (e.g. from a Cohen's d of 0.20 to 0.10 in the matched applications); redaction caused applications from White scientists to score worse but had no effect on scores for applications from Black scientists. Conclusions: Grant-writing considerations and halo effects are discussed as competing explanations for this pattern. The findings support further evaluation of peer review models that diminish the influence of applicant identity. Funding: Funding was provided by the NIH.
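The Results above express the race gap in review scores as a Cohen's d that shrinks from roughly 0.20 (standard format) to roughly 0.10 (redacted format) in the matched applications. The following is a minimal sketch of that effect-size arithmetic, not the authors' analysis code: it assumes normally distributed scores with a common spread, uses the design's group size of 400, and plugs in illustrative means chosen to reproduce the reported d values.

```python
# Illustrative sketch only: how a pooled-SD Cohen's d is computed and what a
# gap of d = 0.20 halving to d = 0.10 looks like. Score scale, spread, and
# group means are assumptions, not values taken from the paper.
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
sd = 1.0   # assumed common spread of review scores
n = 400    # group size, matching the 400-application groups in the design

# Standard format: assume a 0.20 SD mean advantage for White investigators' applications.
white_standard = rng.normal(0.20, sd, n)
black_standard = rng.normal(0.00, sd, n)

# Redacted format: assume the mean advantage shrinks to 0.10 SD.
white_redacted = rng.normal(0.10, sd, n)
black_redacted = rng.normal(0.00, sd, n)

# Simulated values will scatter around the assumed 0.20 and 0.10 because of sampling noise.
print(f"standard-format d ~ {cohens_d(white_standard, black_standard):.2f}")
print(f"redacted-format d ~ {cohens_d(white_redacted, black_redacted):.2f}")
```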
Saved in: DOAJ
Main authors: Richard K Nakamura; Lee S Mann; Mark D Lindner; Jeremy Braithwaite; Mei-Ching Chen; Adrian Vancea; Noni Byrnes; Valerie Durrant; Bruce Reed
Format: article
Language: EN
Published: eLife Sciences Publications Ltd, 2021
Subjects: peer review; racial disparities; racial bias; science funding; halo effects; Medicine; Science; Biology (General)
Online access: https://doaj.org/article/4651d2b1a8d44468be90e5e358165239
id
oai:doaj.org-article:4651d2b1a8d44468be90e5e358165239
record_format
dspace
spelling
oai:doaj.org-article:4651d2b1a8d44468be90e5e358165239 (datestamp 2021-11-24T11:37:02Z)
An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
DOI: 10.7554/eLife.71368; ISSN: 2050-084X; article e71368
Published: 2021-10-01
https://doaj.org/article/4651d2b1a8d44468be90e5e358165239
https://elifesciences.org/articles/71368
https://doaj.org/toc/2050-084X
Richard K Nakamura; Lee S Mann; Mark D Lindner; Jeremy Braithwaite; Mei-Ching Chen; Adrian Vancea; Noni Byrnes; Valerie Durrant; Bruce Reed
eLife Sciences Publications Ltd
article
peer review; racial disparities; racial bias; science funding; halo effects; Medicine (R); Science (Q); Biology (General) (QH301-705.5)
EN
eLife, Vol 10 (2021)
institution
DOAJ
collection
DOAJ
language
EN
topic
peer review; racial disparities; racial bias; science funding; halo effects; Medicine (R); Science (Q); Biology (General) (QH301-705.5)
description
Background: Blinding reviewers to applicant identity has been proposed to reduce bias in peer review.
Methods: This experimental test used 1200 NIH grant applications: 400 from Black investigators, 400 matched applications from White investigators, and 400 randomly selected applications from White investigators. Applications were reviewed by mail in standard and redacted formats.
Results: Redaction reduced, but did not eliminate, reviewers' ability to correctly guess features of identity. The primary, preregistered analysis hypothesized a differential effect of redaction according to investigator race in the matched applications. A set of secondary analyses (not preregistered) used the randomly selected applications from White scientists and tested the same interaction. Both analyses revealed similar effects: standard-format applications from White investigators scored better than those from Black investigators. Redaction cut the size of the difference by about half (e.g. from a Cohen's d of 0.20 to 0.10 in the matched applications); redaction caused applications from White scientists to score worse but had no effect on scores for applications from Black scientists.
Conclusions: Grant-writing considerations and halo effects are discussed as competing explanations for this pattern. The findings support further evaluation of peer review models that diminish the influence of applicant identity.
Funding: Funding was provided by the NIH.
format
article
author
Richard K Nakamura; Lee S Mann; Mark D Lindner; Jeremy Braithwaite; Mei-Ching Chen; Adrian Vancea; Noni Byrnes; Valerie Durrant; Bruce Reed
author_facet
Richard K Nakamura; Lee S Mann; Mark D Lindner; Jeremy Braithwaite; Mei-Ching Chen; Adrian Vancea; Noni Byrnes; Valerie Durrant; Bruce Reed
author_sort
Richard K Nakamura
title
An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
title_short
An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
title_full
An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
title_fullStr
An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
title_full_unstemmed
An experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
title_sort
experimental test of the effects of redacting grant applicant identifiers on peer review outcomes
publisher
eLife Sciences Publications Ltd
publishDate
2021
url
https://doaj.org/article/4651d2b1a8d44468be90e5e358165239
_version_
1718415042468118528 |