Each year, the National Science Foundation (NSF) reviews nearly 50,000 research proposals across all fields of science and science education, with the goal of identifying the highest-quality proposals to receive funding. Currently, NSF program officers assign each proposal to a technical panel of reviewers with relevant content expertise, who read and evaluate the proposal against two criteria set by the National Science Board: intellectual merit and broader impacts. Proposals may also be evaluated against specific goals that programs or divisions set forth in the solicitation. However, studies show that reviewer evaluations in many settings can depend on applicant characteristics unrelated to the selection criteria. For example, applications for grants administered by the National Institutes of Health are less likely to be successful when the principal investigator (PI) is Black.
This study seeks to test the fairness of NSF’s merit review process by estimating both the extent to which reviewer scores differ when the proposed PI is Black versus White and the natural variability of reviewer scores across panels. The project involves preparing realistic research proposals, assigning synthetic identities (IDs) to those proposals, selecting reviewers, and conducting panels to review the proposals. To address the first research objective, some panels will review a specific proposal attributed to a Black PI, while other panels will review the same proposal attributed to a White PI. To address the second research objective, reviewers will also review a set of proposals whose IDs do not vary across panels.
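The between-panel design described above amounts to randomly splitting panels into two conditions that differ only in the race signaled by the focal proposal's synthetic PI identity. A minimal sketch of that randomization is below; the panel IDs, condition labels, and function name are illustrative assumptions, not the study's actual assignment procedure.

```python
import random

def assign_panel_conditions(panel_ids, seed=0):
    """Randomly split review panels into two experimental conditions.

    Hypothetical sketch: panels in the "black_pi" condition receive the
    focal proposal attributed to a Black PI; panels in the "white_pi"
    condition receive the identical proposal attributed to a White PI.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = list(panel_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "black_pi": sorted(shuffled[:half]),
        "white_pi": sorted(shuffled[half:]),
    }

# Example: ten panels split evenly between the two conditions.
conditions = assign_panel_conditions(range(1, 11), seed=42)

# Note: every panel would additionally review the fixed set of proposals
# whose IDs do not vary, supporting the second research objective
# (estimating natural score variability across panels).
```

Because each proposal's content is identical across conditions, any systematic difference in scores between the two groups of panels can be attributed to the PI identity rather than to the proposal itself.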
As part of the Proposal Panel Experiment, Insight:
- Prepares realistic research proposals for panel review, enabling the detection of potential bias
- Prepares a panel data file and codebook at the conclusion of all panels
- Describes all work completed to facilitate an understanding of the implementation of the experiment