Transparency Is Not A One-Way Mirror
An editorial published in the journal Nature on April 24, 2013 announces an important new step in the peer review process for manuscripts submitted to Nature and other Nature research journals: authors will now be required to fill out a checklist before they can submit their work. The title of the editorial, "Announcement: Reducing our irreproducibility", reveals the goal of this new step - addressing the problem of irreproducibility that is plaguing science. During the past year, Nature and its affiliated journals have repeatedly pointed out that the poor reproducibility of published research findings is a major challenge for science and that new mechanisms are needed to fix this problem. The new checklist may be one tiny step in the right direction. It focuses primarily on the statistical reliability of the results in a submitted paper, asking authors to disclose details about the statistical analyses employed, sample size calculations, blinding and randomization. Manuscripts involving animals or human subjects must also disclose details about the approvals by the appropriate review boards or committees.
Examples of the checklist questions are:
1. How was the sample size chosen to ensure adequate power to detect a pre-specified effect size? For animal studies, include a statement about sample size estimate even if no statistical methods were used.
5. For every figure, are statistical tests justified as appropriate? Do the data meet the assumptions of the tests (e.g., normal distribution)? Is there an estimate of variation within each group of data? Is the variance similar between the groups that are being statistically compared?
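The pre-specified sample size calculation that the first checklist question asks about is a routine computation. As a minimal sketch (not the checklist's prescribed method), here is the standard normal-approximation formula for a two-sided, two-sample comparison of means, using only the Python standard library; the "medium" effect size of d = 0.5 is an arbitrary example value:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means (effect_size = Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # z-value corresponding to the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at the conventional 5% level and 80% power:
n = sample_size_per_group(0.5)
print(n)  # 63 per group (the exact t-test calculation gives roughly 64)
```

The point is not the particular numbers but that the required n follows from decisions made *before* the data are collected, which is exactly what the checklist asks authors to document.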
Authors are also reminded that they must provide complete statistical information in the figure legends, as well as evidence that datasets have been submitted to public repositories for 1) protein, DNA and RNA sequences, 2) macromolecular structures, 3) crystallographic data for small molecules and 4) microarray data.
It is commendable that the Nature editors have recognized the importance of addressing the reproducibility issue in science, but I doubt that this checklist will make much of a difference. The cynical, or perhaps overly honest, answer to how many biologists determine sample size is not by a pre-specified sample size calculation. Instead, they might simply run some arbitrary number of experiments with a sample size of n=5 or so and, if the initial results are not statistically significant, keep increasing the sample size until they cross the equally arbitrary and near-mystical p-value thresholds of p<0.05 or p<0.01. The checklist will remind authors of the importance of keeping track of statistical and methodological details and disclosing them in the manuscript. Such transparency in terms of methods and analyses is sorely needed. It will make it easier for other laboratories to attempt to replicate the published work, but it is not clear how revealing these details will affect the chances that the results are indeed reproducible. Will the editors perhaps decline to review a manuscript if the checklist reveals that the authors only studied one strain of mice? Will sample sizes of n=5 be unacceptable even if the p-value is <0.01?
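Why this "test, then add more samples until significant" habit is so corrosive can be shown with a short simulation. The sketch below (my illustration, not anything from the Nature editorial; the z-test is a normal-approximation stand-in for the t-test, and the seed and sample limits are arbitrary) draws both groups from the *same* distribution, so every "significant" result is a false positive - yet peeking after each added observation pushes the false-positive rate far above the nominal 5%:

```python
import random
import statistics
from statistics import NormalDist

def two_sample_p(a, b):
    """Two-sided p-value via a normal approximation (a z-test stand-in for
    the t-test; anti-conservative at small n, which only strengthens the point)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = abs(statistics.fmean(a) - statistics.fmean(b)) / se
    return 2 * (1 - NormalDist().cdf(z))

def peeking_trial(rng, start_n=5, max_n=30):
    """Both groups come from the SAME distribution, so any 'significant'
    result is a false positive. Test, and if p >= 0.05, add one more
    observation per group and test again - the habit described above."""
    a = [rng.gauss(0, 1) for _ in range(start_n)]
    b = [rng.gauss(0, 1) for _ in range(start_n)]
    while True:
        if two_sample_p(a, b) < 0.05:
            return True               # stopped early and declared "significance"
        if len(a) >= max_n:
            return False              # gave up without a false positive
        a.append(rng.gauss(0, 1))
        b.append(rng.gauss(0, 1))

rng = random.Random(42)               # fixed seed so the sketch is repeatable
sims = 2000
rate = sum(peeking_trial(rng) for _ in range(sims)) / sims
print(f"False-positive rate with optional stopping: {rate:.3f} (nominal: 0.05)")
```

A checklist question about how the sample size was chosen can expose this practice only if authors answer it honestly; disclosure alone does not remove the inflated error rate from results already obtained this way.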
This brings us to another crucial point in the debate about the reproducibility of scientific results. Prestigious journals such as Nature rarely review manuscripts that are deemed to be of limited significance or novelty to their readership. In fact, the vast majority of manuscripts submitted to high profile journals such as Nature or Science are rejected at the editorial level without ever undergoing a thorough peer review. On the other hand, when editors get a personal call from high profile investigators, they may be more likely to send out a paper for review, because its publication could increase the often maligned "impact factor" of the journal.
Attempts to improve the transparency and reliability of published research should not only target scientists, but also the editorial and peer review process. Instead of sending out a rather cryptic "Sorry, your paper is not interesting enough for us to review", shouldn't editors also complete a checklist that documents how they reached their decision? A checklist that addresses questions such as:
Was the acceptance/rejection of this manuscript based primarily on its scientific rigor or on the number of citations it is expected to garner per year?
How were the anonymous reviewers of this manuscript selected? How many of the chosen reviewers had been suggested by the authors?
Did the authors directly interact with the editors to influence their decision whether or not to send a manuscript out for review?
Transparency is not a one-way mirror. Scientists need to become more transparent, but the editorial and review process should also be more transparent.
Image credit: Hall of Mirrors at Versailles (Image by Myrabella - Creative Commons License via Wikimedia)