Should authors decide whether their revised paper is re-reviewed?
According to the Journal of Biology… yes! In an editorial released on Tuesday, new Chief Editor Miranda Robertson introduces us to a new policy at her journal, one she believes will begin to remedy the frustrating problem of iterative re-review prior to publication. By eliminating this process, the thought is that the journal will reclaim its proper role: an entity of cutting-edge research dissemination rather than one that “polices” the quality and accuracy of the research.
Sounds like a great deal…for the authors. But what about the readers, who are accustomed to the comfort of peer review backing up the healthy skepticism they apply when reading a manuscript? Doesn’t this blur the line between unpublished and published work? And as for the editors, how will they make very difficult technical decisions in fields outside their immediate expertise without consulting an expert (i.e., a reviewer)?
If data are being published without peer review in journals claiming to be peer-reviewed, this is a problem. This policy seems to me like it will place J Biol somewhere between a peer-reviewed journal and a pre-print server. I say that because, although the paper will have gone through one round of review, upon resubmission, which masochistic authors are actually going to check the box “please send this back to the reviewers, especially the one who wanted me to do lots of extra work”? Rather, the authors are going to take this golden opportunity to convince the editors (and editorial board members) that the requests of the reviewers are “beyond the scope of the current work”, or that “these other controls provided in the revision are just as good to address the previous concerns”, or that the “new data we added greatly enhances the impact of this study”. In other words, a lot of data will be released for public consumption without having been scrutinized by a reviewer. Isn’t that what happens with a pre-print? At least with a pre-print server, one understands that the work may still be preliminary. But if one is not a regular reader of J Biol (or its editorials), won’t the typical assumption be that all data in the study were subject to refereeing?
There is a reason we review papers. It is because the editors and editorial board members are not experts in all disciplines and techniques. We identify two to four people who are highly experienced in said disciplines and techniques, and allow those experts to come to some consensus on the value and accuracy of a study. This often leads to multiple rounds of review, with the authors adding new data to prove their points. Shouldn’t these new data be examined just as meticulously as the data provided in the original version? Why wouldn’t authors simply provide the safe experiments first, then toss in some more suspect, but perhaps provocative, results after the first round? If they do, would the editor feel comfortable making a decision on the paper without the assistance of a true expert? And if the editors reject the paper, won’t the authors just appeal that decision, perhaps even asking the editors to consult the reviewers in an effort to assuage the editor’s discomfort with the new data?
The arguments in favor of this policy, beyond the one given at the beginning of this post regarding the role of the journal in the scientific process, are weak. J Biol should have stuck to that high road instead of ticking off a list of debatable, and more dubious, supporting statements. The high-road point?
…the policing function of journals (especially but not exclusively the high-profile journals) is in danger of overwhelming their primary function as publishers.
This, I think everyone can agree on. The way to fix this problem is where things diverge, and there is no quick-fix. As for the other support for adopting this policy?
- Authors can cite many examples of having a difficult time with a reviewer [Of course, but discuss these issues with the editor and sort out a compromise in each situation…that’s our job.]
- The content of the paper is the responsibility of the authors, not the reviewers [True, but then, why are journals frowned upon with every retraction? Why would a journal want to publish flawed studies, and then just point fingers at the authors? That’s a great way to lose not only submissions, but readership.]
- Seriously flawed papers still make it into the published literature after iterative review [True, but why adopt a policy that actually increases the likelihood that this will happen?]
- This policy will save the reviewers time [Yes, at the expense of losing those reviewers who disagree with a policy that marginalizes the effort and valuable time they invest in improving a paper.]
- Sometimes reviewers dislike Nobel Prize-winning papers [Was that anecdote actually an argument for this policy?]
- Since journals still ultimately have an obligation to their readers, to hedge their bets, J Biol will include the concerns raised by the reviewers, and not necessarily addressed, in Commentaries that accompany all published works [No comment.]
This policy may save time in the dissemination of some papers: those requiring only textual changes or other minor revisions. But many journals operate in this manner anyway when reviewers pass a paper and require only that some t’s be crossed. When this type of revision is submitted, it is thoroughly checked and sent straight to production.
I don’t agree with this experimental policy, but I will adopt a scientific mindset and wait for the data to roll in on how well it improves the dissemination of accurate knowledge.