Reviewing the reviewers

13 April 2009 by Noah Gray, posted in Uncategorized

I noticed a guest post on Peer-to-Peer that calls for a metric to measure peer-review performance. Essentially, the author suggests that pooling citation-based metrics with a peer-review metric would provide a more comprehensive assessment of an individual scientist. Here is an excerpt:

Perhaps a metric for this essential scientific activity of peer-reviewing might be constructed by summing the number of papers refereed by the individual scientist per year, each review being multiplied by the Impact Factor of the journal concerned. As refereeing is usually a solo activity, a metric for this skill, and for the related professional commitment, would be less prey to the shortcomings of performance measurement associated with metrics that attempt to gauge multi-author citations, for instance. Combining a ‘refereeing metric’ with other citation-related metrics to obtain a more comprehensive performance score for an individual scientist should not be an insuperable problem – and this measure can be pooled…with expert evaluation.
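For concreteness, here is roughly what that proposal computes, as I read it. This is a minimal sketch; the code, the journal names, and the Impact Factor figures below are all mine for illustration, not the original author’s:

```python
# A minimal sketch of the proposed "refereeing metric" as I read it:
# one term per paper refereed this year, each weighted by the
# Impact Factor of the journal concerned.
def refereeing_metric(reviews_this_year, impact_factor):
    """reviews_this_year: journal name for each paper refereed this year.
    impact_factor: mapping from journal name to its Impact Factor."""
    return sum(impact_factor[journal] for journal in reviews_this_year)

# Illustrative figures only; these are NOT real Impact Factors.
score = refereeing_metric(
    ["Nature Neuroscience", "Neuron", "Journal of Neurophysiology"],
    {"Nature Neuroscience": 15.0,
     "Neuron": 14.0,
     "Journal of Neurophysiology": 3.5},
)
print(score)  # 32.5
```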

A peer-review metric based solely on the number of reviews multiplied by the journal’s Impact Factor (as suggested in the post) seems wildly over-simplistic and hardly quantitative. And the notion that paper reviewing is a solo activity is quite naïve.


It is supposed to be a solo activity, unless the reviewer feels that s/he requires an additional expert opinion to properly evaluate a portion of the manuscript. This additional opinion often comes from a postdoc or student in the lab, or even a colleague down the hall. This is perfectly valid [and in line with our policies, so long as the primary reviewer takes steps to maintain the confidentiality of the study] and provides a better assessment of the work, which I very much appreciate. Most reviewers disclose their “collaborations” either before or after the review (or both), just for the sake of completeness. And I like having this information when assessing the criticisms and opinions of the reviewer.

To complicate this proposed metric even further, some reports are written solely by an unidentified lab member, with few changes to the final review other than the addition of the reviewing PI’s name. I know this goes on; I DID work in the lab for a long time, after all… This charade eventually comes to my attention in some form or another (usually when I am attempting to decipher a certain portion of the report over the phone with the faux reviewer and it becomes obvious that we are discussing the words and opinions of someone else…), but it would be invisible to the metric. Therefore, treating peer review as an individual venture, free from the difficulty of teasing out individual contributions, seems invalid.
Why not suggest the quantification and analysis of Web 2.0 interactions instead? In biology, with the advent of paper commenting and the inevitable gradual acceptance of posting research on pre-print servers, scientists now have the opportunity to influence their colleagues’ work in many more (trackable) ways than journal-organized peer review alone. In fact, I think one could argue that assessing scientific contributions/conversations made for public consumption (what one might call “peer review lite”, performed through blogs or paper commenting) would be a more meaningful metric, because these interactions would not go on behind the closed doors of editors’ offices (HA! I don’t have an office) and would be available for all to see. A healthy back-and-forth between the authors and a reader/colleague, staged on a pre-print server, would not only potentially improve the paper prior to eventual publication, but would also better educate the masses who read both the paper (perhaps the original and final versions?) and the exchange.
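To make concrete what tallying such public interactions might look like, here is a purely hypothetical sketch; every interaction type and weight below is invented for illustration, not drawn from any real system:

```python
# A purely hypothetical sketch of a "peer review lite" tally over
# public, trackable interactions (preprint comments, paper comments,
# blog exchanges). All kinds and weights here are invented.
def public_review_score(interactions, weights):
    """interactions: (kind, substantive) pairs, where `substantive`
    flags whether the exchange actually engaged the science.
    weights: mapping from interaction kind to its (invented) weight."""
    return sum(weights.get(kind, 0.0)
               for kind, substantive in interactions
               if substantive)

# Example: two substantive preprint comments and one cheerleading blog post.
score = public_review_score(
    [("preprint_comment", True),
     ("preprint_comment", True),
     ("blog_post", False)],
    weights={"preprint_comment": 1.0,
             "paper_comment": 0.75,
             "blog_post": 0.5},
)
print(score)  # 2.0
```

The point of the sketch is simply that, unlike a closed-door referee report, each of these interactions leaves a public record that could, in principle, be counted and read by anyone.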
Of course, this raises the obvious question: “Why is peer review anonymous, and why shouldn’t we just make the reviews public?” Those are separate issues at this time. I would not be opposed to changes in these policies, but any movement will likely occur at a glacial pace, unlike the explosion in the availability of, exposure to, and education regarding Web 2.0 tools for scientists. Hence my concentration on the latter. (Ed. note: the Frontiers in Neuroscience series of journals identifies the reviewers after a paper is accepted, and I have heard that there are plans to publish the reviews alongside the papers.)
So scientists should feel free to list the journals for which they have reviewed when applying for jobs or meeting with the tenure committee. But until the review system changes and all reviews are non-anonymous and publicly available, I see little reason to broadly apply any kind of public metric or quantitative system to assess review quality and merits. That’s my job. And no, I won’t share my notes with you, Ms. Tenure Committee Member, since assessing the broad and meaningful contributions to science of your prospective faculty members is YOUR job. What’s that? No, don’t just count the number of C/N/S (Cell/Nature/Science) papers either…
Rolls eyes and sighs…

