Re-defining the “publishable unit”

22 January 2009 by Noah Gray, posted in Uncategorized

In an Editorial today, Cell announced that it is looking towards the future of scientific publication in order to adapt to the new ways in which science is communicated. This will almost certainly involve embracing new web technologies: the scientific publication will no longer be a “linear structure” but rather “hierarchical”. In addition, Cell plans to single out one particular issue for scrutiny this year: the definition of the publishable unit.

Over the years, what constitutes a full publishable unit in biology has changed dramatically. The demand for more data has steadily grown, not surprisingly, as the number of scientists churning out manuscripts has grown. According to NSF figures dug out by a Nature Neuroscience editorial from 2007, the number of biology PhDs awarded annually has doubled over the last 20 years, while the number of tenure-track jobs has remained steady. The percentage of PhD biologists holding tenure-track positions has decreased accordingly, from 46% in 1981 to less than 30%. All of this has created a volatile environment in which competition for high-impact publication has intensified amongst scientists. The natural progression for journals and reviewers, then, is to require more work in order to set a study apart from the noise produced by the glut of trained biologists. This requirement often manifests as large supplementary information (SI) sections containing the many controls and extra experiments deemed critical for the proper interpretation of the results, but not important enough to be included in the main body of the text. The expansion of these sections is not due to reviewer demand alone, though, since enormous SI sections already accompany most new submissions to Nature.
As an example of this data expansion to which I am referring, let’s look at the differences between the Nature papers published by three NYC Nobel laureates who still maintain active neuroscience labs. I took their first and last two publications in Nature for this comparison, according to PubMed.

  1. Eric Kandel
  2. Paul Greengard
  3. Richard Axel

I like this data set because it actually spans quite a nice range of time, and even has plenty of data points sprinkled along the entire span. It is pretty obvious how things have changed. And plenty of researchers are complaining about it every time I attend a conference.
As the editors of Cell put it, if the publishable unit is going to change, it will take editors, authors and reviewers working together to find a reasonable and appropriate standard for publications. After all, if our reviewers told us that four single-panel figures made an exciting, highly impactful story, we would publish it. Alas, they are not giving us this advice.
So how did Cell do with turning over a new leaf by cutting down the present-day publishable unit? Well, here is the number of SI items (figs/tables/movies) in each of this issue’s articles:

  • 17
  • 9
  • 9
  • 6
  • 13
  • 6
  • 13
  • 13

Oh well, maybe next issue…

9 Responses to “Re-defining the ‘publishable unit’”

  1. Michael Nestor | Permalink

    You wrote “the noise produced by the glut of trained biologists.”

    How demeaning. I somewhat take offense at the idea that the work I put 60+ hours a week into is considered “noise”.

    This is why the really talented people are beginning to look elsewhere. Maybe we need to examine that pervasive attitude in the editorial community instead of focusing on minutiae like this.

  2. Noah Gray | Permalink

    No intention to offend. Every journal, from Nature on down, has a signal-to-noise ratio. One journal’s noise is another’s signal, and it trickles down as such. One just has to determine within which signal their work fits. Your 60+ hours certainly produce a signal, just like the hard work of every other researcher out there. All of these signals need to be parsed and analyzed, so ALL journals set a threshold for consideration, eliminating work they believe is not a good fit. Above, I describe how one way this has apparently been accomplished is by increasing the size of the publishable unit. It was an easy fix to the problem.

    The “pervasive attitude in the editorial community” to which you refer has been examined, and the response was the creation of PLoS ONE. This is a journal where all technically sound work will be published, removing “rejection on editorial grounds” from its publication vernacular, and it serves the scientific community extremely well. But that doesn’t mean we should keep churning out a glut of trained biologists.

  3. Michael Nestor | Permalink

    I agree with you on the churning out of too many scientists…but the editorial cap on “what is important” and “what is not” is a problem that has been around as long as there have been journals. I think the question for our generation, with the “web 2.0” structure, is: can we all know a priori how long a paper should be, or how important a paper is?

    These decisions do trickle down, and I am not in disagreement with you, but I think that in some sense editors can direct scientific discovery by imposing these arbitrary caps…I am just wondering whether that is the right thing to do.

  4. Martin Fenner | Permalink

    Noah, I see the problem, but I don’t see the solution. Should we publish less or should we publish more?

  5. Noah Gray | Permalink

    As I mention above, all parties involved need to decide what should constitute the “publishable unit”. If that means publishing shorter papers, so be it.

    This all relates to issues concerning the size of one’s CV as well. A longer paper containing 17 SI figures likely means that an entire three-year project was published in a single study. It also probably means that it took those entire three years just to get the study through review (since it certainly didn’t start with 17 SI figs). Thus, when that author goes on the job market, expectations of the candidate’s publication record have to be adjusted to reflect the current times. No longer will search committees see candidates with 4-5 first-author publications coming out of a three-year post-doc.

  6. David Featherstone | Permalink

    Perhaps the problem is that we no longer value elegance in experimental design? Are mounds of supplemental data too often allowed to substitute for a clear and compelling result?

  7. Mike Fowler | Permalink

    I’m reading Robert H. Peters’ A Critique for Ecology at the moment, and he makes an interesting point about how reviewers/referees judge the quality of the work of others, compared to the quality of their own work. The main point is that they are very subjective, and likely to be biased (his method of showing this ain’t too great though, ahem).

    Much supplementary work seems to be to cover a paper’s back when it comes to (potential or actual) criticisms in the review process. Some of this is due to lack of understanding by reviewers of the current state of the field, and if this is the case, it is totally unnecessary (a reviewer should be an expert in the field, after all).

    Other supplementary results can be very important in backing up major claims in a paper, but surely then they are not “supplementary”.

    The pressure to publish in short-format journals (e.g., Nature, Science…) often leads to too much relevant, important material being separated from the main article and its points. Simply being short does not always make an article clearer or more robust. Perhaps short-format journals should do more to encourage papers whose results can be contained within their restrictions. If a paper can’t meet those restrictions, it should be submitted to a more appropriate, longer-format journal.

    This, in turn, may lead to a shift in the importance of different journals, e.g., seen through their ISI ratings, which could more accurately reflect the importance of a journal and its results in its field.

  8. Noah Gray | Permalink

    Mike, I agree with your assessment, and in fact I believe many think similarly about the situation you describe. In neuroscience, people often discuss J Neurosci papers as the bread and butter of their specific field, with its longer papers providing the most important information to assist them with their ongoing research. While these detailed, meticulous, thorough studies may immediately and significantly impact researchers in that specific field, they are less likely to instigate a paradigm shift across multiple sub-disciplines. We are in a system where those studies are mostly sent to Cell, Nature and Science, regardless of length (and of course, Cell is different in the sense that it is a longer-format journal).

    With search committees and granting institutions typically looking for a high impact track record, not attempting to gauge the individual impact of each study that comes out of a lab, longer format papers will continue to be submitted to the highest impact journals, simply with longer lists of SI. As a friend recently reminded me, the model for publication has changed:

    OLD: Big finding = Nature + detailed follow up in J Neurophys.
    NEW: Big finding = Nature + supp. info.

    So, as I mentioned in a previous comment, as the bar for publishing rises, the bar measuring publication numbers for faculty positions or tenure needs to fall accordingly.

    It has been suggested that it is up to the editor to break this cycle. While I agree in principle that the editor can play his/her part, the focus of the community remains on receiving 1 or 2 high impact papers, not 3 or 4 solid “lower impact” studies. This increased desire to publish in Nature leads to ever-increasing submissions, leading to ever-increasing publication standards, leading to ever-increasing manuscript lengths, leading to ever-decreasing acceptance rates. Editors are but one link in this circular chain, so I’ll stand by my original assertion that all involved can play a part in any future change to the system.

  9. Christopher Mims | Permalink

    Verrry interesting. I just forwarded this to my wife, who is currently searching for a post-doc. I was helping her edit her CV, and one of the issues she worries about is that she didn’t get more publications out of her PhD thesis. (Her PhD thesis was massive.) Perhaps if the “publishable units” were different, it would provide a more accurate measure of a scientist’s output.

    Or it might just incentivize scientists to publish many tiny papers instead of a few big ones.


Comments are closed.