The Challenge of Negative Results

28 May 2013 by Matt Shipman, posted in Uncategorized

Image: Zsuzsanna Kilian

If a bunch of people are working toward a shared goal – like, say, curing a form of cancer – it would make sense for them to compare notes, right? Significant discoveries should be made public so that researchers can adjust their efforts accordingly and move everyone closer to solving the problem. That’s what journal articles are – an opportunity for researchers to share information and get closer to solving whatever medical, scientific or technological challenges they’re grappling with. Except when the system doesn’t work.

One of the ways the system can fail is when researchers don’t publish negative results. Negative results are what you get when an experiment fails to support your hypothesis. For example, you might think that Compound X will prevent Cancer Z from metastasizing. But if your experiments show that Compound X does not prevent Cancer Z from metastasizing, you have a negative result.

I’ve always felt that negative results are important. If other researchers are also really interested in Compound X, they would probably want to know that your experiments showed Compound X was ineffective. That way they could make an informed decision about how (or whether) to proceed with their own Compound X experiments. But they probably won’t find out about your Compound X experiments, because most negative results never get published.

But maybe I'm wrong. Maybe negative results are not important. This is the first in a series of posts exploring whether negative results matter, and the challenge of getting them published. (Note: you may be interested in this post on a journal editor's perspective on negative results and this post regarding funding agencies' support for publishing negative results.)

Journals

A lot has been written about negative results and the importance of sharing them within the scientific and medical research communities. As Jim Caryl wrote on SciLogs last year, “Negative results are still results, they can still tell us something new; almost as important as knowing that X causes Y within a given context, is knowing that X doesn’t cause Y within this context.” Unfortunately, Caryl adds, “within the current bias of publishing, the self-correcting nature of science is only possible on results that can be refined and built upon, or debunked whilst also presenting a new positive finding. This leaves a lot of ‘orphan’ results that remain apparently unchallenged within the literature, despite being incorrect.”

In short, Caryl puts the onus on journals, saying they make it unduly difficult to publish negative results. (In a separate post, Caryl detailed his struggle to publish a paper on negative results.)

And a 2011 post by Ivan Oransky discusses the so-called “positive publication bias” as well: “In 2008, for example, a group of researchers published a New England Journal of Medicine study showing that nearly all — or 94 [percent] — of published studies of antidepressants used by the FDA to make approval decisions had positive results. But the researchers found that when the FDA included unpublished studies, only about half – or 51 [percent] — were positive.”
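
To make the arithmetic behind that gap concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption chosen to roughly reproduce the quoted percentages; none of it comes from the NEJM study itself. It simply shows how publishing nearly all positive trials and very few negative ones turns a roughly even split of outcomes into a literature that looks overwhelmingly positive.

```python
# Hypothetical numbers chosen to mirror the percentages quoted above; these are
# not the actual trial counts from the NEJM antidepressant study.
total_trials = 100
positive_trials = 51                  # ~51% of all trials are genuinely positive

p_publish_positive = 0.97             # assumed: nearly every positive trial gets published
p_publish_negative = 0.06             # assumed: very few negative trials get published

published_positive = positive_trials * p_publish_positive
published_negative = (total_trials - positive_trials) * p_publish_negative

share_positive = published_positive / (published_positive + published_negative)
print(f"Positive share of the published literature: {share_positive:.0%}")
# -> roughly 94%, even though only about half of all trials were positive
```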

But while Caryl puts the blame squarely on the journals, Oransky doesn’t point fingers. And some people put the blame on the researchers (or the people who fund them).

Not the Journals?

Is the lack of published negative results because researchers aren't submitting papers? Or because journals aren't accepting them? Could it be both?

Ben Goldacre, a Wellcome Research Fellow in Epidemiology at the London School of Hygiene and Tropical Medicine (and Bad Science blogger), testified on this issue April 22 before the U.K. House of Commons Science and Technology Select Committee, which is looking into clinical trials. Goldacre, who is also part of the AllTrials campaign to make all clinical study results public, told the select committee that medical journals are not the problem when it comes to publishing negative trial results.

In his testimony, Goldacre said that journals are “not the main barrier to publication” of negative results. “With the advent of open access journals where the business model is not dependent on the need to sell subscriptions with high profile ‘positive’ papers, there are now several open-access academic journals – such as the open-access journal Trials, and journals from BioMedCentral and the Public Library of Science – that will publish trials regardless of whether the results are positive.”

In a memo submitted along with his testimony, Goldacre argues that it is drug companies (who pay for clinical trials) and researchers who are withholding negative results. “Currently, drug companies and researchers are allowed to withhold the results of clinical trials, on treatments currently in use, from doctors and patients if they wish to,” Goldacre wrote. “This means that we are misled about the benefits and risks of treatments.”

And there are certainly researchers who don’t like to publish their negative results.

One senior researcher, who shall remain nameless, put it this way: “I think that most researchers in life science fields would not find these publications very useful. We are more interested in how things work rather than learning that our hypotheses were incorrect. That is, we use the failed experiments to adjust the hypotheses and conduct more experiments. Many times the negative results are included in research articles. I would not like to have to wade through tons of articles describing negative results because I would rather spend my time reading articles that tell me how things work.”

And that’s a problem facing at least some publishers who are actively looking for negative results to publish. Johan Kotze, of the University of Helsinki, is the editor of the Journal of Negative Results, which was established in 2003 to publish negative results in the fields of ecology and evolutionary biology.

“We truly believe that by reporting negative results we will benefit the scientific community,” Kotze says. But Kotze reports that “we’re having trouble finding researchers to submit, perhaps mainly because, one, some think it’s a joke, while, two, others are so pressed with publishing in high-flying journals (to secure funding, etc.) that they’d rather try publishing in more established journals.”

Does It Matter?

Whether the problem is that researchers aren’t submitting papers on negative results or that journals aren’t accepting them, the fact remains that negative results are not showing up in high-profile outlets.

Oransky pointed to a 2011 study in his post that evaluated how often negative or inconclusive results were published in surgical journals. Here’s how Oransky summarized the findings: “In the top-ranked journals, 6 [percent] of studies were negative or inconclusive, compared to 12 [percent] in the middle-tier journals, and 16 [percent] of those in the lowest-tier. (Of note: The lowest-ranked journal the researchers looked at was still in the top third of surgery journals overall.)”

In other words, while negative results may make their way into journal articles, they’re unlikely to be found in the most prestigious (and visible) journals.

And that means that there are scientists devoting time and effort to research that someone out there has already tried – and already knows will fail.

I find this subject fascinating and, over the next couple weeks, hope to run additional posts featuring different perspectives on the publication of negative results – including interviews with folks from peer-reviewed journals and research funding agencies. Stay tuned. (Update: additional posts are now up. See note at end of fourth paragraph.)


9 Responses to “The Challenge of Negative Results”

  1. Alex

    As a graduate student I find this particularly interesting. I don't have the time to waste replicating experiments that others may have already attempted and gotten negative results from. Knowing that beforehand would be exceedingly beneficial.

  2. L.A. Grange

    Just from a logical perspective, I do not understand a few things in the following text:

    "One senior researcher, who shall remain nameless, put it this way: “I think that most researchers in life science fields would not find these publications very useful. We are more interested in how things work rather than learning that our hypotheses were incorrect. That is, we use the failed experiments to adjust the hypotheses and conduct more experiments. Many times the negative results are included in research articles. I would not like to have to wade through tons of articles describing negative results because I would rather spend my time reading articles that tell me how things work.”"

    From this piece of text, there are a few things that I do not understand from a scientific perspective:

    #"We are more interested in how things work rather than learning that our hypotheses were incorrect"

    Good for you, but doesn't information about what 'doesn't work' also get you closer to finding out how (and why) 'things work'?

    Regardless of this: if a large percentage of findings published in journals could be false (e.g., see: http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124), then it might be the case that 'spending time reading articles that tell me how things work' may in reality be more like 'spending time reading articles that tell me how things probably do NOT work (but at least it all looks nice, and I can convince myself that it is accurate information, and the only really useful information I should be consuming as a scientist)'.

    #"That is, we use the failed experiments to adjust the hypotheses and conduct more experiments"

    Ah, I now understand that 'failed' experiments are indeed useful. It would indeed be silly to think, from a scientific standpoint, that this sort of information is not useful.

    That's just peachy then, we are on the same page. So here is what I subsequently don't understand: why not publish these 'failed experiments' so others can use this information immediately? That way, a) other scientists don't have to wait months or years until the article with everything that 'works' gets published (leaving aside whether the negative results ARE in fact included in research articles or not, and also leaving aside the inconsistency in reasoning that apparently reading about these 'failed' findings WITHIN an article would all of a sudden not constitute 'wading through tons of information describing negative results'), and b) other scientists can conjecture their own accounts of why things 'did not work out' and thereby possibly speed up scientific progress. An additional benefit for the original author of these 'failed' studies could be that he or she would get cited for providing this information.

    It all just seems so easy to do so much better... I thought scientists were smart. Why have they decided that it was a 'good thing' to only publish, read, and refer to 'things that work'? It just makes no sense to me from a scientific standpoint.

  3. Jessica

    Thanks for this post. My experience is mostly with review journals, not the places where people would publish original research, negative or positive. I suspect that the major blocker is what one person you quoted said - publications get grants and notoriety, so why would anyone want to publish negative findings? That would have to come from a different motive, but with the same time taken to prepare the paper, etc.

    The notion that someone might be duplicating work by doing a study that already proved negative elsewhere ... I wonder how much that happens. It would be interesting to know. But anyway, in the same way that people have to confirm positive results by conducting a second study, would it be valid to say the same for negative results? If negative results are just as important as positive results, then should negative studies also be held to the same standard, in terms of replicating the findings?

    The issue seems (to me at least) to be more about negative studies of drugs and other treatments - that if these aren't published alongside the positive ones, it's like hiding data. That seems to be more the issue, but that might be because my frame of reference is so much about drug development, not so much in other fields.

    Looking forward to the next one on this topic.

  4. Science Journalism: You’re Doing It Wrong! | New Religion and Culture Daily

    [...] Matt Shipman at SciLogs describes yet another questionable practice in scientific research: There is a bias in publishing toward results that confirm hypotheses. This should be really troubling if you think a la Popper that falsifiability is what makes science such a powerful tool. But if negative results don’t get published then disproven theories perpetuate and scientists waste time trying to prove things that have already been disproven. But beyond all that, it’s easy to imagine what this looks like to the wide-eyed wonder of science writers. Everything’s being proven all the time! All the theories are proven right! Science just keeps affirming, affirming, affirming! [...]

  5. Belén Suárez

    The exciting world of research is full of unpredictable results. There is a long road of hard work before obtaining a significant result, and after all that work, only positive results are normally published.
    I totally agree that the system fails if you do not have all the information and knowledge that has been generated. Negative results enrich and facilitate the advancement of research, not only because they save effort and work directed in the wrong direction, but also because dropping approaches that will not achieve the goals represents a considerable cost saving.
    While there may be so many failing hypotheses that documenting them all seems cumbersome, there are (thankfully) a great many of us doing research, and it is probable that we keep falling into the same errors; avoiding that would certainly be a significant saving of our time.
    The fact that these publications have not yet reached top-ranked journals leaves some people unconvinced about publishing them. And it's funny, because proving that something does not work, when a priori on paper there is no reason why it shouldn't, is often harder than proving that something expected to work actually works.
    There is a non-profit organization (The Society for the Improvement of Science) whose general purpose is to counter the editorial bias against negative results in mainstream scientific research. They publish The All Results Journals (http://www.arjournals.com), the first fully open-access journals dedicated to negative results, and they are giving negative results the importance they deserve.
    I am convinced that optimizing research infrastructure will translate into a better society, faster.

  6. To Null Is Human

    […] significant outcomes than papers that affirm a null result.  Matt Shipman in his article, “The challenge of negative results” points to a 2011 study in the Annals of Surgery that showed the […]
