Trust me, I’m a doctor… Guest post from Marcus Munafò

9 January 2013 by Suzi Gage, posted in Uncategorized

Happy new year everyone! Sorry for the silence recently, December ran away with me a bit. A post about Multiple Imputation is impending, but for now, my boss Marcus Munafò has written this. Enjoy.

A lot has been made recently of the ongoing reproducibility crisis in science, and of the extent to which current incentive structures motivate scientists to engage in questionable practices. These have been discussed in detail elsewhere, but include things like running multiple statistical tests and then reporting only the one that looks “best”, systematically excluding data points and re-running analyses (to much the same effect), and so on. Neuroskeptic has captured this well in the “9 Circles of Scientific Hell”.

Of course, no scientists really correspond to the Platonic ideal of a disinterested and entirely impartial truth-seeker. We all have our pet theories and prejudices, and we’re as prone as the next person to the usual range of cognitive biases that make it harder to see when we might be wrong (although our scientific training should at least mean that we know how to set up procedures to guard against these). How often, for example, have you had an argument where your opponent has said “You know what, you’re right, I’ve been wrong all along. Thank you so much for pointing out exactly why and how…”? Exactly. It doesn’t happen that much between scientists either.

One would hope that most scientists would ultimately want to look back on their career and reflect on the one or two genuine discoveries and advances that they contributed to, rather than how many times they published, which journals they published in, or how much grant income they generated. But it’s the metrics, rather than the advancement of knowledge, that scientists are increasingly incentivized to focus on. And since, ultimately, whatever safeguards we put in place, science relies on the basic honesty and integrity of scientists, these incentive structures are likely to undermine the progress of science. Don’t believe me? A couple of personal examples illustrate the point, one where I served as a reviewer of a journal article, and one where I served as the handling editor for an article.

In the first case, I’ve reviewed the same article countless times (it’s exactly in my area, so I’m always going to come up when the editor searches for likely reviewers). It has a fatal flaw. It’s a study of people quitting smoking. Since everyone quits at the beginning, and then some people relapse, the number of people who have quit successfully can only ever decline if you use the standard method of defining abstinence. In this article the number goes up. These numbers simply can’t be right, and suggest that the authors are using an inappropriate method to analyse their data. Every time I reviewed the article I pointed this out, and said that I’d be much happier seeing the data analysed correctly, even if the new analysis meant that the results were null. I have no problem with null results being published if they’re informative. Every time I saw the paper again, having been submitted to a different journal, the analysis remained the same. It’s now been published, basically unchanged, in a relatively obscure journal (I didn’t review this version).
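The arithmetic behind that fatal flaw is worth spelling out. Here is a minimal sketch, in Python with entirely hypothetical relapse data, of why sustained-abstinence counts can only fall over time: once a participant relapses, they count as a failure at every later follow-up, so the abstinent count at each week is monotonically non-increasing.

```python
# Hypothetical data: relapse_week[i] is the week participant i relapsed
# (None means they never relapsed during follow-up).
relapse_week = [2, None, 5, 1, None, 3, 8, None]

def abstinent_count(week):
    """Participants still continuously abstinent at the given week.

    Under the sustained-abstinence definition, a participant counts only
    if they have not relapsed at or before this week.
    """
    return sum(1 for r in relapse_week if r is None or r > week)

counts = [abstinent_count(w) for w in range(10)]
print(counts)  # [8, 7, 6, 5, 5, 4, 4, 4, 3, 3]

# The sequence can never rise: relapse is absorbing under this definition.
assert all(a >= b for a, b in zip(counts, counts[1:]))
```

Any reported abstinence curve that goes up over time therefore cannot have come from this definition, which is exactly the inconsistency the review flagged.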

The second case is even worse. An author submitted an article on the combined effect of two genes on a particular outcome. As the editor handling this manuscript I did a search to see what had been written on the subject before. There was only one other paper looking at one of these genes and the same outcome. It was by the same research group. In fact, it was exactly the same data, just without the inclusion of the second gene. Now, it can be fine to go back to your data and reanalyze them. New hypotheses will come up that need testing, and secondary analysis of existing data is a cost-effective way of doing this. But the new paper didn’t say anywhere that this was a re-analysis of existing data that had already been published. It would give the casual reader the false belief that the basic effect of one of these genes had been replicated in this “second” study. I replied to the author that I would be happy to send the paper out for review if it was shortened to a brief report, and it was made explicit that this was a re-analysis. I didn’t hear back from the author. A week later I received exactly the same article, unchanged, in my capacity as editor for a different journal. You can imagine my reply….

What do both of these cases tell us? That there are some scientists who are much more interested in the act of publishing than in what they publish. And perhaps also that there are too many journals out there, so that anything can be published eventually, with enough determination and creative writing. The profitability of journals suggests that the growth in potential outlets isn’t likely to change in the near future. I’m not sure what the solution is – clearly the incentive structures need to change, but that’s easier said than done, and initiatives like the so-called ‘impact agenda’ probably won’t help. Ultimately, the responsibility lies with individual scientists, but unfortunately scientists are humans too, and fallible.

Marcus Munafò


7 Responses to “Trust me, I’m a doctor… Guest post from Marcus Munafò”

  1. deevybee Reply | Permalink

    As you know, I could not agree more. I've spent the last month reading a specific literature that combines genetics and brain imaging, and I am more worried about science than I have ever been. The rewards in the field currently seem geared to benefit psychopathic story-tellers who are willing to hide inconvenient details and cherry-pick their data so they can spin their study into an impressive-sounding 'discovery'. All aided and abetted by the top journals. Some of these people are really clever, and it's as if it has become a game that they play because it amuses them. But it is not funny - it is holding up progress by diverting research funding to those who play the game best, and by setting us off on false scents. We have to unpick the errors that these people strew around the journals, and reading the literature has become like wading through treacle.
    The solution? I like Brian Knutson's suggestion that instead of the H-index we focus on the R-index, with R standing for replicability. http://edge.org/annual-question/2011/response/11587

  2. Deneck Reply | Permalink

    Sometimes, however, the reviewers are ignorant because they were chosen from the wrong field. Like those psychologists who keep reviewing my neuroethological paper. Birds are not humans, some of your dogmas don't apply!

  3. Jim Woodgett Reply | Permalink

    I completely agree with the thrust of this article but I do have a problem with the example of reviewing a manuscript multiple times. I always decline to review a manuscript that I've previously reviewed. I can see why this could also be problematic, but no one reviewer should stand in the way of a submission. If the author is stupid enough to resubmit it with a basic flaw, then so be it. Either it will be caught by another reviewer or it will be published. There's an awful lot of rubbish in the literature and most of it is innocuous because it's either ignored or recognized as rubbish. It pollutes the literature but in some ways serves a purpose, as transparent evidence of the quality of work by that scientist. We should be judged not just by the good we publish, but also have that offset by the bad. Indeed, our reputations are more likely set by what we choose not to publish.

    I know that there are unscrupulous researchers who will find ways to put out their work and, ideally, we should block their route to publish. However, research is inherently fault and error tolerant. We make genuine mistakes and this is often how progress is made. The danger is that by setting rigid standards, we might filter out research that is transformational. Of course, this is also mitigated by quality reviewers.

  4. Marcus Reply | Permalink

    Dorothy: Thanks for the comment. There's far more that I could discuss than I have space for here, and it is dispiriting at times. I've even had people tell me that they dropped contradictory data from papers and talks (theirs or other people's - see the second article linked below) because it told a "better story that way" (a direct quote).

    Jim: Thanks also - you make a very good point. I should have perhaps pointed out that I signed my reviews, and there was always at least one other reviewer. I also told the editor that I'd reviewed the article previously before I accepted the invitation. So I tried to be as transparent as possible. But I do wonder whether science is as self-correcting as we think it might be:

    http://pps.sagepub.com/content/7/6/670.full

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2714656/

  5. Sue Bertram Reply | Permalink

    Thanks for the great post. As a young graduate student, I often found reviews overly harsh. Over time, I realized that the review process as a whole is extremely valuable, with reviewers' comments strongly shaping and improving my work. I cannot, therefore, understand anyone who would completely ignore a reviewer's points and not even take the time to explain why they were ignoring them. Very frustrating, especially when one takes the time to write careful and thoughtful reviews. We all must strive to be more transparent, and ensure that we pass these ideas down to our students as well.


