Trust me, I’m a doctor… Guest post from Marcus Munafò
Happy new year everyone! Sorry for the silence recently, December ran away with me a bit. A post about Multiple Imputation is impending, but for now, my boss Marcus Munafò has written this. Enjoy.
A lot has been made recently of the ongoing reproducibility crisis in science, and the extent to which current incentive structures motivate scientists to engage in questionable practices. These have been discussed elsewhere in detail, but they include things like running multiple statistical tests and then only reporting the one that looks “best”, systematically excluding data points and re-running analyses to much the same effect, and so on. Neuroskeptic has captured this well in the “9 Circles of Scientific Hell”.
Of course, no scientist really corresponds to the Platonic ideal of a disinterested and entirely impartial truth-seeker. We all have our pet theories and prejudices, and we’re as prone as the next person to the usual range of cognitive biases that make it harder to see when we might be wrong (although our scientific training should at least mean that we know how to set up procedures to guard against these). How often, for example, have you had an argument where your opponent has said “You know what, you’re right, I’ve been wrong all along. Thank you so much for pointing out exactly why and how…”? Exactly. It doesn’t happen much between scientists either.
One would hope that most scientists would ultimately want to look back on their career and reflect on the one or two genuine discoveries and advances that they contributed to, rather than how many times they published, which journals they published in, or how much grant income they generated. But it’s the metrics, rather than the advancement of knowledge, that scientists are increasingly incentivized to focus on. And since, ultimately, whatever safeguards we put in place, science relies on the basic honesty and integrity of scientists, these incentive structures are likely to undermine the progress of science. Don’t believe me? A couple of personal examples illustrate the point, one where I served as a reviewer of a journal article, and one where I served as the handling editor for an article.
In the first case, I’ve reviewed the same article countless times (it’s exactly in my area, so I’m always going to come up when the editor searches for likely reviewers). It has a fatal flaw. It’s a study of people quitting smoking. Since everyone quits at the beginning, and then some people relapse, the number of people who have quit successfully can only ever decline if you use the standard method of defining abstinence. In this article the number goes up. These numbers simply can’t be right, and suggest that the authors are using an inappropriate method to analyse their data. Every time I reviewed the article I pointed this out, and said that I’d be much happier seeing the data analysed correctly, even if the new analysis meant that the results were null. I have no problem with null results being published if they’re informative. Every time I saw the paper again, having been submitted to a different journal, the analysis remained the same. It’s now been published, basically unchanged, in a relatively obscure journal (I didn’t review this version).
The second case is even worse. An author submitted an article on the combined effect of two genes on a particular outcome. As the editor handling this manuscript I did a search to see what had been written on the subject before. There was only one other paper looking at one of these genes and the same outcome. It was by the same research group. In fact, it was exactly the same data, just without the inclusion of the second gene. Now, it can be fine to go back to your data and re-analyse them. New hypotheses will come up that need testing, and secondary analysis of existing data is a cost-effective way of doing this. But the new paper didn’t say anywhere that this was a re-analysis of data that had already been published. It would give the casual reader the false impression that the basic effect of one of these genes had been replicated in this “second” study. I replied to the author that I would be happy to send the paper out for review if it was shortened to a brief report and it was made explicit that this was a re-analysis. I didn’t hear back from the author. A week later I received exactly the same article, unchanged, in my capacity as editor for a different journal. You can imagine my reply…
What do both of these cases tell us? That there are some scientists who are much more interested in the act of publishing than in what they publish. And perhaps also that there are too many journals out there, so that anything can be published eventually, with enough determination and creative writing. The profitability of journals suggests that the growth in potential outlets isn’t likely to change in the near future. I’m not sure what the solution is – clearly the incentive structures need to change, but that’s easier said than done, and initiatives like the so-called ‘impact agenda’ probably won’t help. Ultimately, the responsibility lies with individual scientists, but unfortunately scientists are humans too, and fallible.