Quality over quantity
Darwin’s theory of Natural Selection. Einstein’s theory of general relativity. Watson and Crick’s description of the DNA molecule. All of these, and many more, are easily classed as some of Science’s Greatest Discoveries. But with modern pressures to publish high-impact papers as often as possible, are the opportunities to make the next Great Scientific Discovery being stifled?
As I’ve mentioned before, last year one of the academic heavyweights of social psychology, Diederik Stapel, was found guilty of faking data in a lot of his research. Quite a lot of data, actually – over 30 scientific papers and numerous PhD theses. In the aftermath, a lot of difficult questions have been asked about how and why something like this could possibly happen. Some of the reasoning behind his actions comes from Stapel himself:
“…I did not withstand the pressure to score, to publish, the pressure to get better in time. I wanted too much, too fast. In a system where there are few checks and balances, where people work alone, I took the wrong turn.”
Stapel’s behaviour is quite clearly inexcusable, but the pressure to publish will feel familiar to many other researchers – and perhaps most acutely to the one hundred academics who recently lost their jobs at the University of Sydney for not publishing frequently enough.
In principle, the idea of publishing as much as possible doesn’t seem too bad – if you’re running lots of experiments and doing lots of work, it means we might get to see those Great Scientific Discoveries sooner, right? But this sort of mentality can cause (and has already caused) a number of undesirable side effects. Perhaps the best known is ‘publication bias’. This can manifest itself in different ways, but generally refers to the tendency for positive results (in other words, those in which a hypothesis is confirmed) to be much more likely to be published than negative or inconclusive results. Put another way, if you run a perfectly good, well-designed experiment but your analysis comes up with a null result, you’re much less likely to submit it for publication, let alone get it published. This is bad, because it means that the total body of research that does get published on a particular topic might be completely unrepresentative of what’s actually going on.

Publication bias can be a particular issue for medical science. Say, for example, I run a trial of a new behavioural therapy that’s supposed to completely cure anxiety. My design is perfectly robust, but my results suggest that the therapy doesn’t work. That’s a bit boring, and I don’t think it will get published anywhere prestigious, so I don’t bother writing it up; the results just get stashed away in my lab, and maybe I’ll come back to them in a few years. But what if labs at other institutions run the same experiment? They don’t know I’ve already done it, so they just carry on with it. Most of them find what I found, and again don’t bother to publish their results – it’s a waste of time. Except a couple of labs did find that the therapy works. They report their experiments, and now it looks like we have good evidence for a new and effective anxiety therapy, despite the large body of (unpublished) evidence to the contrary.
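To get a feel for how this plays out, here’s a toy simulation in Python. Every number in it – twenty labs, forty participants per group, a therapy with zero real effect – is a made-up assumption for illustration, not data from any real trial:

```python
# A toy simulation of publication bias (all numbers are assumed, not real data).
# Twenty labs each test a therapy that truly does nothing; only the labs
# that happen to get p < 0.05 write the result up.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

N_LABS = 20        # independent labs running the same trial
N_PER_GROUP = 40   # participants per group in each trial
ALPHA = 0.05       # conventional significance threshold

published, file_drawer = [], []

for lab in range(N_LABS):
    # Both groups are drawn from the SAME distribution:
    # the therapy has zero true effect on anxiety scores.
    control = rng.normal(loc=50, scale=10, size=N_PER_GROUP)
    therapy = rng.normal(loc=50, scale=10, size=N_PER_GROUP)

    # Two-sided, two-sample t-test for a difference in group means.
    t_stat, p_value = stats.ttest_ind(therapy, control)

    # Publication bias: only 'positive' (significant) results get written up.
    if p_value < ALPHA:
        published.append(p_value)
    else:
        file_drawer.append(p_value)

print(f"Labs with a significant result (published):  {len(published)}")
print(f"Labs with a null result (file drawer):       {len(file_drawer)}")
# With alpha = 0.05, roughly 1 lab in 20 will 'find' an effect that
# isn't there -- and those are the only results anyone gets to see.
```

Run it with different seeds and the pattern holds: on average about one lab in twenty will ‘discover’ an effect that doesn’t exist, and under publication bias those false positives are the only results that make it into print.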
This leads into all sorts of issues rooted in precisely how we statistically analyse our work, but there’s another, simpler problem that this phenomenon causes: it wastes a lot of time for a lot of people. There must be countless scientific studies out there that, whilst methodologically sound, simply didn’t produce a result deemed interesting enough to publish. And because they weren’t published, we have no measure of how many times they’ve inadvertently been replicated elsewhere. Compounding the problem, if a particularly time-intensive experiment doesn’t work out, researchers might find themselves under pressure to quickly publish something else instead; something that might not be particularly interesting or useful, but is quick, easy, and likely to have a positive outcome. The end result is that we’re sacrificing scientific creativity and research diversity for safe options, science in small increments, and administrative box-ticking. In Psychology, projects like PsychFileDrawer – an online archive for unpublished replication attempts – are starting to address this issue, but clearly more needs to be done.
We have to accept, and be comfortable with, the fact that theories and ideas need time to fully develop. As scientists, we need to be okay with things not working out the way we thought they would. The world is a big, noisy, messy place, and not only is that absolutely fine, it’s also exciting. Darwin’s theory of evolution was almost 23 years in the making; would the modern-day pressure to publish have seen On the Origin of Species confined to a dusty lab drawer, in favour of a quick and easy, but perhaps mediocre, paper?