Hey science, what’s going on?
To many, retracted science papers are an unfortunate part of the process. Scientists, we are told, are only human, and will inevitably make mistakes. And it's a good thing that retractions exist - they show honesty and integrity in the system; that scientists are willing to admit to their mistakes, and correct them for the common good. Or so we've been led to believe. As Carl Zimmer notes in a recent New York Times piece, Nature reported last year that the retraction rate has risen faster than the number of papers published over the last ten years - going up from about 30 retractions per year in the early 2000s, to as many as 400 in the last year. While at first this rise was welcomed by a wide range of parties, you can't help but feel that it's all starting to become a bit of an embarrassment. Why are so many papers being retracted?
In the wake of the scandals in social psychology earlier this year, coupled with similar (although arguably worse) issues in anesthesiology, the answer is deeply worrying. In a recent paper in PNAS, Arturo Casadevall and colleagues reviewed over 2000 retracted biomedical and life science research articles, and suggested that a whopping 67% of these articles were retracted because of misconduct, including confirmed or suspected fraud (43%), duplicate publication (14%) and plagiarism (10%). Sit back, let it sink in, and think about that for a moment. That's nearly 1000 papers retracted because someone cheated, or was suspected of foul play.
There are a number of reasons why this is happening, some of which are neatly summarised by Diederik Stapel himself:
"...I did not withstand the pressure to score, to publish, the pressure to get better in time. I wanted too much, too fast. In a system where there are few checks and balances, where people work alone, I took the wrong turn."
This quote highlights two important points. One is the pressure to publish, which has always been present in academia, but has been compounded in recent years. The second is the inadequate regulation system that science currently uses to check for errors. Some of the factors that contribute to these problems include:
1) The economic downturn has resulted in funding bodies having limited resources to distribute.
2) As Dr Chris Chambers has recently pointed out, successful grant applications are increasingly being treated by universities as outputs, when they are in fact inputs. Universities inevitably see large grants as better than small grants, and this has led some institutions to consider getting rid of academics who aren't seen as bringing in the expected income.
3) Large projects require established names and a substantial body of research backing them. Establishing yourself as a top researcher requires a strong publication record, which many equate with a large one.
4) Academic journals prioritise the publication of novel results, and are resistant to the publication of null results or replication studies, resulting in an environment in which publication bias is actively, if unintentionally, nurtured.
5) Scientists are pressured to produce novel results in order to publish in high-impact journals, particularly with regard to the upcoming REF.
6) In some cases, time constraints on getting papers out may result in questionable ‘data-peeking’ techniques being rationalized and employed.
7) Inadequate checks are in place to catch inappropriate behavior; often the raw data are unavailable, and the peer-review system frequently fails to detect inappropriate analyses.
All of these pressures can be particularly salient for early career researchers, for whom a publication in a decent journal may mean the difference between a good job and no job at all. So is it any wonder that such a crappy incentive structure has led to cheating?
There is a problem in science, and it's one that's been around for a while. I think more people know about it than they would care to admit, and while it's great that we're starting to seriously think about ways to tackle these problems (see here and here for good examples that there should be more of), what I really think is needed is a change in mentality by people at all levels in the system. I genuinely don't believe that anyone who has a career in science got into it for fame, money or glory; they got into it because they love the pursuit of knowledge. I fear that many of us have lost sight of that fact, drowned in a sea of impact factors, h-indices and grant lotteries. The problems that science faces are not the result of any one particular person alone. If we genuinely want to make a change, every single one of us has to stand up and make a difference, stop trying to lay blame at someone else's feet, and remember why we're here doing this work in the first place.