It’s not a failure when you fail to replicate
Let’s get this out there to begin with, so it’s absolutely clear in everyone’s minds. ‘Failure to replicate’ a study does not mean that the original study was wrong, poor, or fraudulently conducted. It does not call into question an entire field of science. It does not call into question the integrity of any scientists involved. It simply means that the results of the replication did not match those of the original study, which could happen for a number of reasons. It is a normal part of the scientific process, and a good part at that.
Which is why I was completely flummoxed by a recent Nature headline screaming “Disputed results a fresh blow for social psychology”. The article relates to a recent study published in PLOS ONE, which looked at the concept of ‘intelligence priming’ - the idea that thinking about someone perceived to be smart (or stupid) can affect your performance on a subsequent intelligence test. Over the course of nine experiments, the study attempted to replicate the findings of a 1998 paper by Dijksterhuis and van Knippenberg, and the results all pointed towards intelligence priming providing no advantage in subsequent intelligence tests (i.e. at odds with the original results). I’m not going to go into the nitty-gritty of the specific studies - if you’re interested, it’s worth reading the paper along with the responses in the comments. But there are two points to note here. One, as already mentioned, is that replication studies are important, and should form a much larger part of scientific research - so it’s a good thing that this study was conducted (and published). The second is that one failure to replicate does not constitute a death blow for a particular theory.
To echo Gary Marcus’ recent post on the matter, social psychology does not equal priming, and priming does not equal social psychology. To say that one failure to replicate one particular phenomenon is a blow for the entire field is disingenuous, and tarnishes the many admirable attempts currently being made to not just turn psychology around, but also to lead the way in reforming scientific research practices. Initiatives like the Reproducibility Project, Cortex’s Registered Reports (which went live this week), and BMC Psychology’s open access approach to reviewing are all shining examples of the positive and beneficial moves currently being made.
Perhaps another, more worrying, problem is the association between failures to replicate and fraud. Again, to be clear on this, there are two completely separate conversations to be had. One is the need to replicate psychological studies to determine whether the effects we see are genuine and robust. The other is whether questionable research practices (QRPs) are leading to over-inflated and erroneous results. Again, others have gone into excellent detail on these matters recently, but it’s worth remembering that this isn’t a two-way street. Replication is part of the answer to preventing or discouraging QRPs; QRPs are not an inherent part of failures to replicate. In my opinion, to discuss any failure to replicate with specific reference to the fraud of Stapel and Smeesters, as the Nature article did, unfairly and unnecessarily calls into question the integrity of honest researchers.
In short, I don’t think every failure to replicate a study is newsworthy, and I certainly don’t think it helps anyone to persistently link such failures to extreme cases of fraud. Many psychologists are actively trying to reform the field in innovative and interesting ways, for the benefit of everyone. Let’s concentrate on that being a positive thing.