Tackling the F word – guest post by Chris Chambers

18 October 2012 by Suzi Gage, posted in Uncategorized

As Pete Etchells announced on Tuesday, he and I are organising a session at SpotOn London, in the Policy sessions stream, about how to fix the problems of fraud and academic misconduct currently afflicting science.

In the run-up to the session we'll be posting some relevant content, in the hope of stimulating debate beforehand. So, without further ado, here is a guest post from Chris Chambers, one of the panellists.

Chris Chambers is a Senior Research Fellow at the School of Psychology, Cardiff University, where he studies human cognitive neuroscience. Together with colleagues from Cardiff University and UCL, he has published articles in the Guardian, Le Monde and the New York Times on current issues in science and science/media interactions, and he has recently co-authored a submission to the Leveson Inquiry.

What is scientific fraud? Where should we draw the line between fraud and the grey area of legal but questionable practices that form so-called “cultural” problems in science? Does drawing this line even make sense?

Some things are obviously fraud, such as fabricating data. Making up results is as blatant as stealing from a cash register. But, as we know from the financial world, white-collar crime is as subtle and complex as human behaviour itself.

To illustrate this point, let’s consider a couple of scenarios. Which of the following do you consider to be scientific fraud?

1) A scientist collects 100 observations in an experiment and discards the 80 that run counter to his desired outcome

2) A scientist runs ten experiments, then selectively writes up the two that produced statistically significant effects

Everyone would agree that the first scenario is fraudulent. Many (perhaps most) of us would also view the second scenario as fraud, or at least misconduct.

Mathematically, the two scenarios are similar: whether I discard 80% of my data or 80% of my experiments, I end up in much the same place – with biased evidence. Morally, they are also arguably on par. After all, how is selectively reporting an experiment based on a desirable outcome any less dishonest than selecting data within an experiment for the same reason?
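
To see how fast scenario 2 corrupts the record, consider a toy simulation (my illustration, on made-up data, not an analysis of any real study): a thousand hypothetical labs each run ten experiments in which the true effect is zero, and only the statistically significant results get written up.

```python
# Toy simulation of scenario 2: selective reporting of null experiments.
# Requires numpy and scipy; all data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_labs, n_experiments, n = 1000, 10, 30  # 30 subjects per group

published = []
for _ in range(n_labs):
    for _ in range(n_experiments):
        # Both groups come from the same distribution: the true effect is zero.
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        t, p = stats.ttest_ind(a, b)
        if p < 0.05:  # only "successes" are written up
            published.append(abs(b.mean() - a.mean()))

print(f"published {len(published)} of {n_labs * n_experiments} experiments")
print(f"mean published |effect|: {np.mean(published):.2f} (true effect: 0)")
```

Roughly one experiment in twenty clears p < .05 by luck alone, so the selectively reported literature consists entirely of false positives – and their effect sizes look respectably large, because only flukes big enough to reach significance survive the filter.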

But now consider a third scenario.

3) A journal reviews ten papers on the same topic and selectively publishes the two that reported statistically significant effects

Things just got complicated. In a breath, we’ve ascended from the quagmire of individual dishonesty to the safer terrain of groupthink and the “cultural” problem of publication bias. We can now relax in the comfort of diminished responsibility, forgetting that the incentive structure in scenario 3 drives scenario 2, which in turn encourages the more extreme scenario 1.

The point of this exercise is to show that unless scientific fraud is defined narrowly as (only) data fabrication, it’s actually quite difficult to distinguish it from a continuum of dishonest practices at the individual and group level. What about uncorrected p value fishing? Or analysing a dataset 100 different ways and only reporting the analysis that “worked”? Or creatively excluding outliers? Are these fraudulent? If not, are you excusing such practices on rational grounds or simply because they are so commonplace? What would your neighbours think? (Hint: what did your neighbours think of bankers?)
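
The arithmetic behind "100 different ways" is worth making concrete. With 100 independent looks at pure noise, the chance that at least one analysis "works" at p < .05 is 1 − 0.95^100, or about 99.4%. A short simulation (again a toy example on synthetic data, not real results) bears this out:

```python
# Toy simulation of p value fishing: analyse noise 100 ways, keep the "best".
# Requires numpy and scipy; all data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_datasets, n_analyses, n = 1000, 100, 30

fished = 0
for _ in range(n_datasets):
    # 100 unrelated outcome measures, all pure noise, each tested against zero.
    data = rng.normal(0.0, 1.0, (n_analyses, n))
    pvals = stats.ttest_1samp(data, 0.0, axis=1).pvalue
    if pvals.min() < 0.05:  # report only the analysis that "worked"
        fished += 1

print(f"datasets with at least one 'significant' result: {fished / n_datasets:.1%}")
```

In practice the 100 analyses are correlated rather than independent, which softens the numbers but not the lesson: given enough undisclosed flexibility, "significance" is all but guaranteed.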

There are no easy answers to these questions, which is why I believe that tackling scientific misconduct requires tackling the deeper causes, while simultaneously increasing our ability to detect the most serious fraud. We need to put aside our (natural) moralistic inclinations and treat science like a biological system in which misconduct is a naturally occurring disease that requires treatment and prevention.

Prevention will never stop all fraud, but we can help by eliminating the incentives that provide gateways to more serious acts of dishonesty. One approach that I, and others, have proposed is to introduce a new kind of scientific publication in which the decision to publish is based on methods rather than results. And we’ve argued previously that a big part of the solution is to value discovery of knowledge over production of publications. Since something can be considered discovered only when it is independently replicated, valuing replication is an obvious broad-spectrum antibiotic for bad practice (although not a complete cure, see here). Yet, as ridiculous as it must seem to outsiders, many areas of science have evolved a tabloid-like culture in which what’s new trumps what’s true.

We also need to take steps to increase transparency and accountability. For a start, the submission of raw data should become a standard stage of the publication process. Second, the UK Research Integrity Office should scrutinise a random percentage of raw data for signs of fraud, using methods developed by Uri Simonsohn and others. Third, we should take greater steps to protect whistleblowers, who are often junior members of research labs. And fourth, being found guilty of scientific fraud should result in more than unemployment and a toothless report under the ‘file and forget’ category – it should be treated as a criminal act, like any other white-collar crime.
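
On that second point, the kind of screening Uri Simonsohn has pioneered is statistical: ask whether reported summary statistics could plausibly have come from real samples. The sketch below is my own simplification in that spirit – not Simonsohn's published procedure, and the function and figures are hypothetical – testing whether a set of reported group standard deviations is suspiciously similar.

```python
# My simplified sketch (not Simonsohn's exact procedure) of a statistical
# audit: how often would honest, independent samples produce group SDs as
# similar as the ones reported? All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def sd_similarity_pvalue(reported_sds, n, n_sims=100_000):
    """Fraction of simulated studies (k independent normal samples of size n)
    whose group SDs are at least as similar as the reported ones."""
    k = len(reported_sds)
    observed_spread = np.std(reported_sds)          # how alike the reported SDs are
    pooled = float(np.mean(reported_sds))           # plausible common scale
    sims = rng.normal(0.0, pooled, (n_sims, k, n))  # honest synthetic samples
    sim_spread = sims.std(axis=2, ddof=1).std(axis=1)
    return float((sim_spread <= observed_spread).mean())

# Hypothetical report: four groups of n=15 with near-identical SDs.
print(sd_similarity_pvalue([1.01, 1.02, 1.00, 1.01], n=15))  # ~0: implausible
```

Genuine samples of 15 observations produce noisy standard deviations; four groups whose SDs all land within a couple of hundredths of one another would essentially never occur by chance, which is exactly the kind of red flag such audits look for.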

While we’re making these changes, it is vital that we keep talking about fraud. As scientists, we find it difficult to acknowledge the Fraud in the room because doing so tarnishes our mythology as trustworthy and selfless beams of light, as “objective seekers of truth” (as Daniele Fanelli puts it). Sure enough, find any blog or news article written about scientific fraud and the odds are that beneath it you’ll also find at least one comment lambasting the author for daring to mention the F word. Doing so, we’re told, invites attacks on science at a time when society can ill afford it. We’re reminded that fraud is exceedingly rare (even though it isn’t), and besides, the scientific record is self-correcting in the long run, so aren’t we making a big deal about nothing?

The answer is no. Waiting for the system to self-correct wastes time, resources, and – in medical research – can cost lives. And regardless, the bar is higher for us than for the rest of society. Yes, scientists are human and, yes, we’re egotistic and fallible. But we’re also better trained than anyone to understand error, to reason based on evidence, to solve problems, and to detect bullshit. This makes our tolerance of fraud and dumb incentives all the more laughable, irrational and frankly inexcusable.


One Response to “Tackling the F word – guest post by Chris Chambers”

  1. Khalil A. Cassimally

    Fascinating post, especially the three scenarios, which illustrate our perceptions of scientific fraud.
