Fixing the Fraud: Q&A with Professor Charles Hulme
With less than a week to go until SpotOn, here is a Q&A with another panelist from the session on academic misconduct that Pete Etchells and I are running. If you missed the previous articles, they can be found here (Q&A with Ginny Barbour) and here (guest post by Chris Chambers). Don't forget that you can follow the session (Monday 12th at 3.30pm UK time) on Twitter using #solo12fraud if you're not at the conference yourself.
Here's UCL's Charles Hulme answering our questions.
Can you tell us a bit about yourself, and your work for Psychological Science?
I am a Professor of Psychology at UCL. Previously, I worked at the University of York for 33 years. My main research interests are in developmental disorders, particularly disorders of reading and language development. In recent years I have spent a lot of my time carrying out randomised trials to evaluate the effectiveness of educational treatments for children’s reading and language difficulties.
I became an Associate Editor for Psychological Science in 2007, and from 2012 I became one of the Senior Editors of the journal. The term Senior Editor here is probably equivalent to “Deputy” editor. Psychological Science has been a very successful journal. It covers the whole range of research in psychology, and has an unusual format consisting of short papers (Research Articles, the standard form of paper, are limited to 4000 words). Psychological Science has over 3000 papers submitted each year, and one of the Associate Editors and either the Editor-in-Chief or one of the Senior Editors read each paper to decide if they should go out for review. Only roughly 30% of papers submitted are deemed of sufficient quality to go out for extended review.
What do you think journals are doing well in order to combat academic misconduct, and what do you think they could be doing better?
I don’t think that journals do much to try to combat academic misconduct, and I don’t think that journals and their editorial staff see that as one of their roles. Neither do reviewers. I guess the normal assumption is that people are honest, and report accurately their methods and findings. Journals and their editors don’t really have the resources or the skills to work as detectives rooting out fraud!
Are we likely to see a shift in emphasis in the big journals, away from solely novel results, to perhaps more replications? Do you think this will help prevent misconduct?
There has recently been an extended discussion about this amongst the Editorial Board of Psychological Science. So we see this as an important issue.
However, I doubt that we are going to move to a position where top journals are full of replications. I think positive replications are always going to be hard to publish, except perhaps in areas where a particular finding has very direct and important applied implications (does taking aspirin each day reduce rates of cancer or heart attacks?). Failures to replicate key results are now recognised as potentially important, and I think high-ranking journals increasingly recognize this and will become more open to publishing such papers. But failures to replicate are likely to remain a fairly small proportion of papers published in any journal. Fundamentally, journals will remain most interested in new results which seem to push forward our understanding.
Failures to replicate may be one force to counter academic misconduct. However, such work doesn’t “prevent misconduct”; it simply alerts people to the fact that such misconduct may have taken place. Of course, there may be many reasons why results don’t replicate which have nothing to do with academic misconduct (e.g. poor measures or errors in experimental design).
Psychological Science has recently had to retract papers - how did these retractions come about, and have any changes been put in place to try and prevent similar situations in future?
I wasn’t directly involved in any of these decisions, but I have asked the former Editor of Psychological Science (Robert Kail) about this. I understand that the papers by Stapel were retracted only after an investigation by the Dutch academic authorities concluded that there was good evidence that some of these papers contained data that were fraudulent.
Do you think the peer-review system needs to be re-structured? How would you change it?
I don’t think peer review exists to counter academic misconduct. I don’t believe there is any evidence that peer review, on the whole, is working badly. Peer review is a bit like democracy – both may be imperfect, but they are the best systems we have come up with so far!
If you could pick one thing to change in order to deter academic misconduct, what would it be?
Fundamentally, I believe academic misconduct, like misconduct in other fields (banking comes to mind as an area where we hope people will be honest, but where evidence indicates that there are failings), is probably quite rare. Recently, there has been statistical work devoted to trying to discover data patterns that are “too good”, and I imagine this sort of work will continue and become more sophisticated. I think that work devoted to replicating key findings will also grow, and that funding agencies need to realize the importance of such work and be prepared to fund it. There is also a move to provide repositories on the web where people can lodge reports about failures to replicate.
What made you agree to be on this panel?
An insatiable desire for fame and fortune!