The Generation Game
"Scientists have found that...."
"A significant new study could show...."
Every day we're bombarded with information from the media. From the moment we turn on the radio or TV in the morning, through our newspaper-reading commute, to collapsing into bed at night after the late news, we are faced with stories of science, controversies over ethics and methods, and what might give us cancer this week. But how do we know what to believe, and what's just.... well, to put it bluntly, crap?
It might surprise you to know that scientists have exactly the same problem. The number of scientific journals published is huge and getting bigger every year - the most commonly quoted figure is 20,000. The number of articles you can find in your particular scientific field may be in the millions. How do you decide what's worth paying attention to, and how do you decide what's junk?
In the next few posts we're going to look at ways of sorting the quality scientific literature from the more dubious. How do we decide that one piece of research is more valid than another, and how do we combine the results of lots of bits of good research into something more helpful? We're going to look at three kinds of review - Peer Review, Systematic Review and Meta-Analysis - which are just the job.
First, how is scientific literature generated in the first place?
How the process works
If I were a research scientist, a lot of my time would be taken up writing proposals – applications to a grant funding body to do some research. This money would usually be for a set period of time, to pay my rent and keep me in pot noodles and clean pants, employ assistants, rent equipment, pay lab fees etc. Say, for instance, I want to look at the evolutionary significance of nose hair development in moles. I write a proposal to get funds for my research and, after the proposal has been peer reviewed, funds are granted. Sometimes this money comes from private institutions, but much more often it comes from government organisations (a hypothetical Nasal Hair Research Council, for instance).
So far, so good. I spend a couple of years researching my Mole Nose Hair theory and when I’m ready with my findings I look to publish a paper in a journal. The scientific publishing world is dominated by the big science journals – volumes that are published, for instance, quarterly and whose main content is papers from people like me. Journals are the basic scientific currency, the way ideas are communicated. But these aren’t the kind of journals you buy from the newsagent. Although many newer journals offer free, open access online, the more established old guard are still generally available only by subscription and most subscribers are academic institutions and libraries. Subscription is pricey and readers usually rely on being part of an organisation that subscribes to them.
In the “currency” of journals some hold much more weight than others. This weight is judged by something called the Impact Factor. Each year, Thomson Reuters takes a selection of journals and, for each one, counts how many times the articles it published over the previous two years were cited that year. Because all scientific papers and articles cite their sources and where their information came from, the more citations a journal receives per article, the more influential it's perceived as being. In a bid to be published in a journal with a high impact factor I may try the International Journal of Mole Studies, but as I'm an early career researcher I stand a better chance with Nasal Hair Journal.
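The arithmetic behind the standard two-year impact factor is simple enough to sketch. The figures below are made up purely for illustration - the journal names from this post, not real citation data:

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """Two-year impact factor: citations received this year to papers
    published in the previous two years, divided by the number of
    papers published in those two years."""
    return citations_this_year / items_prev_two_years

# Hypothetical: the International Journal of Mole Studies published
# 200 papers over the previous two years, and those papers were
# cited 500 times this year.
print(impact_factor(500, 200))  # -> 2.5
```

So a journal whose recent papers average two or three citations each in a year gets an impact factor of two-point-something; the handful of journals everyone cites constantly score far higher, which is why they carry so much weight.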
But being accepted by a journal editor is only the beginning. Because science is rigorous and the scientific method is important, my paper will then go through the process of Peer Review. My Mole Nose Hair paper will be given out by the journal's editor to several of my colleagues in the mole research field, and some experts who have written other papers on nasal hair evolution. Who the editor selects to review my paper is pretty much up to them, but a good journal editor will do their very best to find a few diverse scientists across my field, who have performed similar studies, to take a look.
What Actually Gets Checked?
As well as checking the basic language and that the paper makes sense, peer reviewers will work their way through a series of checks to see if they think my paper is worthy of publication. These considerations are explained in a PDF produced by The Voice of Young Science, called Peer Review - The Nuts and Bolts. (I borrow heavily from it here, and recommend it as further reading if you're interested)
- Is the paper actually right for that particular journal? Does it fit into the scope, and does it reach the general standard?
- Is the hypothesis (the question we're actually asking) clear, and is it an answerable question?
- Does my study use appropriate study design, methods and analysis of the results? If not, I might as well not have bothered (we'll be looking in much more detail at the types of study epidemiologists can use over the next month or two)
- Does my study challenge existing paradigms - is it novel? Does it add to existing knowledge in a way that moves things forward? (This brings up the whole issue of valuing novelty over replication, but we'll leave that for now - it's a whole blog post of its own)
- Do I describe my methods well enough that other researchers could replicate my study if they wanted?
- Is the statistical analysis reasonable, and does my level of statistical significance seem about right?
- Have I been ethical, and did I get permission from ethics committees where I needed to?
- Do the results actually answer the question posed at the beginning?
The Way it Gets Done
Peer review can be done in a few ways. The most common is the "single blind", where reviewers will know the names of the paper's authors, but the authors won't know who the reviewers are. Sometimes journals may use a "double blind" method where the paper is sent out without the authors' names attached. In theory this helps to make sure reviewers aren't biased - it would be easy to plump up a big name in your field if you thought it might help you in your career later, or even belittle someone with whom you have a bit of a problem. But in practice, it's pretty easy to work out who the authors of a paper are just by knowing what's going on and who's doing what in your field - science becomes a small world the more specialised it gets. Finally there's the open review system, where not only does everyone know the names of everyone else, but the peer review reports are published alongside the final paper - this allows for maximum clarity and openness, but may also put some peer reviewers off if they're critical of some of the aspects of a paper.
So peer review is a vital step in the process of generating reliable scientific literature. Its importance can't be overstated, which makes it all the more amazing that scientists are neither paid nor trained in any way to do it. Doing peer review is considered by most scientists as just doing their bit, making a valuable contribution to a higher aim. Increasingly, with so much pressure on scientists' time writing those grant proposals we talked about earlier, there are calls for peer reviewers to be paid for their time. Some journals offer a sort of payment in kind, such as free access to their indexing service or a nice "thank you" in a big annual list, but this is about as far as it currently goes. Although there are fears that payment may encourage people who aren't very well qualified to review, and that the editor's selection may favour "friends", there's certainly a case for more formal recognition of all that work.
The lack of training also means peer review can be difficult for a young researcher, lacking experience and just starting out. Many will try to find a more experienced mentor or belong to a journal club set up by their place of work. Journal clubs are where a small group discuss and critique newly published papers in their field, ostensibly to keep up with the latest research, but they also prove a good training ground for peer reviewers. It can also be very useful to look at reviews published alongside papers in open review journals, and often editors will tell peer reviewers if the paper was eventually accepted or rejected and send them copies of all the other reviews that sat alongside their own. But while all this is helpful, formal training may still be a good option in future.
Though thankfully it's a rare problem, peer review can rarely detect plagiarism or cases where a paper's authors have reported fraudulent results. It's best seen as the first step in a long process where a paper will be published, people will attempt to replicate it, and it will be assessed by systematic reviews for years to come. Science is, at the end of the day, a self-correcting process, however long that may take. There are other concerns, in the (rather wonderful) words of James Hawkes of Straight Statistics:
"It's a good thing scientists are mostly honest, because peer review offers the greatest possible temptation to steal ideas, to show favour to former students, to boost favoured theories or to do down rivals. Honest they may be but they aren't saints, so we must expect all of these things to happen from time to time."
Still, peer review is a vital step in the generation of scientific literature. It's a necessary quality mark, and it's not for nothing that most scientists and journalists will ignore research that has not been peer reviewed. Without it, it would be very hard to weigh up claims and decide what may be "the least wrong theory" (which is about the best we can do with the scientific process). As members of the public bombarded by news, whenever we see snippets like the two I started with, our next question should be "was it peer reviewed?" That should at least start us on the right track.
But peer review is only the beginning - next time, we'll look at what can happen to a paper after it's been published.