“Burn Your Textbooks”

2 August 2013 by Tania Browne, posted in Evidence Based Medicine

There's a wonderful quote from Sydney Burwell, former Dean of Harvard Medical School, that opens a chapter in the book "Evidence Based Medicine - How to Practise and Teach It":

My students are dismayed when I say to them, "Half of what you are taught as medical students will in 10 years have been shown to be wrong. The trouble is, none of your teachers know which half."

It's the biggest issue with evidence based medicine - the "evidence" never stops coming. Medical knowledge doesn't stop moving once you've graduated, and the challenge for everyone in the healthcare professions is how to keep learning. "Problem based learning" helps, but it depends on finding the best evidence among the forest of peer reviewed papers and journal publications out there quickly and efficiently.

In the second of my three-part series on reviews, I'm looking at systematic reviews. These are an excellent tool for dealing with both the sheer volume of literature and the dodgy methods you may encounter. They aim to apply the same kind of rigour in examining papers that was used in producing the papers themselves, and to produce a summary of the clinical literature for a particular question, removing bias and random error.

The Trouble with Experts

If you want to learn about something and you're not sure where to begin, one of your first actions is to seek out an expert. You might wander into a book shop and pick up their textbook, or you might go to a journal article they've written giving their own analysis of the papers produced in their field to date. So far, so good...

Or maybe not.

You have little way of knowing if that textbook you picked is based on evidence, or the anecdotes and experience of the expert in question. You have no way of knowing if they systematically searched the literature, or just chose to quote the studies that confirmed their own views. And if the studies referenced at the back are more than a year or two old? In something as fast moving as medicine, you might as well forget it.

There's much the same problem with traditional literature reviews and commentaries. They're fine if treated as opinion, but they are far from scientific. There is no standard structure, and there may be no rational decision about which papers are included and which are ignored. The methods of the original studies may not have been examined, leading to reviews built on shoddy research, and anecdote and personal experience may creep in even with the best of intentions. If we're to properly review all the evidence on a particular treatment, we need to be scientific and, what's more, systematic.

Out with the old...

It was fairly obvious by the end of the 1980s that the traditional review system was the literature version of the Sir Lancelots, relying on years of experience rather than proof that stuff worked. But in 1992, two papers by Elliott Antman, Joseph Lau and colleagues were published which had a galvanising effect on the medical community. Both covered the common problem of heart attacks, and one in particular was devastating. It estimated that if all the evidence on the effectiveness of clot-busting drugs after a heart attack had been recognised and systematically collated when it first became available in the mid-1970s, thousands of lives could have been saved. Thousands of people lost their lives for want of a simple treatment, just because we didn't pull all the information together and say "It works. Use it."

In the last 20 years, systematic reviews have been slowly but surely ensuring errors like this are increasingly rare.

Searching

Once your question is formulated and your protocol is peer reviewed (that phrase again), you can start your systematic review. You search for everything you can find that answers your chosen question. But you have to do much, much more than take a look at the MEDLINE database or follow the citation trail of a paper you like. For all its brilliance, MEDLINE indexes less than half of the world's published medical papers - possibly only a third. It's important to use multiple search strategies.
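To give a flavour of what this looks like in practice, here's a minimal sketch of just one strand of such a search: querying PubMed (the public face of MEDLINE) through the NCBI E-utilities API. The search term is my own invention for the example; a real review would use a protocol-defined search string and repeat the exercise across EMBASE, the Cochrane CENTRAL register, trial registries and more.

```python
# One search strategy among several: query PubMed via the public
# NCBI E-utilities "esearch" endpoint. Requires the requests library.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query, max_results=100):
    """Return the total hit count and a list of PubMed IDs for a query."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": max_results}
    result = requests.get(EUTILS, params=params).json()["esearchresult"]
    return int(result["count"]), result["idlist"]

# Illustrative query only - not a real review's search string
count, ids = search_pubmed('thrombolysis AND "myocardial infarction" AND randomized controlled trial[pt]')
print(f"{count} records found - and remember, this is only part of the literature.")
```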

One of the main issues with making a proper search is trying to avoid what we call publication bias. As explained in my previous post, one of the questions asked of peer reviewers is "is the research original and will it move things forward?", and this question poses more of a problem than you might think. It's a well-known phenomenon that journal editors shun replications and papers that find a negative answer to the questions they ask (results that support the null hypothesis). Likewise, scientists are less likely to submit such papers in the first place because they know they're unlikely to stand a chance. The journal publication system likes novelty, big effects and pizazz. In order to find the whole truth, we may have to go into the "grey literature": the technical reports and conference proceedings that never even saw peer review, and those papers shut in drawers after rejection. Searching may involve asking experts to recommend papers you might otherwise miss, or even talking to the original authors about their influences and the studies they cited.

As well as publication bias, the major databases have a distinct bias towards the English language, and it's worth remembering that relevant papers may only have been published in minor local journals for lack of an available translation. It would be wrong to assume that all study authors speak fluent, highly technical English.

For general epidemiologists there can be another bias issue - the range of study designs is far wider than the clinical trials of evidence based medicine. In the hierarchy of trial design, the "randomised controlled trial" is considered the gold standard of evidence, yet not all of our questions in epidemiology can be answered with RCTs. Lack of evidence from an RCT does not mean lack of evidence, and study quality shouldn't only be judged by the quality of the design but by the quality of evidence. Some organisations now have different criteria for different types of study, so systematic reviews drawing on a mix of designs can be better assessed.

Choosing What to Include

So you've spent several months searching and you think you've found all the literature. But you still need to pare it down to the most relevant studies; you can't possibly use them all. Luckily, in your pre-written, peer-reviewed protocol you agreed the criteria for inclusion in your review. You may wish to include only studies with more than a certain number of subjects, for instance, or studies where all the subjects were within a certain age range or of a particular gender. You may want to include only studies that measured their outcomes in a particular way, or even their exposures in the same format. In studies that use qualitative data it may take a while to sort this all out, and in studies with mixed methods even longer. We'll be looking at different kinds of study quite soon, but in the meantime you'll have to trust me - it's complicated.

When you have a good idea of what you wish to include, the next step is to scrutinise the methods described in the papers and make sure each study was done well. If, using your critical appraisal framework, you find that a study was flawed, the usual procedure is to leave its results out of your analysis but include it in your general discussion. The findings of the remaining studies are extracted from the papers and placed on a data extraction form.
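As a toy illustration of those two steps - screening against your protocol's criteria and then extracting the survivors' findings - here's a sketch in Python. Every field name and threshold here is invented for the example; your protocol would define its own.

```python
# Hypothetical study records screened against made-up inclusion criteria,
# with the survivors' findings pulled onto a simple "data extraction form".
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    n_subjects: int
    min_age: int
    outcome_measure: str    # e.g. "30-day mortality"
    passed_appraisal: bool  # verdict from your critical appraisal framework
    effect_estimate: float  # e.g. a log risk ratio reported by the study
    std_error: float

def meets_criteria(s: Study) -> bool:
    """Inclusion rules agreed in the pre-written, peer-reviewed protocol."""
    return (s.n_subjects >= 50
            and s.min_age >= 18
            and s.outcome_measure == "30-day mortality")

def screen_and_extract(studies):
    extraction_forms, discussion_only = [], []
    for s in studies:
        if not meets_criteria(s):
            continue                    # outside the protocol: set it aside
        if not s.passed_appraisal:
            discussion_only.append(s)   # flawed: discuss it, don't analyse it
            continue
        # the "data extraction form": just the fields the synthesis needs
        extraction_forms.append({"title": s.title,
                                 "effect": s.effect_estimate,
                                 "se": s.std_error})
    return extraction_forms, discussion_only
```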

The Bottom Line

At the end of all this searching and sifting, you hope to come to some kind of conclusion. You want to know whether a treatment works, whether it's feasible (cost-wise or practically), whether it's appropriate, and whether the effect size makes it all worthwhile. Your conclusions will be shaped by the data the original studies used. If the studies all used similar quantitative data, the most obvious way to conclude is a meta-analysis - a statistical exercise we'll talk about in part three of this series. If the data are quantitative but not similar, the best you can do is write a narrative summary. For qualitative data you can perform something called a meta-synthesis.
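To give a taste of what part three will cover, here's a minimal sketch of the simplest pooling method, a fixed-effect inverse-variance meta-analysis: each study's effect estimate is weighted by the inverse of its variance, so more precise studies count for more. The numbers are invented, and a real meta-analysis would also check heterogeneity and consider a random-effects model.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted pooled estimate and its 95% confidence interval.
    effects: per-study estimates on an additive scale (e.g. log risk ratios);
    std_errors: their standard errors."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Three hypothetical trials, reported as log risk ratios
log_rr = [-0.22, -0.15, -0.30]
se = [0.10, 0.08, 0.15]
pooled, (lo, hi) = fixed_effect_pool(log_rr, se)
print(f"Pooled risk ratio is about {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```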

The results of a systematic review often come with a kind of commentary at the end which puts the papers in context. You might want to discuss the methods and similarities between the studies, the possible play of chance, and the likelihood of bias.

Reviewing The Reviewers

Just like any other piece of research, a systematic review can be done badly. The excellent PDF overview from Bandolier suggests keeping the following questions in mind:

- is the topic well-defined?

- was the search for papers thorough?

- were the criteria for inclusion clearly explained?

- were the studies assessed by reviewers in a "blinding" system? (See my previous post on peer review)

- did the included studies report similar effects?

- was the play of chance taken into consideration?

- at the end of it all, are the recommendations based firmly on the evidence presented in the review?

All scientific research, systematic reviews included, is flawed. It will never be perfect. Just as science strives to find "the least wrong answer", it must strive for "the least flawed methods". Systematic review helps in that aim, but the same humanity that brings bias and cock-ups is also the humanity that brings meaning to the results in a table, that applies that review to the 9-year-old with leukaemia, or the 70-year-old with bronchitis, or the 59-year-old with a dodgy ticker sitting on the examining table. As with all Evidence Based Medicine, it's the balance between graphs, tables, reviews, data and a human being that makes it work.

In the final part of my series on reviews, I'll be looking at meta-analysis and celebrating a special birthday.


One Response to ““Burn Your Textbooks””

  1. Lee Turnpenny

    'Lack of evidence from an RCT does not mean lack of evidence, and study quality shouldn't only be judged by the quality of the design but by the quality of evidence.'

    Not sure I follow. What is 'the quality of evidence' dependent upon?
