Revolution in a Cake Factory
This year is a special anniversary for anyone interested in health care. Twenty years ago, in a former cake factory in a rather unlovely part of Oxford, 80 or so people gathered from around the world to devise a polite revolution. None of them knew exactly why they had been invited, what they were doing, or how far their idea would go, but the organisation they formed in October 1993 is now synonymous with the topic of the third in our series on reviews - meta-analysis. The name they chose for themselves was The Cochrane Collaboration.
There was never anyone called Cochrane at the helm, which is a little confusing to a newcomer. The Cochrane in question had actually died in 1988. Archie Cochrane, born in Scotland in 1909, had trained at University College London and had served in the ambulance division of the International Brigade during the Spanish Civil War. He went on to serve as a Medical Officer in a few German POW camps during World War 2, and one thing in particular pained him during this time - it was obvious that some of the treatments he was giving (especially for the widespread tuberculosis problem) were actually harming people, not helping them.
Thirty years later, a similar thing would occur to a young British doctor working with Palestinian patients in the Gaza Strip. Iain Chalmers felt frustrated that some of the standard, traditional treatments were obviously not only failing to improve matters, but in some cases actively impeding people's progress. When Chalmers returned to the UK he discovered that Archie Cochrane, a man he'd never even heard of, had pondered this too.
Cochrane was appointed Director of the Medical Research Council's Epidemiology Research Unit in Cardiff in 1969, and in 1972 he published a monograph called Effectiveness and Efficiency: Random Reflections on Health Services. How, he asked, could we have an efficient National Health Service in the UK if nobody actually knew what worked? The monograph advocated the use of the Randomised Controlled Trial, something that Iain Chalmers had never been told about at medical school - to him, the concept was completely new. In his own words, reading Cochrane's monograph on his return to the UK, Chalmers felt as if he'd been "given a compass".
So What is Meta-Analysis?
"It's surely a great criticism of our profession that we have not organised a critical summary, by speciality or sub-speciality, updated periodically, of all relevant randomised controlled trials"
Archie Cochrane, 1979
But by 1979 this concept was not a bolt from the blue. Gene Glass, a statistician from the University of Colorado, had coined the term "meta-analysis" and subsequently demonstrated it as President of the American Educational Research Association in 1976, at a conference in San Francisco.
A meta-analysis is a kind of systematic review, and you can't have a decent meta-analysis without the bedrock of a decent systematic review, so all the checklists and processes we talked about in the last post still apply.
But meta-analyses are more specific. While systematic reviews are used for various types of study, meta-analyses are mostly used to statistically combine the results of very similar studies. Mostly these are randomised controlled trials, where one group is given a treatment (the intervention group) and compared with a "control" group, which is given either the current best treatment on the market or a "placebo", which should have no effect either way.
At the most basic level, the results of randomised controlled trials are usually given in terms of the risk ratio - whether the intervention group is twice as likely to have a pre-decided outcome, or half as likely, or whatever. For instance, a risk ratio of 0.5 would mean that the group receiving the intervention were half as likely to get a particular outcome (a rash, increased heart rate, whatever the chosen outcome to measure....) as the control group. If you think about it, you hear stuff like this on the news all the time. "A study has found that bottle fed babies are twice as likely to...." Blah blah. A meta-analysis will summarise a variety of outcomes such as risk ratios, hazard ratios and odds ratios (more on all these soon). The really clever thing is that these statistical summaries are weighted - the results of some studies, the largest and best designed, will carry more weight in the summary than the one on fewer than 100 patients in just one clinic, or the one that didn't have very clear methods.
Variety is The Spice of Life
There are a variety of issues to account for and many statistical acrobatics to get around them, of course - too many to get into here. But just as one example, consider variety itself. Even though meta-analysis is designed for similar studies, there will always be problems. However similar you think your studies are, they will vary in some ways - there will always be a certain amount of heterogeneity. Your trials may have taken place with different age groups, or different stages of the same illness, or with the clinicians in one trial deciding on different outcomes to measure than in another. Even the structure of health provision varies from area to area.... all of these issues may be a big deal when it comes to measuring your effect size - the measure of how strongly your intervention worked.
For instance, say you're testing a new drug and find it has a much bigger effect on pubescent kids and teens than it does on people over 50. In fact, in the over 50s the effect is negligible, so they may still be best off with the previous best treatment. For teens, however? It's perfect, and much better than what you were using before. Now if you have two studies, one performed on eligible A level students from the colleges in your area, the other performed on the members of the local golf club, there may be an issue with combining the effect sizes.
It's up to the analysts to decide if they think a group of studies is "combinable", but there are a couple of tests that can be done during meta-analysis to detect heterogeneity (one is called Cochran's Q test - a pleasing near-miss of a name, though it honours the statistician William Cochran rather than Archie). Depending on the results they will use either fixed effects modelling (assuming the true effect size is the same in every study and that any discrepancies are pure chance) or random effects modelling (acknowledging that the effect size varies between studies).
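To make the weighting and heterogeneity ideas concrete, here's a toy fixed-effect calculation in plain Python. The study figures are invented: each study contributes its log risk ratio, weighted by the inverse of its variance (so bigger, more precise studies count for more), and Cochran's Q measures how much the studies disagree beyond what chance would explain.

```python
import math

# Invented studies: (risk ratio, standard error of the log risk ratio).
# Bigger studies have smaller standard errors, so they get more weight.
studies = [(0.40, 0.40), (1.10, 0.25), (0.55, 0.10)]

log_rrs = [math.log(rr) for rr, se in studies]
weights = [1 / se**2 for rr, se in studies]  # inverse-variance weights

# Fixed-effect pooled estimate: weighted average of the log risk ratios.
pooled_log_rr = sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)
pooled_rr = math.exp(pooled_log_rr)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (y - pooled_log_rr)**2 for w, y in zip(weights, log_rrs))

# I-squared: roughly, the share of the variation between studies that
# reflects real heterogeneity rather than chance.
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"pooled RR = {pooled_rr:.2f}, Q = {q:.2f}, I-squared = {i_squared:.0%}")
```

With these made-up numbers the pooled risk ratio comes out around 0.59, but I-squared is high - a warning sign that a single "fixed" effect probably doesn't describe all three studies, and a random effects model would be the safer choice.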
Seeing the Wood for the Trees
At the end of your meta analysis, you can make something rather lovely like this:
This particular example is very stylised, as it's the Cochrane Collaboration's logo, but it's called a forest plot, and it's a visual, at-a-glance summary of your summary. The line in the middle there? It's called the line of no effect. It's the border at which you decide your treatment does nothing. Zilch. Nada. The horizontal lines each represent one of the individual studies you looked at in your meta-analysis - each is that study's confidence interval. To the left of the centre line? Yup, definitely something going on there - the further to the left a study sits, the larger the effect size measured. To the right? Nothing to see here, move on. Any study line that crosses the middle line means there was no obvious effect beyond what may have been chance. Notice how the length of the horizontal lines varies? The shorter the line, the bigger the study - the more subjects a study has, the less likely the findings are affected by chance, and the more likely the result is down to "the thing you did". Big is best, so we want nice short lines. The diamond at the bottom is the conclusion: having looked at the separate studies, did you find overall that The Thing worked, or not? Once again, left means an effect, and the further left, the bigger the effect.
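If you'd like to see the mechanics for yourself, each row of a forest plot is just a confidence interval mapped onto a horizontal axis. Here's a toy ASCII version in Python, with invented study numbers, working on the log scale: dashes for the confidence interval, "o" for the study's estimate, and "|" for the line of no effect.

```python
import math

def forest_row(rr, se, lo=-2.0, hi=2.0, width=41):
    """Render one study's 95% confidence interval for the log risk
    ratio as a single row of an ASCII forest plot."""
    y = math.log(rr)                      # work on the log scale
    ci_lo, ci_hi = y - 1.96 * se, y + 1.96 * se

    def col(x):                           # map a log risk ratio to a column
        x = min(max(x, lo), hi)
        return round((x - lo) / (hi - lo) * (width - 1))

    row = [" "] * width
    for c in range(col(ci_lo), col(ci_hi) + 1):
        row[c] = "-"                      # the confidence interval
    row[col(y)] = "o"                     # the study's point estimate
    row[col(0.0)] = "|"                   # the line of no effect (RR = 1)
    return "".join(row)

# Invented studies: a small one whose wide interval crosses the line
# (no clear effect), and a big one sitting well to the left of it.
print(forest_row(0.70, 0.45))
print(forest_row(0.55, 0.10))
```

The second row is short and entirely to the left of the "|" - exactly the kind of line you want to see if you're hoping your treatment works.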
The Cochrane Collaboration's logo is not, by the way, a fiction. It's based on a real study. Until the 1980s nobody knew for sure whether giving steroids to women about to give birth to premature babies would help the babies survive. Many studies had been done, but none taken on their own seemed especially conclusive. See the diamond? It did work. By quite a lot. In fact, giving steroids as routine during premature births has reduced infant deaths by around 40%. As Ben Goldacre says in his book Bad Pharma, "systematic reviews are.... quietly one of the most important and transgressive ideas of the last 40 years".
"An Obstetric Baader Meinhof Gang"
By the mid-1980s in the UK, Iain Chalmers had started work on a meta-meta-analysis - while meta-analyses had been performed for particular interventions in the past, Chalmers and 100 or so colleagues had decided to try to perform meta-analyses for their entire field of obstetrics. Despite the objections and naysaying of many doctors and clinicians who claimed it impossible to combine the results of RCTs (and some pretty unkind name-calling too), their efforts were finally gathered together in a two volume, 1300 page tome called Effective Care in Pregnancy and Childbirth. They must have felt no small sense of pride, especially considering that Archie Cochrane himself had described obstetrics as the "least scientific" area of medicine.
The research and development arm of the NHS was impressed enough with the work of Chalmers and his colleagues to fund them for three years in 1991, and the UK Cochrane Centre was set up in a disused cake factory in Summertown, Oxford. It was there, in October 1993, that the skeleton of the international collaboration was born. From that first colloquium, where they could offer no expenses but free coffee and biscuits, the Cochrane Collaboration has grown into an international, influential - yet quiet and modest - revolution.
As they come to celebrate their 21st annual colloquium a month from now in Quebec, The Cochrane Collaboration are considered the gold standard in assessing and reporting research across all fields of medicine, from schizophrenia, to dentistry, to vaccines and rare blood diseases. They are represented by around 28,000 volunteers and 500 paid staff in 120 countries. There are 14 Cochrane Centres, 17 centre branches, 53 review groups, 16 methods groups and 11 fields. 5,400 Cochrane reviews have been published by over 22,000 authors, and another 2,400 reviews are currently underway. Over half the world's population has one-click access to Cochrane reviews, either through licence or through the free access schemes for low and middle income countries. They are slowly but surely increasing their presence on social media and encouraging people from all walks of life to get involved by publishing plain English summaries of their findings. And all of this is done without any commercial funding.
It's almost impossible to really comprehend the difference The Cochrane Collaboration has made to modern medical practice in just 20 years. As people from across the world come together in Quebec this September, I shall be raising a glass to them all from my little house in South West England and looking forward to seeing what they can achieve in the next 20.