For years anyone involved in any kind of non-drug therapies has faced accusations that without the support of randomised clinical trials (RCTs) they lack an evidence base and so their work is worthless, quite possibly dangerous and even fraudulent.
That might be reasonable if the RCT were an accurate and reliable method for gathering an evidence base for all forms of treatment. But informed critics can point to a variety of ways in which RCTs can be unreliable and misleading. So does it really make sense to depend on them as the only way to distinguish between what is safe and effective and what is not?
I’m not even talking about multi-intervention treatments here – changing diet, setting exercise regimes, giving supplements where there is a deficiency, maybe herbal treatment. There is enough of a problem with the simple setup that RCTs are designed to investigate – the single pill vs a placebo.
What’s been happening with cholesterol-lowering statins recently is a good example. We’ve had them for 20 years; they are the best-selling drugs ever and they’ve been subjected to numerous large-scale trials. If RCTs were as reliable as their advocates claim, we should now be in no doubt that they are effective and safe.
Evidence based medicine: a movement in crisis
But we aren’t. Look at the challenge mounted to NICE’s plan to double statin usage – The statins wars: Another round and maybe some clarity – and the spat between one strong statin supporter and his critics that has been played out in the BMJ – Statin critics cleared. Top statin advocate’s knuckles rapped.
Critics have concentrated on various faulty aspects of RCTs and evidence based medicine in articles in the BMJ with headlines such as “Evidence based medicine: a movement in crisis?” or “Strengthening and Opening up Health Research by Sharing our Raw Data.”
The accusations levelled at RCTs include: being overly controlled by drug companies; the hiding of unfavourable data; marginal, clinically insignificant gains being inflated into the basis for a prescription; and a total mismatch between the rarefied, highly regulated conditions of an RCT and the messy uncertainties of dispensing drugs in the real world. Perhaps most important of all, RCTs have many shortcomings as a way of picking up harmful side effects.
The ultimate irony about the total reliance on RCTs is that the part of clinical practice where they are most important – setting up evidence based clinical guidelines – is where they result in elderly people being treated in a way that is virtually evidence-free. The potential for harm from unforeseen interactions here is probably far greater than in multiple non-drug treatments.
No one knows how being on seven drugs will affect you
It works like this. Once you’ve been diagnosed, your doctor will generally prescribe one or more pills according to guidelines based on RCTs that have found them more effective than a placebo. That can work very well. The situation gets trickier, however, as you get older and become more prone to a number of diseases.
You could be getting a couple of drugs for your raised blood pressure, a statin for raised cholesterol, a couple more for diabetes, a painkiller for your arthritis and a pill to lower stomach acid to cut the risk of gastric bleeding from the painkiller. Seven in all.
This is known as polypharmacy, and although the evidence base for prescribing each individual element of this daily cocktail may be excellent (leaving aside RCTs’ shortcomings), the evidence that your cocktail, or indeed one given to anyone else in the same position, is safe or effective is non-existent.
Not only are many drugs widely used on elderly people tested mainly on much younger subjects, but those trial subjects will usually have only a single condition. Companies rarely run RCTs on people taking two drugs, let alone seven or more.
One way out of this bind is to bring in other ways of gathering evidence – for instance, rather than testing a treatment against a placebo, comparing the safety and effectiveness of one treatment with another. Certainly that is something patients are keen to know. Do herbs or diet work best for this condition? Does a drug or exercise work better for that?
Here Rupert Sheldrake describes how Comparative Effectiveness Research (CER) might work. Sheldrake has published the piece previously and it is used with his permission.
COMPARING TREATMENT METHODS ON A LEVEL PLAYING FIELD:
OPEN-MINDED EVIDENCE-BASED RESEARCH
By Rupert Sheldrake
In medical research, the “gold standard” research methodology involves randomized double-blind placebo-controlled trials. These trials are helpful in distinguishing the effects of a treatment from the effects of a placebo, but they do not provide the information that is needed by many patients and health care organisations. For example, if I am suffering from lower back pain, I do not want to know whether drug X works better than a placebo in relieving this condition, but which kind of treatment I should seek out of the various available therapies: physiotherapy, acupuncture, osteopathy, and so on.
Probably the best way to answer this question would be a “level playing field trial” in which various possible treatments were compared with each other. Taking the example of lower back pain, in such a trial a large number of sufferers, say 1,200, would be allocated at random to a range of treatment methods. Five treatment methods could be included in the trial, plus one no-treatment group; thus for each group there would be 200 patients. The treatment methods could include physiotherapy, osteopathy, acupuncture, chiropraxis, and any other therapeutic method that claimed to be able to treat this condition. Within each treatment group there would be five different practitioners, so that the variability between practitioners could be compared in the statistical analysis.
The outcomes would be assessed in the same way for all patients at regular intervals after the treatment. The relevant outcome measures would be agreed in advance in consultation with the therapists involved in the trial. The data would then be analysed statistically to find out:
- Which treatment, if any, worked best.
- Which treatment methods had the greatest inter-practitioner variability.
- Which methods were the most cost-effective.
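As a rough illustration, the allocation scheme described above can be sketched in a few lines of code. This is a minimal sketch, not part of Sheldrake’s proposal: the arm names (the fifth treatment arm, “homeopathy”, is an illustrative choice) and the fixed random seed are assumptions for the example.

```python
import random

# Hypothetical sketch of the randomised allocation described above:
# 1,200 patients, five treatment arms plus a no-treatment arm (200 each),
# and five practitioners per treatment arm (40 patients each).
ARMS = ["physiotherapy", "osteopathy", "acupuncture",
        "chiropraxis", "homeopathy", "no treatment"]
N_PATIENTS = 1200
PER_ARM = N_PATIENTS // len(ARMS)        # 200 patients per arm
PRACTITIONERS_PER_ARM = 5                # treatment arms only

def allocate(patient_ids, seed=42):
    """Randomly allocate patients to arms, then split treatment
    arms evenly across practitioners."""
    rng = random.Random(seed)            # fixed seed for a reproducible example
    ids = list(patient_ids)
    rng.shuffle(ids)
    allocation = {}
    for i, arm in enumerate(ARMS):
        arm_patients = ids[i * PER_ARM:(i + 1) * PER_ARM]
        if arm == "no treatment":
            allocation[arm] = {None: arm_patients}
        else:
            per_prac = PER_ARM // PRACTITIONERS_PER_ARM   # 40 each
            allocation[arm] = {
                f"practitioner_{p + 1}":
                    arm_patients[p * per_prac:(p + 1) * per_prac]
                for p in range(PRACTITIONERS_PER_ARM)
            }
    return allocation

alloc = allocate(range(N_PATIENTS))
```

Because practitioners are nested within arms, the resulting structure lets the analysis compare outcomes both between treatment methods and between practitioners within a method, which is exactly the second question the trial is meant to answer.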
This kind of information would be of great use to patients and also to providers of health care such as the National Health Service.
A similar level playing field approach could be adopted for a variety of other common conditions, including migraine headaches and cold sores.
This would be genuine evidence-based medicine, the trials would be relatively simple and cheap to conduct, and the exercise would be pragmatic and theory-free.
Imagine, for example, that homeopathy turned out to be the best treatment for cold sores. Some might argue that this was simply because homeopathy brought about a stronger placebo effect than the other treatments. But if homeopathy unleashed a greater placebo effect than other methods, then this would be an advantage, not a disadvantage.
Outcome research of this kind used to be common in medicine before the Second World War, and it is still widely used as a research methodology in medicine and in other areas of research, for example in agricultural field trials. Standard statistical methods can be used in the analysis of data.
Level playing field outcome research on different treatment methods, including complementary and alternative therapies, would be helpful both for health care providers and for sufferers who are trying to decide which method of treatment to go for.