A very interesting article on the BBC News website this morning:
Andrew Booth, from the School of Health and Related Research (ScHARR) in Sheffield, is assessing the proportion of modern treatments that are “evidence-based” – supported by “randomised controlled trials”, which, if run correctly, give the best view on the value of a drug or device.
The article, drawing on the work of Andrew Booth from the School of Health and Related Research in Sheffield, mentions cases in which there is no good evidence regarding the effectiveness of the medical measures being used, as well as cases where the evidence actually shows the measures are not helpful. The one explanation it offers is that there is now so much research being done that doctors cannot possibly keep up with it.
This immediately made me think of the presence of homeopathic pseudo-remedies in chemists. Not too long ago my daughter was sick and a doctor wrote out a prescription for several different kinds of pills: when I went to the chemist I found out that half of them were homeopathic placebos. And the practice here in Poland isn't limited to prescribing homeopathic substances. Much of the advice attached to common ailments is little more than old wives' tales – tales such as the power of vitamin C or the danger of sitting in a draught. Yet doctors perpetuate them. To the explanation already mentioned – that in some cases this is due to ignorance – I would add several more. One is that doctors need to be trusted by their patients, and by going against commonly accepted beliefs they would also be undermining their own position. Another is that they count on the placebo effect, given the lack of actual treatments for the flu and other illnesses. A third that comes to mind straight away is that the companies which produce the useless treatments (where actual products are involved) will keep on pushing them as long as they can get away with it. I suspect that the 'evidence based medicine' research that Booth and others work on has revealed many other causes.
None of that, however, is what really makes the article interesting for me. Rather, what caught my attention is that cases such as this show that the distance between superstition and what we think of as 'best practice' really isn't all that great. For example, one definition of superstitions one hears is that they are beliefs not based upon evidence: but, then, if the research is right, much of medical practice isn't either! This isn't to say that doctors are just as bad as shamans, of course. But it does mean that we have to be more subtle in understanding the difference, or else end up making it seem as if they are. This is, of course, the general point that in criticising superstition we shouldn't set ourselves, or anyone else, up as perfectly rational.
In a sense the term 'evidence based' is somewhat misleading, as it makes it sound as if some practices lack all evidence whereas, I suspect, they do have evidence – it is just that the evidence is often very poor, having been gathered and 'analysed' using methods which are, at best, inappropriate. I am thinking of such things as word of mouth, tradition and anecdote – all sources of evidence in the weak sense that in some situations they do provide a sufficient reason to believe something. A better term would be 'based upon evidence that was gathered and analysed using the methods we currently think are best in these circumstances' – though that term has certain readily identifiable weaknesses as well. Pronounceability being one.