1. RCTs: the great deception

These commentaries are based on Dr Gillman’s peer reviewed scientific papers, see Publications

Gold standard or fool's gold?

Sigh no more, friends, sigh no more, RCTs were deceivers ever,
One foot in SEs and one unsure, to science constant never*
The deceptive dogma of RCTs as the gold standard has come to overshadow other forms of evidence and has achieved a hegemony, controlling clinical practice via EBM and guidelines, which have become the de facto standard.
One might regard RCTs, EBM, and guidelines as a terrible trio.

Disparate lines of evidence have propelled me to the view that the preponderance of RCTs, in both their quantity and the breadth of their application, has become a powerful hidden negative influence on thinking and innovation in science.

If that seems a preposterous proposition, consider for a moment the hundreds of thousands of RCTs (at a cost of billions of dollars) that have been carried out, and then try to identify even a few that have produced a significant change in knowledge or practice. It is immediately obvious that such rare beasts constitute only a small fraction of one per cent of all the RCTs ever done. Nor are RCTs a cost-free, problem-free activity: patients pay the price as subjects of worthless trials.

More particularly, RCTs are the dominant influence on the practice of doctors. This has resulted from the ubiquity and disproportionate influence of the guidelines produced from RCTs and meta-analyses (mostly industry funded).

There are several problems with RCTs that have widespread ramifications; these are especially harmful because they are largely unrecognised.

  • First, RCTs are credited with much greater value and authority than their scientific credentials deserve
  • Second, they provide an easy route for lazy thinkers to adopt rote practices in clinical medicine which devalue the judgement and experience of doctors
  • Third, they are unthinkingly adopted when other methodologies are more appropriate, especially because RCTs are divorced from real science, cause-and-effect relationships, and the necessary processes of basic research

Control of RCTs by pharmaceutical companies, and of the data they generate, has distorted the balance of science: the medical authors of these frequently ghost-written papers, often produced by medical writing companies, are commonly unable to see the raw data from the multiple centres that participate in such trials.

A large proportion of all RCTs performed are funded and controlled by pharmaceutical companies, which accounts for the fact that the essential attendant techniques and methodologies employed have not been developed in step with advancing theory and knowledge.  This is exemplified by, inter alia, the continued use of several inappropriate and poor-quality rating scales, such as the Hamilton rating scale — it is obviously not in the interests of drug companies to fund research into rating scales when everybody accepts the Hamilton.

Fifty years of failure

Medicine and psychopharmacology have been utilising the RCT for 50 years, and yet a summary of the work done on a recent antidepressant, vortioxetine, comes up with the following verdict from an eminent reviewer. De Giorgi [1], a Cochrane centre reviewer, said:
‘The clinical significance of these findings is difficult to interpret because of the very poor quality of the evidence supporting them’. Not just ‘poor’, De Giorgi says, but ‘very poor’: ‘comparing vortioxetine with … SNRIs the quality of the evidence was extremely low’.
After fifty years they still cannot do it well: if that is not an indictment of the abject failure of RCTs, then what is?
To me, this is beyond belief.

Key questions

Therefore, the key questions for the incisive analyst become: do RCTs have any special epistemic validity, excellence, or superiority? How much value should we place in RCTs? Are their results demonstrably relevant and beneficial to ‘real-world’ patients? Do they translate into predictable and meaningful short-term or long-term treatment outcomes?

Some may be surprised that the answer to all these questions is a decisive ‘no’; or, at best, that there is no good evidence to support these contentions.

RCTs can be valuable for making careful comparisons between one drug and another in relation to side effects, and to some extent for establishing relative efficacy. However, in 50 years little reliable, replicated information concerning relative efficacy has emanated from RCTs.

For many other purposes they are valueless, unsuitable, or ineffective.

There are many inadequately explored and contentious epistemological, methodological, and statistical considerations concerning RCTs [2-4].

No lesser an authority than Hill himself pointed out that you need neither randomisation nor statistics to analyse the results, unless the treatment effect is very small [5].
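Hill's point can be illustrated with a standard power calculation (a sketch only: the two-sided α = 0.05, 80% power, and the example effect sizes are illustrative assumptions, not figures from Hill). The smaller the true effect, the more patients, randomisation, and statistical machinery a trial needs before it can detect anything:

```python
from math import ceil
from statistics import NormalDist  # Python standard library

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-arm trial comparing means,
    using the usual normal approximation: n = 2 * (z_a/2 + z_b)^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for two-sided 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A dramatic effect (d = 2, penicillin-sized) needs only a handful of patients;
# a marginal one (d = 0.2, typical of antidepressant trials) needs hundreds per arm.
for d in (2.0, 0.5, 0.2):
    print(f"d = {d}: {sample_size_per_arm(d)} patients per arm")
```

On these assumptions, an effect of d = 2 needs about 4 patients per arm, whereas d = 0.2 needs nearly 400: when a treatment effect is obvious at the bedside, elaborate statistical apparatus adds little.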

Hill himself made a point of endorsing Claude Bernard’s view that there is ‘no qualitative epistemic difference between experiment and [properly scientific] observation’ [i.e., clinical science and clinical experience].

The commentaries gathered together under the umbrella of this section explore these and related issues in detail.

Last thoughts

Judea Pearl received the Turing Award, computing's equivalent of the Nobel Prize, not long ago for his pioneering work on the theory of scientific causation. He emphasised that without causal explanations and causal mechanisms science is nothing: understanding and advances simply cannot be made without considering these foundational essentials.

In stark contradiction, RCTs exist in a kind of ‘post-truth’ world: they can be paraded without having any relationship to knowledge of basic science, or even to reality. For instance, RCTs and meta-analyses have been done concerning parapsychology; here are examples from the many I could cite [6-8].

For anyone who has read Hume on miracles, or is familiar with Bayes’ theorem, it must be clear that careless utilisation of the RCT is a perilous practice, all the more so because it is combined with a simplistic and naïve interpretation of P values.
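The Bayesian point can be made concrete with simple arithmetic (an illustrative sketch; the prior, power, and α values below are assumptions chosen for illustration). A ‘significant’ result at p < 0.05 does not mean there is a 95% chance the effect is real: the posterior probability depends heavily on how plausible the hypothesis was beforehand.

```python
def post_prob_real(prior, power=0.80, alpha=0.05):
    """Posterior probability that a 'statistically significant' finding
    reflects a real effect, via Bayes' theorem:
    P(real | sig) = power*prior / (power*prior + alpha*(1 - prior))."""
    true_pos = power * prior          # rate of real effects flagged significant
    false_pos = alpha * (1 - prior)   # rate of null effects flagged significant
    return true_pos / (true_pos + false_pos)

# The less plausible the hypothesis (parapsychology being the extreme case),
# the less a p < 0.05 result tells you:
for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior}: P(real | p < 0.05) = {post_prob_real(prior):.2f}")
```

On these assumed figures, a significant result for a 50/50 hypothesis leaves a ~94% chance the effect is real, but for a hypothesis with a 1-in-100 prior the chance is only ~14%, which is Hume's argument about miracles in numerical form.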

A parody by KG

Sigh no more, friends, sigh no more, RCTs were deceivers ever,
One foot in SEs and one unsure, to science constant never
Or “Sigh no more, Ladies, sigh no more, Men were deceivers ever,
One foot in sea and one on shore, To one thing constant never.”
—Shakespeare, Much Ado About Nothing

No code or guidelines can ever encompass every situation or replace the insight and professional judgement of good doctors. Good medical practice means using this judgement to try to practise in a way that would meet the standards expected of you by your peers and the community.

“integrating … the best available external clinical evidence from systematic research…[with] the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice… [without which] even excellent external evidence may be inapplicable to or inappropriate for an individual patient.” [9].

See also Deaton and Cartwright [10]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6019115/

References

1. De Giorgi, R., Vortioxetine for depression: the evidence for its current use in the UK: COMMENTARY ON… COCHRANE CORNER. BJPsych Advances, 2019. 25(1): p. 3-6.


2. Feinstein, A.R. and R.I. Horwitz, Problems in the “evidence” of “evidence-based medicine”. The American journal of medicine, 1997. 103(6): p. 529-535.


3. Thompson, R.P., Causality, mathematical models and statistical association: dismantling evidence-based medicine. J Eval Clin Pract, 2010. 16(2): p. 267-75.


4. Naudet, F., et al., Has evidence-based medicine left quackery behind? Intern Emerg Med, 2015. 10(5): p. 631-4.


5. Hill, A.B., The Environment and Disease: Association or Causation? Proc R Soc Med, 1965. 58: p. 295-300.


6. Harris, W.S., et al., A randomized, controlled trial of the effects of remote, intercessory prayer on outcomes in patients admitted to the coronary care unit. Arch Intern Med, 1999. 159(19): p. 2273-8.


7. Bierman, D.J., J.P. Spottiswoode, and A. Bijl, Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology. PLoS One, 2016. 11(5): p. e0153049.


8. Sarraf, M., M.A. Woodley, and P. Tressoldi, Anomalous information reception by mediums: A meta-analysis of the scientific evidence. EXPLORE, 2020.


9. Sackett, D.L., et al., Evidence based medicine: what it is and what it isn’t. BMJ, 1996. 312(7023): p. 71-2.


10. Deaton, A. and N. Cartwright, Understanding and misunderstanding randomized controlled trials. Soc Sci Med, 2018. 210: p. 2-21.
