Guidelines: problems aplenty


Abstract

This critique is about the problems and adverse treatment consequences associated with medical treatment guidelines.  It explicates the many ways in which they can be, and are, misused, misleading, invalid, and a negative influence on clinical practice, patient welfare, and health policy.

The epistemological and methodological mistakes and misattributions associated with randomised controlled trials, and the resultant meta-analyses that are the major influence on guidelines, are discussed.

The influences on the probity of the scientific process, and the biases that are an inevitable concomitant of a profit-driven pharmaceutical industry, are enumerated, especially because doctors are naïve about fraud and about other adverse influences such as advertising and gifts.

The corrupted nature of the whole science-publishing enterprise is a further powerful negative influence on the declining quality of the massively increased number of papers, in journals that are often ‘pay-to-publish’, with lower standards of peer review, or sometimes frankly incompetent or faked peer review.  The major publishing houses are now making profits exceeding those of large pharmaceutical companies.

The net result of these influences is that both randomised controlled trials (RCTs) and meta-analyses (M-As) should be regarded with a great deal of caution, and should not be treated, as they currently are, as an inherently superior form of evidence.

Medical science and medical scientific publishing have suffered a precipitous decline in the last couple of decades and are in a state of crisis.

Introduction

Doctors who aspire to keep up to date with medical science and practice have been cast adrift in a sea of misinformation epitomised by the now all-pervasive and all-influencing guidelines.

A parody

They fuck you up, the bloody guidelines.   

   They weren’t designed to, but they do.   

They’re filled with faults, then add

   egregious extras, just for you.

My apologies to Philip Larkin, ‘This be the verse’.  Original here

https://www.poetryfoundation.org/poems/48419/this-be-the-verse

Guidelines are the ‘final common pathway’ communicating the data generated by randomised controlled trials (RCTs) to practicing doctors.  This commentary assembles the evidence that most RCTs, and therefore guidelines, are seriously flawed, and are often a negative influence on good clinical practice [1, 2].  First and foremost, this is because many RCTs are based on corrupted data and corrupted methodology, which emanate from systematic errors, manipulations, and distortions of clinical trial processes (mostly by ‘big pharma’), and from distortions of the resulting ‘evidence’.

As Orwell said:

In a time of universal deceit, telling the truth is a revolutionary act

The great majority of published trials are paid for by ‘big pharma’ [3], which distorts the greater part of published, and unpublished, medical science.

The Lancet editor, Richard Horton, expressed this scathingly: ‘much published drug-trial research is McScience’; it is advertising, not science.  He is among an increasing number of journal editors, ex-editors of leading medical journals, and prominent researchers who have commented on this [4-10].  Horton has reviewed the book by Kassirer (ex-NEJM editor) [11] and comments that ‘The best editors get fired’ [because making money and publishing good science are antithetical enterprises].  These editors came to the realisation that they were being duped, manipulated, and blackmailed into publishing misleading science through the prestigious publications they were in charge of.

These publications had been turned into cash-cows: in Horton’s words,

‘little more than information-laundering operations for industry’, and [Smith] ‘extensions of the marketing arm of pharmaceutical companies’ [7, 12]

This has included the collapse of a fundamental pillar: the independence of medical editors.  That collapse has resulted from commercial pressures, exerted either directly by the publisher, or via pharmaceutical companies withdrawing advertising or threatening not to purchase reprints [13].  There have been instances of the direct blocking of publication of research which was unfavourable to certain drugs [14].

Doctors and health-care professionals generally have an insufficient appreciation of how comprehensively corrupted are the data that are subjected to meta-analysis (M-A) and systematic review, which are the backbone of evidence-based medicine (EBM), and therefore how corrupted are the guidelines themselves — as Professor Ioannidis recently expressed it:

‘Few systematic reviews and meta-analyses [there are now hundreds of thousands] are both non-misleading and useful’ [15].

The crazy position is that there are more systematic reviews and meta-analyses published annually than actual original trials.

There are more systematic reviews and meta-analyses published annually than original trials

They are merely re-digesting the same material, which is sometimes execrably poor, usually producing different ‘results’ and interpretations.

Almost none of these data, relied on by reviews, M-As, and guidelines, have been independently replicated.

That which is not replicated is not science.

Big Pharma has played a major part in creating and steering both diagnostic practice and guidelines — guidelines are the coup de grace in this sorry saga of the mutilation and abuse of science.

There has been, rightly, a fuss, and much writing, about the various issues relating to the deceit, fraud, bias etc., detailed and discussed in this commentary.

However, right from the start, I want to emphasise how little difference this has made to the continuation and dominance of those same improper and dishonest practices.

On a web site called ‘International Network for the History of Neuropsychopharmacology’, some of the most famous names in the field from the last 60 years have aired opinions supporting what I say in this commentary.  Indeed, they paid me the compliment of asking me if they could re-publish the sister commentary to this one, which they did: its title is ‘Medical science publishing: A slow-motion train wreck’.

Many of the apparent improvements that have been claimed, concerning the probity of science research and publishing, constitute a charade and are no more than a splash of new paint on the facade of a decrepit building.

The influence of guidelines

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. R. W. Emerson, ‘Self-Reliance’, 1841

Guidelines magnify mistruths and imbalances and promote a narrow perspective on healthcare which puts excessive emphasis on drugs over other non-drug interventions, or no intervention.

They are problematic because the ‘little minds’ described in Emerson’s philosophical essay abound in the bureaucracy of health care, whether it be in hospitals, committees, government agencies, or the insurance companies responsible for deciding what healthcare measures will be provided and paid for.  Many people foresaw these problems decades ago [16], but few realized how extensive and negative such difficulties would prove to be, nor how easily they would be manipulated.

Doctors, or others, who find themselves in conflict about practice, as it relates to guidelines, may remind everyone concerned of the statement of the Medical Board of Australia (which is reflected in statements by most ‘medical boards’):

The practice of medicine is challenging and rewarding. No code or guidelines can ever encompass every situation or replace the insight and professional judgment of good doctors. Good medical practice means using this judgement to try to practice in a way that would meet the standards expected of you by your peers and the community.

They may also remind everyone concerned that, as several commentators have noted, the initial EBM tenets required:

‘integrating … the best available external clinical evidence from systematic research … [with] the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice … [without which] even excellent external evidence may be inapplicable to or inappropriate for an individual patient’ [17].  Patient experience and preferences are commonly under-emphasised in the synthesising of evidence for inclusion in clinical practice guidelines [18].

Practice guideline developers should give greater consideration and emphasis to clinical expertise, and patient choices and perspectives

Independent replication

Independent replication is the cornerstone of science: most medical research is not independently replicated and therefore cannot be regarded as reliable science.  Obviously, if you cannot inspect the original ‘raw’ data (see below), you cannot know if anyone has replicated it, nor whether it was sound data in the first place.  It is that simple.  No rationalisations or excuses can alter that.  Either you are doing science, or you are not: there is no such thing as ‘semi-science’.

Little medical research is independently replicated and therefore cannot be regarded as reliable science

Epidemic of meta-analyses: RCTs & coprophagia

Guidelines have proliferated like rabbits over recent decades, and they are the dominant influence over the treatments chosen by practicing doctors.  The ‘meta-analyses’ (I put that in quotes because many papers have doubtful credentials as true M-A) and ‘systematic reviews’ on which they are based have proliferated even more than guidelines; they have proliferated like rabbit-droppings over the publishing landscape, to a farcical extent (rabbits, which are coprophages, cannot compete).  In a recent paper, ‘The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses’, Professor Ioannidis has detailed how more systematic reviews of trials are published annually than actual new randomised trials; for antidepressant drugs alone, 185 meta-analyses were published between 2007 and 2014 [15].  Professor Ioannidis concludes:

The production of systematic reviews and meta-analyses has reached epidemic proportions. Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted

Many meta-analyses are indeed industry initiated, organised, sponsored, and conflicted [19].

How better to describe this activity than as ‘intellectual coprophagia’?

The meetings of senior doctors, to craft both the Diagnostic & Statistical Manual (DSM) and most guidelines, have been heavily funded by drug companies.  Large numbers of eminent American doctors were wined, dined, and handsomely remunerated to attend resorts in Palm Springs and like venues, to thrash out guidelines for the use of SSRIs, Xanax, Risperdal, Seroquel … you name it.  This has been labelled ‘disease-mongering’, i.e. defining things as diseases in order to legitimise using drugs to treat them [20-27].

Guidelines now unjustifiably impose themselves on doctors who justifiably may not agree with them.  That has been termed ‘intellectual imperialism’ [28]; I suggest ‘intellectual fascism’ is a more apposite term.

By the way, there are multiple guides to guidelines — honestly [29-33].

Which set of guidelines do you then choose to follow?  One might facetiously ask, ‘is there an evidence base for deciding which guideline has the best evidence base’?

What evidence is there for deciding which guideline has the best evidence?

Guidelines are contaminated by having expert panel-members who have financial ties to drug companies, even though the Institute of Medicine long ago recommended that no such people should be on the guideline-panels [34, 35].  Even if panel-members are truly independent, their main currency is still corrupted RCT data, and no-one can overcome that problem, any more than can the statistical legerdemain of meta-analysis — garbage in, garbage out (see below).

There are other good reasons, in addition to the problems with RCTs, to suppose that the evidence-based medicine (EBM) enterprise is diseased from the roots to the shoots [28, 36-39].

Guidelines have morphed.  They may have been intended by some proponents, as exactly that, guides: the sort of kind advice that a senior colleague might give about a difficult case.  But they have been seized on by the simple-minded, the lazy, the authoritarian, the managers, the media, and even politicians, as if they were diktats — and that is how one sees them being applied to many patients.

This is a complex topic to deal with and understand.  It involves an understanding of history, of how businesses work, and of how medicine works (I refer particularly to the vested interests of specialists and experts), and much else besides.

The more complicated my field appears to be, the greater is my prestige as a supposed expert in it

That understanding can only be attained through wide-ranging experience of medicine and life, and extensive reading.  Few doctors have the time to do that, except for those like me who are enjoying a comfortable retirement in the sun, which is setting on the age of the polymath.

The books listed below are, in my view, the indispensable background to enabling people to see and understand the big picture.  I will simply add that, as a pharmacologist with a sceptical attitude, I am certain that vast numbers of people are being treated with expensive drugs that produce little or no benefit but have many poorly documented and unpublicised ill effects.

Do we need to be reminded that adverse reactions to drugs, and drug-drug interactions, are among the leading causes of hospital admissions and deaths [40-44]?

I am reminded of Shaw’s words:

‘When a stupid man is doing something he is ashamed of, he always declares that it is his duty.’

I expect my readers can translate that into ‘guideline-speak’.

Recommended books

This is a good point at which to recommend books relevant to this subject: I recommend these because they are written by scientists giving an informed view of the subject.  These individuals continue to attract a considerable degree of opprobrium: powerful groups do not like the truth being told.

‘The further a society drifts from truth, the more it will hate those who speak it.’ Orwell

Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (Faber and Faber, 2013). Ben Goldacre, Senior Clinical Research Fellow, Centre for Evidence-Based Medicine, University of Oxford.

Pharmageddon: Professor David Healy. Hergest Unit, Bangor, Wales (the best pun title I can remember).

Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare. Professor Peter C. Gøtzsche, Danish physician, medical researcher, and leader of the Nordic Cochrane Centre at Rigshospitalet in Copenhagen, Denmark.  He co-founded the Cochrane Collaboration and has written numerous reviews for it.

Psychiatry Under the Influence: A Case Study of Institutional Corruption.  Professor Lisa Cosgrove and Robert Whitaker (a medical writer, and director of publications at Harvard Medical School); both are fellows at the Edmond J. Safra Center for Ethics, Harvard.

The Truth About the Drug Companies: How They Deceive Us and What to Do About It (Random House, 2005). Marcia Angell, M.D., former Editor-in-Chief of the New England Journal of Medicine (she stepped down on June 30, 2000); she is a Corresponding Member of the Faculty of Global Health and Social Medicine at Harvard Medical School and a Faculty Associate in the Center for Bioethics.  This is the only one on this list that I have not read myself.

Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming.  Professor Naomi Oreskes and Erik M. Conway.  This is a more general historical overview, covering tobacco, climate denial, etc., which gives a better impression of the enormity and persistence of these big-business tactics.

Continuing to read literature which simply agrees with what you already think is not a priority; reading material which disagrees with what you think is what good scientists do.  I do not make a habit of reading books like this, since I have already made most of these points myself.

However, when I decided I would write a commentary about guidelines it was necessary to read or re-read these texts — they are good.

Seriously corrupted data — a core problem

The serious and persistent problems surrounding conflicts of interest and the evidence relating to guidelines are undeniable, as evidenced by many recent reviews [31, 45-47], which indicate, as suggested above, that little has changed over the last decade or two, despite the kerfuffle.

These problems relate, first and foremost, to the appropriation, hiding, and distorting of patient data by ‘big pharma’ (see below), as well as to the conflicted handling, and subsequent misuse, of the third-rate science that underpins most of the clinical-trial base, and thus the ‘evidence-based medicine’ enterprise.

This has been fuelled by the massive financial power imbalance in the medical system (pharmaceutical companies have the money).  It has been powerfully catalysed by the pusillanimous acquiescence of the medical profession in allowing drug companies to take over the whole trial process, including the actual data — that is, incidentally, a glaring ethical betrayal of patients: but few seem to have commented on that, or even noticed it.

Allowing the partisan drug industry to sequester the data and refuse to let (even their own) expert ‘authors’ examine it, was a serious tactical error (possession is 9/10 of the law).

It is hard to respect the medical professionals who have colluded in this process, a proportion of whom are undoubtedly wicked, greedy, self-aggrandising and dishonest, even though most convince themselves otherwise.

What do I mean by seriously corrupted data?  The concise answer is this: data that any seasoned observer has good reason to suspect are unreliable, and which are not subject to examination and checking by others, nor reproducible (see e.g., Pharmageddon).

If I report that a patient I have assessed was ‘suicidal’ this means little if I do not record exactly what I asked the patient, and what they replied.  Needless to say, it means even less if I refuse to show my case records to anybody else, and simply justify my opinion by saying ‘because I say so’.  But that is what big Pharma is still getting away with.

There is more to it than that: for instance, if the patient does not have a relationship with me, and does not trust me, then they are unlikely to answer truthfully about suicide, for fear of being locked-up.

I am going to have to give a few examples relating to corrupted data here, because, although endless examples and details are in the references and books cited, many readers will not get around to looking at them.  Since these are crucial evidential material, I will give details on one or two, because I can hear some of my colleagues saying, ‘come on Ken, surely you are overstating the case here, it’s only a few bad apples etc.’ If only …

Blumsohn was the doctor at Sheffield who lost his job for attempting to insist on seeing the raw data for the tables in the ghost-written paper he was presented with as ‘author’.  He was not prepared, as most are, to sign off on it.  His dean, Eastell, another co-author, did sign off on it, and subsequently appeared before the General Medical Council because he said he had seen the ‘raw data’ when he had not [48]; the reference has the details.

Documents revealed in another court-case showed a senior company executive commenting on the [established fact of] hiding of adverse-event data from a drug trial, saying in an internal company email, ‘if this comes out, I don’t know how I will be able to face my wife and children’ — one imagines this was a superficial and self-serving mea culpa, but no less revealing for that.

It seems that many companies have been keeping research documentation ‘offshore’, which impedes access to it by legal processes — but such data could only be incriminating if they revealed ‘misrepresentation’, lying, cheating, whatever.  It is an instance of ‘excusatio non petita, accusatio manifesta’, or in French, ‘qui s’excuse, s’accuse’ [he who excuses himself, accuses himself].

I have previously commented on the chicanery involved in the Risperdal trials, and would only repeat here that the much-cited meta-analysis by Leucht [49] failed to cite the classic Huston paper that dissected the deceit pervading the Risperdal trials [50]: I asked Leucht why he had not cited Huston, and he replied that they simply did not know about it.  How hard did they look?  If I can find it, in my disadvantaged and isolated situation here in tropical North Queensland, how come a professor at a major European university cannot find it?  You see what you want to see and forget what you do not want to remember.

Anyway, the extensive dishonesty involved with Risperdal (and many other drugs [51]) is documented elsewhere and I have lost track of the number of successful legal actions against pharmaceutical companies in relation to this.  They must have paid out more than a billion, by now.  Ah well, it’s just the cost of doing business.  Google it, it will astonish you.

Look at the references for a myriad of further examples: when commenting on this sort of thing one feels like a hawk attacking a flock of starlings; there are so many targets that there is a danger of not killing any of them.  I must add here this observation: those are precisely the tactics Big Pharma sometimes uses.  Flood the literature with ‘your stuff’ and the contrary view simply gets snowed under and lost — that is exactly how they dealt with Barry Marshall (Helicobacter, Nobel prize, remember?) to maintain sales of the billion-dollar blockbuster anti-ulcer drugs that still had a while to run under patent.  Eventually the more effective, life-saving, long-term cure, antibiotics, displaced them: eventually, after many more deaths — it seems collateral damage is acceptable not only in the military.

RCTs: gold standard or fool’s gold?

Where the outcome at issue is at all substantial then not only is randomisation unnecessary, so also is the use of any formal statistical test of significance.

Sir Austin Bradford Hill 1965.

Control RCTs, control clinical practice

Control RCTs and the data they generate, keep the data to yourself (preferably offshore, along with your tax shelf-companies), and you control clinical guidelines and clinical practice: talk about a fait accompli.

The dogma of RCTs as the gold standard has been made to over-shadow other forms of evidence: therefore, controlling clinical practice has become a one-step process.

Therefore, key questions for the incisive analyst become: Do RCTs have any special epistemic validity, excellence, or superiority?  How much value should we place in RCTs?  Are their results demonstrably relevant and beneficial to the typical patient?  Do they translate into reliable and meaningful short-term or long-term treatment outcomes?

These are crucial questions which have never been satisfactorily addressed, even though they were raised by eminent statisticians right at the start of the whole EBM/RCT endeavour.  As Worrall discusses [52], proponents of EBM advance an exaggerated view of the epistemic virtues of RCTs — here, we might note that Hill himself made a point of endorsing Claude Bernard’s view that there is ‘no qualitative epistemic difference between experiment and (properly scientific) observation’ [i.e., clinical experience].

There is no qualitative epistemic difference between experiment and observation

The eminent Australian professor, Gordon Parker, argued, some time ago, that there are major ‘limitations to ‘level 1’ evidence derived from randomised controlled trials … which are no longer producing meaningful clinical results’ [53], and that paper, and others [54-58], are entirely consonant with the major points raised herein.

One could make more of the epistemological, methodological, and statistical faults and problems concerning RCTs [1, 56, 59] [see especially Feinstein, 1997], but that is not the prime purpose of this commentary, other than to raise awareness and persuade readers that there are indeed serious problems which should have a major influence on how ordinary doctors regard the results of RCTs, and therefore, guidelines.

No lesser authority than Hill himself pointed out that you need neither randomisation, nor statistics to analyse the results, unless the treatment effect is small [60].  Remember that.

Anti-depressant — a meaning-deficient label

The term ‘antidepressant’ is ill-defined and largely meaningless (it is often assigned by the company to its own drug, without challenge from anyone, cf. mirtazapine, vortioxetine), and it sits uneasily with the recommendation to define drugs by their pharmacology, not by their claimed actions or benefits [61-64].

There is another point to be borne in mind.  The degree of symptom improvement that a drug must exhibit, in order to be approved by the FDA and ‘officially’ labelled as an ‘antidepressant’, is minimal.  Such drugs are assessed for effectiveness using (mainly) the poor and antiquated Hamilton Rating Scale for Depression (HRSD), or the MADRS.  One can easily see how small changes in symptoms that have nothing to do with the core pathology of depressive illness (anergia and anhedonia) are sufficient to get almost any drug with sedative or anxiolytic properties over that hurdle (e.g. see my commentary on quetiapine), even though it may have little or no effectiveness on the core changes that constitute the illness.

Look at the HRSD to see what I mean (PDF copy here).  Qs 4, 5, 9, 10, 11 & 12 might be improved by any anxiolytic/sedative — a change of only one gradation in each of those produces a 6-point improvement in the score, about double that needed to get a drug approved by the FDA as an AD.  Yes, incredible as that may seem to outside observers, it really is that silly [65].  See this [66] for an early indication of the huge reductions in HRSD score with diazepam alone (something like a 15-point reduction).

Also, note that there is not one single question in the HRSD assessing the key core symptom of anhedonia, and precious little for anergia either — absurd, totally absurd.
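
To make that arithmetic concrete, here is a minimal sketch (in Python).  The item numbers are those listed above; the roughly three-point drug-placebo margin is the approximate figure quoted in this commentary, used purely for illustration, not an official regulatory threshold.

```python
# A minimal sketch of the arithmetic described above (not an official scoring tool).
# Assumption: a one-gradation improvement on each of HRSD items 4, 5, 9, 10, 11 and 12,
# as in the text; the ~3-point drug-placebo margin is the approximate figure quoted
# in this commentary, not a formal FDA threshold.

sedation_sensitive_items = [4, 5, 9, 10, 11, 12]   # items plausibly improved by any sedative/anxiolytic

improvement_per_item = 1                            # one gradation change on each item
total_improvement = improvement_per_item * len(sedation_sensitive_items)

approx_drug_placebo_difference = 3                  # approximate HRSD-point separation cited above

print(f"Score change from sedation-sensitive items alone: {total_improvement} points")
print(f"Approximate drug-placebo separation in trials:    {approx_drug_placebo_difference} points")
print(f"Ratio: {total_improvement / approx_drug_placebo_difference:.1f}x")
```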

Deliberately dishonest coding

The data gathered in clinical trials are inevitably subject to interpretation and uncertainty.  Responses to a series of questions, artificially and rigidly constructed, asked by someone unknown to the patient, who is paid by a drug company to go around asking questions from a clipboard!

For the purposes of analysis, they are coded by someone.  Doctors have abrogated the responsibility for their lead role in trials: this someone is rarely the doctor who had responsibility for clinical care of the patient, but a technician at drug-company central office — they pay separate ‘clinical trials’ companies, which are set up specially to manage these things, to do this.  Having an arm’s-length separation facilitates plausible deniability.  A recent painstaking re-analysis of the infamous paroxetine study 329 illustrates many of these points [67].  And why ‘329’, or 3/29: were there 28 other studies we do not know about?

Furthermore, we know full-well that coding sometimes has been incorrect (or deliberately dishonest), and that suicidal thoughts, feelings and intentions were coded, during the analysis of results, as something different [67-71] — read ‘Pharmageddon’ for further details and references.  Therefore, when the results were presented, and written-up for publication by ‘ghost-writers’, who had nothing to do with the actual drug trial — they were probably not even on the same continent — no one at these ‘conferences’, neither the presenters nor the attendees, had any idea what had happened to actual patients.

The many doctors that associate themselves with such practices have either been duped or are dishonest.  They are traitors to science

The medical colleges and authorities have abandoned their ethical principles.  That is highlighted by the fact that the ‘famous’ KOL doctors who have allowed their names to be used as authors (front-men) of these kinds of papers have not been struck off the medical register for dishonesty or corruption.  Are we so inured to such behaviour that we have lost our capacity to be outraged by it?

It is routine practice for the doctors who participate in these trials, from various different centres, to be refused access to the original aggregated data; they only get to see the data after it has been coded by somebody else.  There are now numerous documented examples that this is done misleadingly, erroneously, or dishonestly, and that the practice continues [72, 73].  What has recently been put on the Internet in the name of ‘transparency’ is a token, because the data shared are not the original data, but the coded data.  Not the same thing.

That is a mockery of science.

An illustration

The way the pharmaceutical industry presented the benefits and side effects of SSRIs is an illustration of several of the above points concerning misleading manipulation of data and misclassification of side effects.  A major therapeutic effect (it is not a ‘side effect’) of SSRIs is to inhibit the pathways that lead to sexual climax (no RCT needed there, cf. Hill).  The minor effects on anxiety and mood are small by comparison (barely a 3-point difference between drug and placebo on the HRSD or MADRS).

See my note on citalopram from twenty years ago [it has been on the site, but not attached to a menu — it is now: which is a reminder to use the search facility].  There are a number of bullet points at the end, one of which points out that the average practitioner would not have been able to discern the difference between those on placebo, vs those on citalopram, at ‘endpoint’.

Anyway, the trials of these drugs claimed that inhibition of sexual climax was an uncommon occurrence — not so: I was using clomipramine to help premature ejaculation in the 1970s, before ‘Prozac’ even existed.  That is how well known the SRI effect on ejaculation was.  I shall not dwell on this here, but if there is anybody out there who still doubts how the relative prominence of SEs vs benefits has been turned on its head, they might be persuaded if they read the relevant section of Prof Healy’s book Pharmageddon.

That exemplifies well the methodology that was developed for maximising the trivial effects on ‘mood’, by using large numbers of patients to get a marginally significant statistical result (cf. citalopram, above, and more recently see Kirsch [74, 75]) whilst at the same time failing to ask appropriate questions to elicit side-effects, or ‘miscoding’ them [67, 76].

And long-term side effects?  Not our problem; it is licensed now, and it is up to someone else to look at those.

That is bad science, and it is deceitful science; it simply does not, and cannot, get more (how can one put it?): incorrect, erroneous, false, fallacious, duplicitous, mistaken, inaccurate, shoddy, corrupt, double-dealing, deceptive, deceitful, crooked, untrustworthy, fraudulent, misleading.

In short: it is as wrong as the parrot was dead.  For those interested in rhetoric that is an amusing example of ‘pleonasm’.

Statistics

The adage ‘lies, damned lies, and statistics’ has a long history, going back to at least the 19th century.  In The Life and Letters of Thomas Henry Huxley is his account of a meeting of the X Club, which was a gathering of eminent thinkers who aimed to advance the cause of science, especially Darwinism: ‘Talked politics, scandal, and the three classes of witnesses — liars, damned liars, and experts’.  Even more apposite for our time.

I start with this old adage because it has withstood the test of time, which is telling, and because modern information-laundering, in this post-truth world, has re-invigorated its potency and influence.

Here is a tiny sample of many references I could give, by eminent researchers, discussing misuse of statistics in a great proportion of medical studies.  Hardly surprising then that most published medical studies turn out to be wrong, as history indisputably demonstrates [77-83].

One recent review by a group of eminent statisticians [84] stated [of the use of such tests] ‘definitions and interpretations that are simply wrong, sometimes disastrously so — and yet these misinterpretations dominate much of the scientific literature’.

The American Statistical Association (ASA) has commented: ‘Statisticians and others have been sounding the alarm about these matters for decades, to little avail’ [85].

Meta-analysis, the phrenology of the third millennium

I am not a statistician; thus, I will merely content myself with pointing out the above references and mentioning that two of the prominent culprits are p-values and the procedure called meta-analysis, which is invariably applied in a pseudo-scientific manner.  I have previously described meta-analysis as ‘the phrenology of the third millennium’.  I have recently become aware that an eminent researcher from Yale pre-empted me by decades, with a better analogy: he compared it to alchemy, and his detailed criticism of it remains essential reading, two decades later [1, 86].  The 1997 reference is an exemplar of prescience and a ‘must-read’.

Meta-analysis forms the backbone of guidelines, where it reaches its pseudo-scientific zenith.  Elsewhere I have quoted Charles Babbage on this subject (GIGO — garbage in, garbage out).

A researcher, whose name is well-known in this field, recently said to me in a private email:

‘I have rather gone off ‘meta-analysis’ as it is mostly selective/rubbish data in – spurious certainty or continuing uncertainty out, whatever the sophistication of the statistical methods.  I include myself in this criticism by the way’.
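
As a minimal numerical sketch of the ‘garbage in, garbage out’ point: a standard fixed-effect, inverse-variance pooled estimate simply inherits any bias shared by the input trials, while its confidence interval narrows, manufacturing exactly the ‘spurious certainty’ described above.  All the effect sizes and standard errors below are hypothetical.

```python
# A minimal sketch of 'garbage in, garbage out' in a fixed-effect, inverse-variance
# meta-analysis: a bias shared by all the input trials is inherited by the pooled
# estimate, while the pooled confidence interval shrinks (spurious certainty).
# All effect sizes and standard errors are hypothetical.

import math

true_effect = 0.0        # suppose the drug actually does nothing
shared_bias = 0.3        # hypothetical systematic bias (e.g. selective outcome reporting)

# (biased effect estimate, standard error) for five hypothetical trials
trials = [(true_effect + shared_bias + noise, se)
          for noise, se in [(0.05, 0.15), (-0.02, 0.12), (0.08, 0.20), (0.01, 0.10), (-0.04, 0.18)]]

weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled estimate: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
print(f"True effect:     {true_effect:.2f}  (the shared bias is pooled, not removed)")
```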

The trials included in M-As have multiple problems [87, 88]: they exclude most of the patients that we treat in everyday practice — e.g., the old, those with mild, or particularly severe, illness, those with multiple conditions, those on multiple drugs, and, craziest of all, patients who are suicidal.  They may solicit subjects by advertisement, and many of them are now conducted in totally different cultures and settings in China, India, and elsewhere in Asia and Africa (some 80% of Chinese trials are thought to be ‘fabricated’).  Shi-min Fang [89] exposed scientific misconduct in his native China, for which he won the inaugural Maddox Prize in 2012.

RCT participants represent an atypical fraction of the real-world treatment population [31].

Methodology and heterogeneity

But, as if that was not enough, it is not valid to extrapolate from the averaged result of a non-homogeneous group [86], and then apply it to individuals not from that group, but who share a somewhat arbitrary descriptive similarity (a score on a rating scale).

I defy anybody to produce even a skerrick of evidence that the group of patients defined as having MDD by the DSM is likely to represent a patho-physiologically homogeneous group.

Drawing conclusions from, or extrapolating from, RCTs involving groups that, prima facie, cannot be assumed to be patho-physiologically homogeneous, is incorrect.  It is invalid science

This is such a fundamentally important scientific fact, that an understandable analogy is required.

Lots of people enjoy gardening: let us pretend that the patients are represented by the vegetables (non-homogeneous) in your garden; root vegetables? green vegetables? ‘fruity’ vegetables? etc. (define a vegetable, define depression — there is much mileage in this analogy).

Now then, you have got some super new fertiliser from the garden centre (organic, slow release, and terribly expensive) and you want to know if it improves the yield of your vegetable garden.  For aficionados of statistics, that is exactly why the famous statistician Fisher, of ‘Fisher’s exact test’ fame, developed his analysis of variance: it was to help measure the effect of fertilisers on crops at the Rothamsted agricultural research station in the UK.  Would you scatter this super new fertiliser around the garden, then see if your basket of mixed vegetables was a little heavier than before?  Or would you test the fertiliser on each separate type of plant, even though some of them look almost the same?

I hope it is obvious that, if the weight of your basket of mixed vegetables was only slightly greater with the new fertiliser, that would not prove all the plants were improving.  It might be only one of them was being helped a lot, and the rest not at all.  Indeed, it might be that one or two were poisoned by it, because it was the wrong balance of nitrogen and phosphorus, or too concentrated.  Whatever.

I trust that makes the point clear.  No qualifications in rocket-science are required.
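
For those who prefer numbers to vegetables, here is a minimal simulation of the same point; the sub-group sizes and effects are entirely hypothetical, chosen only to show how a modest average effect can be produced by a minority of responders while most of the group gains nothing, or is harmed.

```python
# A minimal simulation of the 'mixed vegetable basket' point: a modest average effect
# across a heterogeneous group can be produced entirely by one sub-group, while the
# rest gain nothing or are harmed. All sub-group sizes and effects are hypothetical.

import random

random.seed(1)

# (number of patients, true change in rating-scale points) for hypothetical sub-groups
subgroups = {"responders": (200, 6.0), "non-responders": (600, 0.0), "harmed": (200, -1.0)}

changes = []
for name, (n, true_effect) in subgroups.items():
    changes += [true_effect + random.gauss(0, 4) for _ in range(n)]   # add measurement noise

average = sum(changes) / len(changes)
print(f"Average 'treatment effect' across the whole group: {average:.1f} points")
# About one point on average, although 80% of the group derived no benefit or was harmed.
```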

RCTs represent science at an astonishingly incompetent level, yet that is what dominates drug research in psychiatry and ‘informs’ guidelines.  It is little better than the evidence for ‘Alt-Med’

Presentation is the key: ghost authorship the solution

Despite the fuss, ghost authorship in industry-initiated trials (i.e., most trials) is still common, perhaps even the rule [90-94].

The commissioning, timing, and placement of these ‘papers’ is orchestrated by …

… the marketing and sales divisions.

Because?  Timing, presentation, and placement (key journals) are the sine qua non of optimal marketing and sales.

The medical-writing companies ghost-write and orchestrate it; lastly, they get key authors on board, and presto …

PLoS Medicine and the NY Times got a raft of such documents [to do with medical-writing companies] made public (see here) in a court case.  Ginny Barbour, editor-in-chief of PLoS Medicine, said she was taken aback by the systematic approach [to generating ghost-written papers] of the [medical-writing] agency: ‘I found these documents quite shocking, … They lay out in a very methodical and detailed way how publication was planned’ [before the ‘authors’ ever got involved] [95].

Many doctors routinely take the credit for articles written this way.

Such doctors are frauds, cheats, and liars

But let us start this most serious of issues with something amusing.

A real ghost-author!

In the revealing saga of the Wilmshurst case (Wilmshurst being a man of probity), made well known because of Simon Singh and the UK libel-tourism story, it was revealed that, when Wilmshurst withdrew from authorship because the company refused to give him the original data, the official list of authors of the published paper included Anthony Rickards.

Anthony Rickards had died before the research was even conducted.

These unprincipled and unpleasant people then sued Wilmshurst for remarks he had made in academic good faith, about the limitations of the conclusions in the paper.  This gives everyone a bit of insight into the threatening and bullying which has a major spill-over effect on the willingness of academics to take on these kinds of people.  It is an insidious influence and totally antithetical to the scientific endeavour.  One can see the power of the self-censorship and self-selection effect here: why would a decent, mild-mannered, industrious, conscientious researcher want to get involved in that? those who do get involved may be a ‘different sort’ of person.  Probably not the sort of person that makes a good scientist.

See Wilmshurst’s 2020 video; it describes naked aggression, greed, fraud, dishonesty, criminality, and more: https://www.youtube.com/watch?v=awmWaOKLj9U

The bottom line

At the end of the day, the details substantiating the frequency, mediocre quality, and dishonesty of ghost-written material are contained in the references given herein.

What I would highlight is this, ‘the big picture’: one only needs to look at the blossoming of these specialist medical-writing companies, to whom the pharmaceutical companies farm out their ghost-writing tasks, to understand the mega-dollars involved, and how common the practice must be in order to sustain so many profitable enterprises.

The global medical writing market was valued at USD 3.4 billion in 2019, projected to be $8 billion by 2027

Next, look at the number of papers published under the names of doctors (‘key opinion leaders’, KOLs [96]) associated with these drugs.  You will find there are many academics who have been publishing papers ridiculously frequently (dozens per year), over prolonged periods of time.  You cannot possibly write ‘proper’ scientific papers at that rate — that tells anyone of perspicacity that these people are making a minimal, possibly negligible, contribution to either the research work, or the papers, that bear their prostituted imprimatur.

It is simple.  You do not have to be Einstein to work it out.

The medical establishment has done little to call ghost-writing doctors to account.  This is the most astonishing ethical failure, and betrayal of patients, perpetrated by my generation of doctors — we should be profoundly ashamed.

The next step

Another step in this deceitfully orchestrated enterprise is the unscientific manipulation of data using the statistical metric of the p-value and other statistical peregrinations.  I will not here describe what that means for non-scientists.  Many prominent names in science agree with me [82, 97-99]; I could have inserted one hundred references there from the last decade.  Yet, unbelievably, doctors have colluded with this and swallowed it in what can only be described as an uncritical and naïve manner.

It is also relevant to remind ourselves that statistical analysis is only needed to ‘show’ a difference when the treatment effect is small; we did not need statistics to realise that penicillin and chlorpromazine were effective drugs.  If complex statistics, and conflation of trials via M-A, are needed to show small treatment-effects of drugs — that covers drugs in psychiatry in recent times — then the effects are of minimal significance or usefulness, no matter what blandishments may be offered to contradict this.  It is that simple.

Lest anyone think I am going beyond my expertise in asserting this (being ultracrepidarian), I would remind them of the opinion of Sir Austin Bradford Hill — he of smoking-and-lung-cancer fame, and also the instigator of one of the earliest RCTs — who said of RCTs: ‘Where the outcome at issue is at all substantial then not only is randomisation unnecessary, so also is the use of any formal statistical test of significance’.
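
Hill’s point can be illustrated with a minimal simulation: a clinically trivial difference becomes ‘statistically significant’ once the sample is made large enough, although the effect itself never changes.  The assumed effect size (one rating-scale point, standard deviation eight) is hypothetical.

```python
# A minimal sketch of Hill's point: a clinically trivial difference becomes
# 'statistically significant' once the sample is large enough, although the effect
# itself never changes. The assumed effect (1 rating-scale point, SD 8) is hypothetical.

import math
import random
from statistics import mean, stdev

def two_sample_p(a, b):
    """Approximate two-sided p-value from a two-sample z-test (normal approximation)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
for n in (50, 500, 5000):                                    # patients per arm
    drug    = [random.gauss(1.0, 8.0) for _ in range(n)]     # tiny true benefit
    placebo = [random.gauss(0.0, 8.0) for _ in range(n)]
    print(f"n per arm = {n:5d}:  p = {two_sample_p(drug, placebo):.4f}")
# The true effect is identical in every run; only the p-value changes with sample size.
```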

Do RCTs translate to everyday practice?

Contrary to what is strongly contended by many, there is no sound reproducible science that would allow reliable conclusions that RCTs usefully predict everyday efficacy or long-term outcomes [100].  They most certainly do not predict long-term side effects.

The EBM approach, based on an insufficiently critical assessment of RCTs, promotes unjustified over-generalization by accepting that the outcomes of RCTs apply generally — unless there is a compelling reason to believe otherwise [87, 88, 101].  However, that is turning science on its head, and would certainly not have been accepted by Popper [102].

RCT evidence does not allow us to predict which small percentage of patients will experience these slight benefits — revisit the ‘vegetable garden’ analogy above. 

Generalizations (i.e. guidelines) that drugs should be used in a large but ill-defined target population are an invitation for poor clinical practice and over-prescribing [2, 100]

This issue is even more consequential and problematic when one is talking about prophylactic treatments, e.g. cholesterol-lowering drugs — that entails treating large numbers of people, many of whom would die without ever experiencing the condition being putatively prevented.  Thus, a large group of patients are exposed to a definite risk of side-effects without any possible benefit.
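
The arithmetic of that asymmetry is easy to sketch; all the rates below are hypothetical, chosen only to illustrate how prophylactic treatment exposes many people to side-effects so that a few may benefit.

```python
# A minimal sketch of the prophylaxis arithmetic. All rates are hypothetical, chosen
# only to illustrate the asymmetry between the few who can possibly benefit and the
# many exposed to side-effects.

treated = 10_000
baseline_event_rate = 0.02         # 2% would ever develop the condition if untreated
relative_risk_reduction = 0.25     # the drug prevents a quarter of those events
adverse_effect_rate = 0.10         # 10% experience some side-effect

events_prevented = treated * baseline_event_rate * relative_risk_reduction
people_with_side_effects = treated * adverse_effect_rate
nnt = 1 / (baseline_event_rate * relative_risk_reduction)    # number needed to treat

print(f"Events prevented:                         {events_prevented:.0f}")
print(f"People with side-effects:                 {people_with_side_effects:.0f}")
print(f"Number needed to treat (NNT):             {nnt:.0f}")
print(f"People treated who could never benefit:   {treated * (1 - baseline_event_rate):.0f}")
```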

Algorithm-guidelines, nurse practitioners

If doctors are pressured and constrained to practice within these guidelines, as they increasingly are, by their colleagues, health service managers and insurance companies, and by fear of litigation, then why have doctors?  All you need is managers and nurse practitioners checking that everyone is given the computer-generated-algorithm-guidelines that dictate treatment: in no time you will be able to dispense even with the nurse practitioners and get your treatment ‘instructions’ online and take your algorithm-generated script straight to the pharmacist.  After all, many people only get a 10-minute ‘medication-management’ appointment anyway.

Incidentally, a bit of history: this is not a revolutionary idea, but a return to the past.  The concept of prescription-only drugs is relatively new in the history of medicine.

And perhaps most sinister is the fact that patients worry that, if they did not accept the guideline recommended treatment, they will be refused reimbursement for any other treatment — now that truly is medical fascism.

Nice meeting you.  May I leave you in the care of ‘Siri’ for psych?  Anyone remember ‘Eliza’?

My personal experience

My personal experience, my understanding of common practice, the published literature, and the requests I get for opinions on treatment from around the world, lead me to the opinion that doctors continue to become more proscriptive and prescriptive — proscriptive (i.e., declining to prescribe a drug as a result of rigid adherence to guidelines) and prescriptive (authoritarian directiveness and unwillingness to consider the preferences of patients).  It is as if they have come to regard themselves as bound to guidelines — slaves to them, or guardians of them?  A bit of both, perhaps.

Doctors act as both the guardians and the slaves of guidelines; doctors should be the masters of guidelines

There is a disturbingly prominent vein of authoritarianism present that is in no way justified by the quality or certainty of the evidence, and which does not admit discussion, options, preferences, and flexibility [103, 104].

This is abhorrent ‘medical fascism’ and good doctors should have no truck with it, but some will lose their jobs because they try to stand up to it

The very existence and prominence of guidelines magnifies this authoritarianism, because guidelines provide a deceptive aura of authority and certainty.  This is mediated by a fundamentally flawed system of narrowly focussed ‘pseudo-evidence’ (sponsored clinical trials and their associated methodological flaws) digested via the non-scientific medium of the statistical procedure that aggrandises itself with the epithet ‘meta-analysis’.  Armoured with this false shield, our shining-medical-knights sally forth to do battle with mythical disease-dragons — fortunately some have shunned DSM, and the disease-mongering that has accompanied it [20, 21, 26].

I referred above to the fact that this was an immense and complex topic.  I must indicate what I mean by ‘mythical disease-dragons’.  Many informed commentators have noted how the internationally influential American Psychiatric Association manual, called DSM, has over the last few decades served to expand the definition of mental illness to encompass a substantial proportion of the population, thus legitimising and enabling the ‘medicalisation’ and insurance-remunerated administration of drugs to vast numbers of people (for instance, I think recent figures indicate something like 10% of American children are on medication for ADHD).

Issue of long-term therapy

A doubtful inference is made from RCTs (and amplified via guidelines) that modest short-term treatment effects over a few weeks, even if you accept those are meaningful, extend to long-term treatment and meaningful real-life outcomes (like a reduction in the suicide rate), as opposed to a small change on a rating scale, which is merely an interim proxy measure.  Indeed, if anything, the evidence points in the opposite direction: for instance, lithium has the least effect on short-term scores on the HRSD, but the greatest long-term reduction of suicide and hospitalisation [105-107].

These drugs are almost always given over a period of many months, often years.  Indeed, other types of evidence (and this speaks to the almost complete lack of external validity of guidelines) suggest that long-term treatment with most antidepressants (and antipsychotics) does not reduce long-term illness manifestations.  Statistics concerning such questions are complex, not always reliable, and much disputed.  Disability, the hospital readmission rate, and the suicide rate may be reduced little, if at all [108-115].  Those things are powerful evidence that the drugs have questionable long-term benefit for patients.  The long-term side effects are not in doubt, though.  It seems reasonable to suppose that there are substantive benefits for carefully selected severely ill patients; however, widespread use of these drugs probably means that a proportion of people who are being given them are being exposed to risks with little benefit.

That is not to say that ‘antidepressant’ drugs are ineffective for everyone.  Experience clearly demonstrates that severely depressed patients experience major benefits from some antidepressants.  And, even if SSRIs are not ‘antidepressants’, that is not to say they do not benefit some symptoms (viz. anxiety) in some people.

It is difficult to construct a sound argument that the evidence strengthens the case for using RCTs to guide treatment for the general population.  Much evidence supports the opposite point of view.

To listen and advise

Doctors are there to listen and advise, not to dictate and direct with insufficient real evidence, explanation, and discussion.  For me at least, it is a fundamental precept of medical practice that we listen and advise and resort to paternalism and authoritarianism as little as possible.

There are few circumstances in clinical medicine in which the underlying science is sufficiently good to confidently dictate one form of treatment over another.  In psychiatry, there are no circumstances in which the underlying science is sufficiently good to dictate one treatment over another.

Guidelines are furthering and fostering medical rigidity and authoritarianism

Guidelines may benefit from being less prescriptive, and their creators need to accept responsibility for how they are used and abused.  Then there are other issues, like their period of validity, a clear statement about when they are due for revision (sometimes missing), and what kinds of new evidence might invalidate them.  There is no established mechanism for questions and discussions with those who promulgate such ‘edicts’, nor criteria for judging who has valid authority and expertise to participate in issuing such edicts [1].  My inner atheist is smiling as it contemplates the myriad parallels between religious texts and guidelines: who decides which texts become an accepted part of the holy book? who are the anointed priests who determine these things? and should we hold a ‘Council of Trent’?  The parallels go on and on, but we must leave it there, despite the rich comic and satirical possibilities.

The creators of guidelines have a clear obligation to get out there and engage in dialogue with the people who are expected to use them.  Presently, the whole process bears too close a resemblance to a papal edict. 

The world of guidelines is rife with schisms, like the world of religions

The various churches have generally adopted the wisdom of claiming that their redemptive truths can only be verified in the [anticipated] afterlife — smart move.

Summary and conclusion

The world is unpredictable and our knowledge of it is imperfect: furthermore, ‘rules’ may be the product of imperfect thinking.  Therefore, they must sometimes be ignored and broken.  The frequency with which it is appropriate to discount rules will depend on the degree of imperfection in our understanding, and the degree of unpredictability of the material under consideration.  Sometimes our understanding will be sufficiently rudimentary that rules become pointless, or worse, misleading.

This commentary has looked at the various problems plaguing RCTs, M-As, EBM, and guidelines.  The foremost of these problems, of which practicing doctors will benefit from a keen awareness, is that the data behind RCTs and guidelines are seriously contaminated by secrecy, corruption, distortion, bias, and misapplied and poor science.

Major problems, of a directly science-related nature — epistemological, methodological, and statistical — are flagged herein, but not analysed in detail: to do that would require a book.  A prominent topic is the invalidity of extrapolating from patho-physiologically non-homogeneous trial groups: this afflicts most RCTs, and it hugely diminishes their value.

A doubtful and largely unsubstantiated inference is made from RCTs (and amplified via guidelines) that modest short-term treatment effects over a few weeks, often only estimated by proxy measures, extend to long-term treatment and meaningful real-life outcomes.

Guidelines are a gift to the intellectually lazy and are increasingly treated as inerrant texts with a quasi-religious authority that relieves doctors of their duty for personal thought, judgement, responsibility, and individual consideration — follow the guidelines, and that will relieve you of the necessity to make decisions for yourself.

Follow the guidelines and say your prayers and you will not be found guilty of any clinical misdemeanour

Despite clear statements in the introductions to most guidelines about their ‘advisory’ nature, and the responsibility of the individual doctor to assess and treat each patient on their merits, this does not happen: intellectual laziness supervenes.  Guideline creators should be obligated to take full responsibility for the problems created by their ‘product’: how guidelines are presented, promulgated, and updated, and how they are abused and misused, and more.

Furthermore, those who now have an increasing influence on the delivery of health care, be they politicians, managers of health-care delivery organisations, insurance companies, etc., make simplistic assumptions and interpretations in relation to what guidelines recommend, and use them for their own ends.  That can mean not giving treatment-cost reimbursement to patients, or sacking a doctor, for not following ‘the guidelines’.  Even if that is infrequent, it does not alter the fact that many doctors who contact me justify not using a particular drug, like MAOIs, on the basis that ‘it is not recommended in the guidelines, and I will get in trouble’.

Add these factors together and you have a considerable potential, much of it already realised, for misapplication and patient harm.

We already have a generation of doctors who have not developed clinical experience and expertise in utilising non-standard treatments.  Therein lies a downside for progress in clinical practice, because many advances come from observations unrelated to clinical trials and purpose-directed research.

However good the intentions of those who initiated the notion of EBM and guidelines might have been, it is well to remember that, as the old saying goes, ‘The road to hell is paved with good intentions’.

The ‘gold standard’ of RCT guideline evidence, when assayed, is found to be fool’s gold

We might end by reminding ourselves of one or two simple observations, discussed above, which suggest that these expensive new treatments have achieved little.  The expenditure on drugs for psychiatric illnesses has increased exponentially, by close to 100 times over the course of my career.  The suicide rate has decreased little, and the number of psychiatric patients on disability benefit is much increased in most western countries.  Adverse drug reactions are now among the leading causes of hospitalisation, morbidity, and mortality.

Massive expenditure, minimal advance, much harm

Assigning greater weight to other possible research methodologies, and to experience and clinical judgement [2], as opposed to ‘RCTs’, is of great, but presently neglected, importance.

A final point to emphasise, for those not familiar with the scientific literature, is that most of the references below are by eminent authors — I have not dredged the depths of crank rants.  The papers have been published in the most prestigious journals, like Nature, BMJ, Lancet, JAMA, PLoS Medicine, etc.  We are not talking about authors on the fringe of medicine publishing in dubious and obscure journals.

References

1. Feinstein, A.R. and R.I. Horwitz, Problems in the “evidence” of “evidence-based medicine”. The American Journal of Medicine, 1997. 103(6): p. 529-535.

2. Sniderman, A.D., et al., The necessity for clinical reasoning in the era of evidence-based medicine. Mayo Clin Proc, 2013. 88(10): p. 1108-14.

3. Lundh, A., et al., Industry sponsorship and research outcome. Cochrane Database Syst Rev, 2017. 2: p. MR000033.

4. Angell, M., The truth about drug companies: How they deceive us and what to do about it. New York: Random House, 2005: p. 336.

5. Angell, M., Drug Companies & Doctors: A Story of Corruption. New York Review of Books, 2009. 56: p. http://www.metododibella.org/cms-web/upl/doc/Documenti-inseriti-dal-2-11-2007/Truth%20About%20The%20Drug%20Companies.pdf.

6. Smith, R., Travelling but never arriving: reflections of a retiring editor. British Medical Journal, 2004. 329(7460): p. 242-244.

7. Smith, R.L., Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. Public Library of Science: Medicine, 2005. 2: p. e138.

8. Horton, R., The Dawn of McScience. New York Review of Books, 2004. 51: p. 7-9.

9. Kassirer, J.P., On the take: How medicine’s complicity with big business can endanger your health. 2004: Oxford University Press.

10. Kassirer, J.P., Commercialism and medicine: an overview. Camb Q Healthc Ethics, 2007. 16(4): p. 377-86; discussion 439-42.

11. Horton, R., The best editors get fired. Lancet, 2017. 390.

12. Horton, R., Memorandum by Richard Horton (PI 108). The pharmaceutical industry and medical journals. UK Parliament: Select Committee on Health. Minutes of Evidence, 2004: p. https://publications.parliament.uk/pa/cm200405/cmselect/cmhealth/42/4121604.htm.

13. Kassirer, J.P., Joint ownership: the shared responsibilities of journal editors and publishers. Md Med, 2007. 8(1): p. 10-2.

14. Lexchin, J. and D.W. Light, Commercial influence and the content of medical journals. BMJ, 2006. 332(7555): p. 1444-7.

15. Ioannidis, J.P., The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q, 2016. 94(3): p. 485-514.

16. Woolf, S.H., et al., Clinical guidelines: potential benefits, limitations, and harms of clinical guidelines. BMJ, 1999. 318(7182): p. 527-30.

17. Sackett, D.L., et al., Evidence based medicine: what it is and what it isn’t. BMJ, 1996. 312(7023): p. 71-2.

18. Greenhalgh, T., et al., Six ‘biases’ against patients and carers in evidence-based medicine. BMC medicine, 2015. 13: p. 200.

19. Fava, G.A., Meta-analyses and conflict of interest. CNS Drugs, 2012. 26(2): p. 93-6.

20. Moncrieff, J. and P. Thomas, The pharmaceutical industry and disease mongering. Psychiatry should not accept so much commercial sponsorship. British Medical Journal, 2002. 325(7357): p. 216; author reply 216.

21. Moynihan, R., I. Heath, and D. Henry, Selling sickness: the pharmaceutical industry and disease mongering. British Medical Journal, 2002. 324(7342): p. 886-91.

22. Wolinsky, H., Disease mongering and drug marketing. Does the pharmaceutical industry manufacture diseases as well as drugs? EMBO Rep, 2005. 6(7): p. 612-4.

23. Gillman, P.K., Disease mongering: one of the hidden consequences. Public Library of Science: Medicine, 2006. 3(7): p. e316. DOI: 10.1371/journal.pmed.0030316.

24. Heath, I., Combating Disease Mongering: Daunting but Nonetheless Essential. Public Library of Science: Medicine, 2006. 3(4): p. e146.

25. Kumar, C.J., et al., Awareness and Attitudes about Disease Mongering among Medical and Pharmaceutical Students. Public Library of Science: Medicine, 2006. 3(4): p. e213.

26. Moynihan, R. and D. Henry, The Fight against Disease Mongering: Generating Knowledge for Action. Public Library of Science: Medicine, 2006. 3(4): p. e191 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=16597180.

27. Edgar, A., The dominance of big pharma: power. Med Health Care Philos, 2013. 16(2): p. 295-304.

28. Greenhalgh, T., Intuition and evidence–uneasy bedfellows? Br J Gen Pract, 2002. 52(478): p. 395-400.

29. Schunemann, H.J., et al., A guide to guidelines for professional societies and other developers of recommendations: introduction to integrating and coordinating efforts in COPD guideline development. An official ATS/ERS workshop report. Proc Am Thorac Soc, 2012. 9(5): p. 215-8.

30. Mercuri, M., et al., When guidelines don’t guide: the effect of patient context on management decisions based on clinical practice guidelines. Acad Med, 2015. 90(2): p. 191-6.

31. do Prado-Lima, P.A.S., The surprising blindness in modern psychiatry: do guidelines really guide? CNS Spectr, 2017. 22(4): p. 312-314.

32. Cabarkapa, S., et al., Prostate cancer screening with prostate-specific antigen: A guide to the guidelines. Prostate Int, 2016. 4(4): p. 125-129.

33. Waters, D.D. and S.M. Boekholdt, An Evidence-Based Guide to Cholesterol-Lowering Guidelines. Can J Cardiol, 2017. 33(3): p. 343-349.

34. Chuang, Y.C., et al., Effects of long-term antiepileptic drug monotherapy on vascular risk factors and atherosclerosis. Epilepsia, 2012. 53(1): p. 120-8.

35. Lenzer, J., Why we can’t trust clinical guidelines. BMJ, 2013. 346(58): p. f3830.

36. Goodman, N.W., Who will challenge evidence-based medicine? Journal of the Royal College of Physicians of London, 1999. 33(3): p. 249-51.

37. Goodman, N.W., Criticizing evidence-based medicine. Thyroid, 2000. 10(2): p. 157-160.

38. Ioannidis, J.P., Hijacked evidence-based medicine: stay the course and throw the pirates overboard. Journal of clinical epidemiology, 2016. 73: p. 82-86.

39. Fava, G.A., Evidence-based medicine was bound to fail: a report to Alvan Feinstein. Journal of Clinical Epidemiology, 2017. 84: p. 3-7.

40. Pedros, C., et al., Prevalence, risk factors and main features of adverse drug reactions leading to hospital admission. Eur J Clin Pharmacol, 2014. 70(3): p. 361-7.

41. Robb, G., et al., Medication-related patient harm in New Zealand hospitals. N Z Med J, 2017. 130(1460): p. 21-32.

42. Parameswaran Nair, N., et al., Adverse Drug Reaction-Related Hospitalizations in Elderly Australians: A Prospective Cross-Sectional Study in Two Tasmanian Hospitals. Drug Saf, 2017. 40(7): p. 597-606.

43. Boileau, I., et al., Influence of a low dose of amphetamine on vesicular monoamine transporter binding: a PET (+)[11C]DTBZ study in humans. Synapse, 2010. 64(6): p. 417-20.

44. Benard-Laribiere, A., et al., Incidence of hospital admissions due to adverse drug reactions in France: the EMIR study. Fundam Clin Pharmacol, 2015. 29(1): p. 106-11.

45. Cosgrove, L., et al., Conflict of Interest Policies and Industry Relationships of Guideline Development Group Members: A Cross-Sectional Study of Clinical Practice Guidelines for Depression. Account Res, 2017. 24(2): p. 99-115.

46. Bastian, H., Nondisclosure of Financial Interest in Clinical Practice Guideline Development: An Intractable Problem? PLoS Med, 2016. 13(5): p. e1002030.

47. Campsall, P., et al., Financial Relationships between Organizations That Produce Clinical Practice Guidelines and the Biomedical Industry: A Cross-Sectional Study. PLoS Med, 2016. 13(5): p. e1002029.

48. Blumsohn, A., Authorship, ghost-science, access to data, and control of the pharmaceutical scientific literature: who stands behind the word? AAAS Professional Ethics Report, 2006. 19: p. 1-4.

49. Leucht, S., W. Kissling, and J.M. Davis, Second-generation antipsychotics for schizophrenia: can we resolve the conflict? Psychol Med, 2009. 39(10): p. 1591-602.

50. Huston, P. and D. Moher, Redundancy, disaggregation, and the integrity of medical research. Lancet, 1996. 347(9007): p. 1024-6.

51. Spielmans, G.I. and I. Kirsch, Drug approval and drug effectiveness. Annu Rev Clin Psychol, 2014. 10: p. 741-66.

52. Worrall, J., Causality in medicine: getting back to the Hill top. Prev Med, 2011. 53(4-5): p. 235-8.

53. Parker, G., Evaluating treatments for the mood disorders: time for the evidence to get real. Australian and New Zealand Journal of Psychiatry, 2004. 38(6): p. 408-14.

54. Mulder, R., et al., The limitations of using randomised controlled trials as a basis for developing treatment guidelines. Evid Based Ment Health, 2017.

55. Bothwell, L.E., et al., Assessing the Gold Standard–Lessons from the History of RCTs. N Engl J Med, 2016. 374(22): p. 2175-81.

56. Naudet, F., et al., Has evidence-based medicine left quackery behind? Intern Emerg Med, 2015. 10(5): p. 631-4.

57. Naudet, F., et al., Understanding the Antidepressant Debate in the Treatment of Major Depressive Disorder. Therapie, 2015. 70(4): p. 321-7.

58. Shorter, E., A brief history of placebos and clinical trials in psychiatry. Can J Psychiatry, 2011. 56(4): p. 193-7.

59. Thompson, R.P., Causality, mathematical models and statistical association: dismantling evidence-based medicine. J Eval Clin Pract, 2010. 16(2): p. 267-75.

60. Hill, A.B., The Environment and Disease: Association or Causation? Proc R Soc Med, 1965. 58: p. 295-300.

61. Nutt, D.J. and P. Blier, Neuroscience-based Nomenclature (NbN) for Journal of Psychopharmacology. J Psychopharmacol, 2016. 30(5): p. 413-5.

62. Worley, L., Neuroscience-based nomenclature (NbN). Lancet Psychiatry, 2017. 4(4): p. 272-273.

63. Gorwood, P., et al., Editorial: Neuroscience-based Nomenclature (NbN) replaces the current label of psychotropic medications in European Psychiatry. Eur Psychiatry, 2017. 40: p. 123.

64. Blier, P., M.A. Oquendo, and D.J. Kupfer, Progress on the Neuroscience-Based Nomenclature (NbN) for Psychotropic Medications. Neuropsychopharmacology, 2017. 42(10): p. 1927-1928.

65. Moncrieff, J., Antidepressants: misnamed and misrepresented. World Psychiatry, 2015. 14(3): p. 302-3.

66. Johnstone, E.C., et al., Neurotic illness and its response to anxiolytic and antidepressant treatment. Psychol Med, 1980. 10(2): p. 321-8.

67. Le Noury, J., et al., Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ, 2015. 351: p. h4320.

68. Sharma, T., et al., Suicidality and aggression during antidepressant treatment: systematic review and meta-analyses based on clinical study reports. BMJ, 2016. 352: p. i65.

69. Moncrieff, J., Misrepresenting harms in antidepressant trials. BMJ, 2016. 352: p. i217.

70. Dubicka, B., et al., Paper on suicidality and aggression during antidepressant treatment was flawed and the press release was misleading. BMJ, 2016. 352: p. i911.

71. Gotzsche, P.C., Author’s reply to Dubicka and colleagues and Stone. BMJ, 2016. 352: p. i915.

72. Healy, D., Clinical trials and legal jeopardy. Bulletin of medical ethics, 1999(153): p. 13-18.

73. Jureidini, J.N., J.D. Amsterdam, and L.B. McHenry, The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance. Int J Risk Saf Med, 2016. 28(1): p. 33-43.

74. Kirsch, I., et al., Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med, 2008. 5(2): p. e45.

75. Kirsch, I. and T.J. Moore, The Emperor’s New Drugs: An Analysis of Antidepressant Medication Data Submitted to the U.S. Food and Drug Administration. Prevention & Treatment, 2002. 5.

76. Locher, C., et al., Efficacy and Safety of Selective Serotonin Reuptake Inhibitors, Serotonin-Norepinephrine Reuptake Inhibitors, and Placebo for Common Psychiatric Disorders Among Children and Adolescents: A Systematic Review and Meta-analysis. JAMA psychiatry, 2017.

77. Allison, D.B., et al., Reproducibility: A tragedy of errors. Nature, 2016. 530(7588): p. 27-9.

78. Fountoulakis, K.N., R.S. McIntyre, and A.F. Carvalho, From Randomized Controlled Trials of Antidepressant Drugs to the Meta-Analytic Synthesis of Evidence: Methodological Aspects Lead to Discrepant Findings. Curr Neuropharmacol, 2015. 13(5): p. 605-15.

79. Fountoulakis, K.N., M.T. Samara, and M. Siamouli, Burning issues in the meta-analysis of pharmaceutical trials for depression. J Psychopharmacol, 2014. 28(2): p. 106-17.

80. Tendal, B., et al., Multiplicity of data in trial reports and the reliability of meta-analyses: empirical study. BMJ, 2011. 343: p. d4829.

81. Siontis, K.C., E. Evangelou, and J.P. Ioannidis, Magnitude of effects in clinical trials published in high-impact general medical journals. Int J Epidemiol, 2011. 40(5): p. 1280-91.

82. Mansournia, M.A. and D.G. Altman, Invited commentary: methodological issues in the design and analysis of randomised trials. Br J Sports Med, 2017.

83. Altman, D. and J.M. Bland, Confidence intervals illuminate absence of evidence. British Medical Journal, 2004. 328(7446): p. 1016-7.

84. Greenland, S., et al., Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. Eur J Epidemiol, 2016. 31(4): p. 337-50.

85. Wasserstein, R.L. and N.A. Lazar, The ASA’s Statement on p-Values: Context, Process, and Purpose. The American Statistician, 2016. 70(2): p. 129-133.

86. Feinstein, A.R., Meta-analysis: statistical alchemy for the 21st century. J Clin Epidemiol, 1995. 48(1): p. 71-9.

87. Fuller, J., Rhetoric and argumentation: how clinical practice guidelines think. J Eval Clin Pract, 2013. 19(3): p. 433-41.

88. Fuller, J., Rationality and the generalization of randomized controlled trial evidence. J Eval Clin Pract, 2013. 19(4): p. 644-7.

89. White, J., Fraud fighter: ‘Faked research is endemic in China’ New Scientist, 2012(2891): p. http://www.newscientist.com/article/mg21628910.300-fraud-fighter-faked-research-is-endemic-in-china.html.

90. Gotzsche, P.C., et al., Ghost Authorship in Industry-Initiated Randomised Trials. PLoS Med, 2007. 4(1): p. e19.

91. Sismondo, S., Ghost management: how much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 2007. 4(9): p. e286.

92. Wislar, J.S., et al., Honorary and ghost authorship in high impact biomedical journals: a cross sectional survey. BMJ, 2011. 343: p. d6128.

93. Lexchin, J., Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications. Sci Eng Ethics, 2011.

94. Ross, J.S., et al., Guest authorship and ghostwriting in publications related to rofecoxib: a case study of industry documents from rofecoxib litigation. JAMA, 2008. 299(15): p. 1800-12.

95. Barbour, V., How ghost-writing threatens the credibility of medical knowledge and medical journals. Haematologica, 2010. 95(1): p. 1-2.

96. Moynihan, R., Key opinion leaders: independent experts or drug representatives in disguise? BMJ, 2008. 336(7658): p. 1402-3.

97. Dechartres, A., et al., Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ, 2017. 357: p. j2490.

98. Ioannidis, J., Lies, Damned Lies, and Medical Science. Atlantic, 2010. November 17th.

99. Dechartres, A., et al., Association between analytic strategy and estimates of treatment outcomes in meta-analyses. JAMA, 2014. 312(6): p. 623-630.

100. Steel, N., et al., A review of clinical practice guidelines found that they were often based on evidence of uncertain relevance to primary care patients. J Clin Epidemiol, 2014. 67(11): p. 1251-7.

101. Every-Palmer, S. and J. Howick, How evidence-based medicine is failing due to biased trials and selective publication. Journal of Evaluation in Clinical Practice, 2014. 20(6): p. 908-914.

102. Popper, K., The Demarcation between Science and Metaphysics. Conjectures and Refutations: The Growth of Scientific Knowledge (1963), 1963. Ch 11.

103. Joiner, K. and R. Lusch, Evolving to a new service-dominant logic for health care. Innovation and Entrepreneurship in Health, 2016.

104. Hoffmann, T.C., V.M. Montori, and C. Del Mar, The connection between evidence-based medicine and shared decision making. JAMA, 2014. 312(13): p. 1295-1296.

105. Tiihonen, J., et al., Pharmacological treatments and risk of readmission to hospital for unipolar depression in Finland: a nationwide cohort study. Lancet Psychiatry, 2017. 4(7): p. 547-553.

106. Jones, H., J. Geddes, and A. Cipriani, Lithium and Suicide Prevention, in The Science and Practice of Lithium Therapy, G. Malhi, J.M. Masson, and F. Bellivier, Editors. 2017, Springer. p. 223-240.

107. Wingard, L., et al., Reducing the rehospitalization risk after a manic episode: A population based cohort study of lithium, valproate, olanzapine, quetiapine and aripiprazole in monotherapy and combinations. J Affect Disord, 2017. 217: p. 16-23.

108. Baldessarini, R.J. and L. Tondo, International suicide rates versus adequate treatments. The British Journal of Psychiatry, 2017. 210(4): p. 298-299.

109. Shah, A., et al., Suicide rates in five-year age-bands after the age of 60 years: the international landscape. Aging Ment Health, 2016. 20(2): p. 131-8.

110. Curtin, S.C., M. Warner, and H. Hedegaard, Increase in Suicide in the United States, 1999-2014. NCHS Data Brief, 2016(241): p. 1-8.

111. Viola, S. and J. Moncrieff, Claims for sickness and disability benefits owing to mental disorders in the UK: trends from 1995 to 2014. BJPsych Open, 2016. 2(1): p. 18-24.

112. Tiihonen, J., et al., 11-year follow-up of mortality in patients with schizophrenia: a population-based cohort study (FIN11 study). Lancet, 2009. 374(9690): p. 620-7.

113. Brown, J., et al., Mental health as a reason for claiming incapacity benefit—a comparison of national and local trends. Journal of public health, 2008. 31(1): p. 74-80.

114. Whitaker, R. and L. Cosgrove, Psychiatry under the influence. 2015: Macmillan.

115. Gotzsche, P.C., A.H. Young, and J. Crace, Does long term use of psychiatric drugs cause more harm than good? BMJ, 2015. 350: p. h2435.

Consider Donating to PsychoTropical

PsychoTropical is funded solely by generous donations, which have enabled extensive development and improvement of all its associated activities. Many people who follow the advice on the website will save greatly on doctors, treatment costs, hospitalisation, etc., which in some cases will amount to many thousands of dollars, even tens of thousands, never mind the reduction in suffering and in the damage to family, work, social, and leisure life. A donation of $100, or $500, is small compared with those savings. Some less-advantaged people feel that the little they can give is too small to make a difference, but five dollars a month helps: do not think that a small donation is not useful.

– Dr Ken Gillman
