Seriously Misleading Network Meta-analysis in Lancet of Acceptability of Depression Pills


In 2018, Andrea Cipriani et al. published a network meta-analysis of trials of depression pills in The Lancet. The authors included both head-to-head comparisons of drugs and comparisons of drugs with placebo. They ranked the drugs according to their effect and acceptability.

The acceptability was measured as drop-out for any reason. This is a highly relevant outcome measure in placebo-controlled trials of depression pills. When patients decide whether it is worthwhile to continue in the trial, they make a judgement about whether the benefits they perceive exceed the harms.

Cipriani et al. showed the odds ratios for drop-out for each of the 21 drugs they included in their review but did not provide a summary estimate. Based on their data, I calculated a summary odds ratio of 0.98 (95% confidence interval 0.95 to 1.01, P = 0.22, fixed effect model). Thus, on average, they did not find a significant difference in drop-out between active drug and placebo.
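For readers curious about the arithmetic behind such a summary estimate: a fixed-effect pooled odds ratio is typically obtained by inverse-variance weighting of the per-drug log odds ratios, with each standard error recovered from the reported 95% confidence interval. A minimal sketch of that calculation (the numbers below are illustrative only, not Cipriani et al.'s actual data):

```python
import math

def fixed_effect_pool(odds_ratios, ci_uppers, ci_lowers):
    """Inverse-variance fixed-effect pooling of odds ratios.

    Each standard error is recovered from the 95% CI:
    SE = (ln(upper) - ln(lower)) / (2 * 1.96).
    Returns the pooled OR and its 95% confidence interval.
    """
    log_ors = [math.log(o) for o in odds_ratios]
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96)
           for u, l in zip(ci_uppers, ci_lowers)]
    weights = [1.0 / se**2 for se in ses]          # inverse-variance weights
    pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Illustrative per-drug odds ratios with their 95% CI bounds -- made up,
# not taken from the network meta-analysis discussed in the text.
or_, lo, hi = fixed_effect_pool([0.95, 1.02, 1.01],
                                [1.10, 1.15, 1.20],
                                [0.82, 0.90, 0.85])
```

The same inverse-variance machinery applies whether the effect measure is an odds ratio or a risk ratio, since both are pooled on the log scale.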

However, when my research group analysed the placebo-controlled trials, the result was markedly different. Significantly more patients dropped out on the pills than on placebo, risk ratio 1.08 (1.03 to 1.13). After exclusion of three particularly flawed trials, the risk ratio for the remaining 26 trials was 1.12 (1.07 to 1.18). Thus, we found that 12% more patients dropped out on a depression pill than on placebo (P < 0.00001).

We included trials of duloxetine, fluoxetine, paroxetine, sertraline, and venlafaxine. When I restricted Cipriani et al.’s meta-analysis to these five drugs, the odds ratio was similar to the one for all 21 drugs, 0.97 (0.92 to 1.01, P = 0.15).

There are two main reasons why Cipriani et al.’s meta-analysis is seriously misleading. Firstly, they mainly based their data on published trial reports, whereas we exclusively used the voluminous clinical study reports that drug companies had submitted to drug regulators. We discovered that some patients lost to follow-up early in the trials were not accounted for and that some patients with missing efficacy or harms data were not included in the analyses. We included these patients.

Secondly, as I and others have demonstrated, Cipriani et al.’s meta-analysis is essentially a garbage-in, garbage-out exercise. I called my criticism “Rewarding the Companies that Cheated the Most in Antidepressant Trials” and showed, based on other data, that some of the results could simply not be true.

Other researchers from my centre demonstrated that Cipriani et al. had ignored or underestimated the biases in the trials. Only one of the 522 trials they included had a low risk of bias. Furthermore, the outcome data they reported differed from those in the clinical reports in two-thirds of the 19 trials they examined.

Other reviews based mainly on published data have also reported misleading results. For example, a review of 40 trials reported a risk ratio of 0.99 (0.88 to 1.11) for drop-out when paroxetine was compared with placebo.

It is a futile exercise to rank depression pills according to how good they are based on flawed trial reports and—most importantly—when the patients prefer to be treated with a placebo.

All depression pills should be avoided. They have no relevant effect on depression; they double the risk of suicide, including in adults; and the patients prefer psychotherapy, which halves the risk of suicide and has relevant effects on depression.


Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.


  1. Another thing about antidepressants is that they can unmask psychoses in cases where the shrink didn’t ask about dysperceptions (if these exist in quantity, a misdiagnosis is likely to be spectacularly revealed once antidepressant medication is started).

  2. You would think Dr. Gøtzsche’s excellent meta-analysis of the ineffectiveness of antidepressants in treating depression would put the nail in the coffin of the drug treatment of depression; however, history tells us there will be no turnaround in the drug treatment of depression. When research on drug treatment of depression looked bad, psychiatry/pharma invented a disease called Treatment-Resistant Depression (TRD) and added an add-on antipsychotic drug when antidepressants fail, which only increases the drug revenues for this “disease”. I knew it was just a matter of time, but have you seen the latest research on Treatment-Resistant Schizophrenia (TRS)? Since TRD worked for depression, why not TRS for schizophrenia? I predict you will soon see hallucinogenic antipsychotic drugs for both TRD and TRS.

    In my 2005 book (“Antidepressant: Science, Magic or Marketing?”) I accused psychiatry of having substantially lower efficacy standards than many medical treatments, but after turning to review medical research I found I had been wrong. Psychiatry still wins the first-place award for bogus treatments, but after reviewing “medical” treatment I found it runs a close second to psychiatry/pharma.

  3. Thank you for your honest work, Dr. Gøtzsche.
    If depression pills don’t work and are sometimes very harmful we’re left with the options we had to begin with – talking, yoga, exercise, meditation, sport, religion, work, activity, and many forms of psychotherapy.

    “All depression pills should be avoided.”

    Nota bene. Also, all clinical trials should be disregarded because they measure small changes in symptoms short-term and are not designed to detect a cure (full recovery from an illness). Measurement of short-term small changes in symptoms is hopelessly prone to manipulation, and the results cannot be trusted. As Robert Whitaker has repeated many times, clinical trials show a “small improvement” of depression after 1 or 2 months of an “antidepressant” drug, but few people notice that 80% of unmedicated patients fully recover after 1 year, whereas only 20% of medicated patients do. We live in a world based on mass deception where almost everything is upside down. Currently, the best type of evidence (case reports of full recovery) is considered the “weakest proof”, whereas the worst type of evidence (double-blind randomized controlled trials) is regarded as the “strongest proof”. In actuality, if you need statistical analysis to prove that a medical treatment is useful, this means that the treatment is either ineffective or counterproductive.

    If clinicians and drug companies start evaluating medical treatments using the percentage of people cured after 1 or 2 years as compared to untreated patients, then 99% of known medical interventions will most likely turn out to be useless at best and counterproductive at worst. Drug companies will go extinct, and most surgeons will lose their jobs, except for trauma surgeons. Clinicians who practice psychotherapy and lifestyle interventions are expected to survive.

    Psychotherapy alone is unlikely to cure depression, and trying to prove its small benefit (I believe it exists) would be a futile exercise. For example, a patient who eats mostly Twinkies and tries psychotherapy will probably experience some relief of depression, but it would make sense to simultaneously apply many lifestyle changes AND psychotherapy, if we want to cure depression. If testing shows that this approach is the same as no treatment (80% of patients are cured in the treatment group and 80% in a no-treatment group after 1 year), then it will be reasonable to do nothing against depression or think of something else.

    Final thought: learning the rules of logic and typical logical fallacies should be an integral part of psychotherapy or perhaps can even be used instead of psychotherapy.