Researchers Challenge Interpretation of Antidepressant Meta-analysis

Researchers question the overstated results of a large antidepressant meta-analysis and point to cultural pressures to turn to these drugs for a quick fix.


Researchers Lisa Cosgrove, from the University of Massachusetts Boston, and Allen Shaughnessy, from Tufts University, recently released a commentary on the much-discussed antidepressant (ADM) meta-analysis from Cipriani and colleagues. The meta-analysis was the largest to date, analyzing over 116,000 participants and 21 antidepressants, and found that all ADMs studied were more efficacious than placebo. The researchers point out that the limitations of the study, including modest effect sizes, novelty effects, and often low certainty of the evidence, were not reflected in the coverage from major publications, many of which promoted the study as definitive proof of the effectiveness of ADM for depression.

“Many major publications overstated the results of the study, with headlines such as ‘Antidepressants do work, and many more people should take them: Major international study’ and ‘Millions MORE of us should be taking antidepressants: Largest-ever study claims the pills DO work and GPs should be dishing them out,’” write Cosgrove and Shaughnessy.

“In turn, many researchers and clinicians challenged not only the overhyped media reporting but also the research on which the meta-analysis was based. These critiques cited the short-term nature of the trials, the problematic use of nonpatient-centered outcome measures, the fact that statistical significance does not necessarily translate into clinical significance, and that the findings of efficacy were, in general, limited to people with more severe depression.”

Photo Credit: Wikimedia Commons

The authors proceeded to discuss the nuances they believed to be lost in the hype surrounding the article’s publication. They first argued that it is unhelpful to think of depression as homogeneous, with a singular biological underpinning. This, they posit, leads people to think of ADMs as the only solution, when in fact lifestyle changes, psychotherapy, and pharmacotherapy, separately or in combination, can be helpful.

They argue that there is a gap between efficacy, often shown under idealized conditions in clinical trials, and effectiveness in real-world settings. Results from short, tightly controlled clinical trials often do not translate into long-term outcomes, which makes effectiveness a tricky metric to capture. For example, other ADM meta-analyses have reported that ADMs are on average efficacious for individuals with severe depression but not mild depression. Because meta-analyses typically calculate group means, these results do not eliminate the possibility that some severely depressed individuals might not respond to ADMs, while some mildly depressed individuals might.

The authors suggest that honesty from clinicians about the uncertainty behind these findings could enhance the collaborative care process, which could improve patient outcomes.

“Especially in light of the fact that effect sizes in clinical trials and in this meta-analysis were small to modest, one approach is to frame ADM as a treatment that may, in some percentage of patients, reduce but not necessarily eliminate symptoms,” they write. “Establishing an expectation of partial relief is more likely to result in patients’ perception of successful treatment than implying an expectation of quick, complete, and/or long-lasting cure of depressive symptoms.”

They argue that transparency about the number of ADMs that may need to be tried before finding the ‘right’ fit would be beneficial for clients. Given that the field is still exploring the precise mechanisms underlying depression, clients have a right to understand that the initial drug choice may need to be changed or combined with other drugs. Cosgrove and her team further argue that a new model of psychopharmacology, starting with non-drug therapy, might be the way forward.

American society, the authors conclude, is one that is obsessed with the quick fix. This makes ADMs especially appealing and may create pressure for healthcare professionals to prescribe them. While it poses a challenge to wait and reach a level of shared understanding between client and clinician, the authors believe that an up-front investment in such a relationship will produce better outcomes and greater satisfaction for both patient and physician.



Cosgrove, L., Erlich, D., & Shaughnessy, A. (2019). No magic pill: A prescription for enhanced shared decision-making for depression treatment. The Journal of the American Board of Family Medicine, 32(1). doi:10.3122/jabfm.2019.01.180182 (Link)


  1. The authors fail to understand that the “much-discussed antidepressant meta-analysis from Cipriani” was a deliberate campaign of misinformation, orchestrated by the Royal College of Psychiatrists and spread by the clinicians themselves.

    “Many major publications overstated the results of the study, with headlines such as ‘Antidepressants do work, and many more people should take them”

    But this was NOT “media hype” – science journalists accurately reported the words of the experts. They sourced their copy from the Science Media Centre, the trusted UK charity set up to provide “accurate and evidence-based” information on science to the media. Psychiatry makes full use of the SMC to promote whatever falsehoods it likes to the general public – Professor Sir Simon Wessely is on the board of Trustees. A year ago it was the antidepressant meta-analysis; now they are pushing ECT…

    “The evidence for ECT being beneficial is good and withstands scrutiny. The case against seems based on a very odd reading of the evidence base, or a very personal and idiosyncratic point of view.” Prof Allan Young.

    Here’s the rest…


  2. To be honest, Cipriani did actually, as he said, “put to bed” the controversy over antidepressants, though not as he would like. They are useless. The effect size of 0.30 was the same as in Irving Kirsch’s controversial earlier meta-analysis, on the basis of which most people, including NICE, concluded ADs weren’t worth it. The Number Needed to Treat was typically around 15 for the average drug.

    But it’s worse. The average effect size was skewed by certain high-side-effect, second-line sedating ADs that you really wouldn’t want to take, including one, amitriptyline, that the BNF says not to go near, it’s so toxic. The two most popular antidepressants in the UK, fluoxetine and citalopram, had effect sizes of 0.23–0.24. Hopeless. You can see how bad that is by going here:

    And yet still most psychiatrists defend them. Professor David Taylor, lead author of the Maudsley Guidelines, said they were “much more effective than placebo”. I’m putting that down to a colossal gaffe, but still, I would steer clear of those guidelines with that level of delusion.

    These guys will not stop until we the people say “No, that’s cobblers”. If you do, expect a fight as all manner of BS gets launched at you – but they will rarely provide evidence. If, like us, you have STAR*D cited at you, you can easily debunk it by checking this website, which has a balanced analysis.

    With 1 in 6 people in England on these drugs, and the withdrawal problem – still controversial – being openly discussed, there is an almighty mess on their hands, since you have to be really careful coming off these things and psychiatrists think you can do it much too quickly.

    Lack of efficacy, lack of information and adequate consent, lack of data on long-term use and withdrawal, and lack of contrition – it’s a shambles.


    • @ConcernedCarer: “Professor David Taylor, lead author of the Maudsley Guidelines, said “much more effective than placebo”. I’m putting that down to a colossal gaffe, but still I would steer clear of those guidelines with that level of delusion.”

      Not a gaffe, or delusion – it’s bare-faced, calculated deception. David Taylor is a pharma shill, along with his eminent professor chums Allan Young, David Baldwin, David Nutt, Carmine Pariante et al. Time to start ‘Shill Shaming’…?


  3. If the efficacy of a drug is statistically significant but not clinically significant, that means we are becoming guinea pigs. They must be concluding this from post-marketing surveillance. So the statistician decides whether we have to swallow this bitter pill or not. And this kind of research in psychiatry has other problems too. You have to rely on what the patient says: how he describes his symptom resolution, whether it improved or not, and to what extent. You cannot scan his brain to detect any improvement in his depression after having introduced the drug. In short, the response cannot be quantified. I have also heard that patients are enticed with money to give a positive response to their drugs so that they can be marketed. And they must surely have some subtle communication network to get this done.
