Antidepressant-linked Suicide Data Doctored In Seminal Study


An influential 2007 US National Institute of Mental Health-led study included a statistical manipulation that disguised the fact that youth taking antidepressants were actually over four times as likely to experience suicidal events as those taking placebo, according to a new analysis published in the International Journal of Risk & Safety in Medicine, several years after the revelations were first publicly discussed.

The seminal “Treatment for Adolescents With Depression Study (TADS),” published in the Archives of General Psychiatry in 2007 and involving researchers from NIMH and many universities, compared the effects on depression in teens of the antidepressant fluoxetine (FLX), cognitive behavioral therapy (CBT), placebo (PBO), and combined CBT and fluoxetine. A number of other reports on the TADS study were also published, including one in 2009 in the Journal of Clinical Psychiatry entitled “Suicidal Events in the Treatment for Adolescents with Depression Study (TADS).” Suicidal events were defined as “discrete episodes of suicidal ideation, suicidal attempts, or preparatory acts toward an imminent attempt,” and in the study over half of these events led to hospitalization of the youth. The NIMH-led study authors stated, though, that there were no significant differences in the rates of these suicidal events for youth taking either the antidepressant fluoxetine (Prozac) or placebo, and that, “Most suicidal events occurred in the context of persistent depression and insufficient improvement, without evidence of medication-induced behavioral activation as a precursor.”

However, a brief footnote to a table in the JCP study alerted child and adolescent psychiatrist Göran Högberg of Sweden’s Astrid Lindgren Children’s Hospital to a misleading way in which the data had been handled. In a table titled “Suicidal Event Categories,” a footnote read, “Treatment at time of event was different from the randomized one for 3 CBT and 9 PBO patients, who had started antidepressant medication due to poor response to assigned treatment.”

Essentially, after 12 weeks of the 36-week TADS study, some of the youth who had been taking placebo started taking the antidepressant instead. And it was only after these youth started taking the drug that they experienced suicidal events. But the NIMH researchers had not included that fact in any of their analyses.

So Högberg and co-authors David Antonuccio and David Healy conducted a new analysis of the data based on this revelation. The NIMH researchers had reported that 16 youth on fluoxetine had experienced suicidal events compared to 12 youth taking placebo, a non-significant difference. But Högberg’s team found that, of those 12 in the placebo group, only 3 suicidal events actually took place while the youth were taking placebo, while the other 9 took place after they’d switched to fluoxetine. Consequently, they determined, the participants in the trial were in fact over four times as likely to experience suicidal events while taking the drug.

“The analysis of the data showed that there was a statistically significant difference in proportion of youths with suicidal events between the PBO condition (2.7%) and the FLX treatment (11%),” concluded Högberg’s team.

They added, “What we also did note was that the suicidal events in the study were evenly distributed over the entire time period; thus highlighting that the risk for suicidal events in SSRI-treated adolescents appears to be increased up to eight months after the start of medication.”

“None of the seven abstracts from TADS publications mentioned the fact that there were four times more suicidal events with fluoxetine than with placebo during the randomized controlled trial, and that this difference was statistically significant,” stated the researchers.
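The arithmetic behind the “over four times” figure can be sketched from the counts reported above: 3 suicidal events while actually on placebo, and 16 + 9 = 25 while actually on fluoxetine. The denominators used below (111 placebo exposures, 227 fluoxetine exposures) are back-calculated here from the reported proportions (2.7% and 11%) and are only approximate; this is an illustrative sketch, not the authors’ actual computation.

```python
from math import comb

# Event counts from the Hogberg et al. re-analysis as described above:
# 3 suicidal events while actually on placebo, 16 + 9 = 25 while
# actually on fluoxetine. Denominators are back-calculated from the
# reported proportions (2.7% and 11%) and are approximate.
flx_events, flx_n = 25, 227
pbo_events, pbo_n = 3, 111

risk_flx = flx_events / flx_n   # roughly 11%
risk_pbo = pbo_events / pbo_n   # roughly 2.7%
relative_risk = risk_flx / risk_pbo
print(f"Relative risk of a suicidal event on fluoxetine: {relative_risk:.2f}")

# One-sided Fisher exact test (hypergeometric tail): the probability,
# if treatment made no difference, that 25 or more of the 28 total
# events would fall in the fluoxetine-exposed group.
n_total = flx_n + pbo_n
events_total = flx_events + pbo_events
p_value = sum(
    comb(flx_n, k) * comb(pbo_n, events_total - k)
    for k in range(flx_events, events_total + 1)
) / comb(n_total, events_total)
print(f"One-sided Fisher exact p-value: {p_value:.4f}")
```

With these assumed inputs the relative risk comes out just above 4, consistent with the “over four times” framing, and the exact test rejects the null hypothesis at the 5% level; the precise p-value depends on the assumed denominators.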

These revelations from Högberg’s analysis were discussed in detail, several years before the new study was published, by MIA Blogger David Healy and in a post on Mad In America by MIA Publisher Robert Whitaker.

“Among the bizarre misrepresentations in Clinical Trials of psychiatric drugs during the Age of the Decepticons, this one may take the grand prize,” wrote 1 Boring Old Man recently about the NIMH study. Commenting on the new analysis appearing in the International Journal of Risk & Safety in Medicine, he wrote, “It’s years late, and it’s softly presented, but it’s still something of a breakthrough. It’s what we’re hoping for from Data Transparency, but they didn’t have to petition for the data or jump through any hoops as it was just hiding there in plain sight.”
Högberg, Göran, David O. Antonuccio, and David Healy. “Suicidal risk from TADS study was higher than it first appeared.” International Journal of Risk & Safety in Medicine 27 (2015): 85–91. doi:10.3233/JRS-150645. (Abstract)

the Age of the Decepticons… (1 Boring Old Man, June 18, 2015)

Vitiello, Benedetto, Susan Silva, Paul Rohde, Christopher Kratochvil, Betsy Kennard, Mark Reinecke, Taryn Mayes, Kelly Posner, Diane E. May, and John S. March. “Suicidal Events in the Treatment for Adolescents with Depression Study (TADS).” The Journal of Clinical Psychiatry 70, no. 5 (May 2009): 741–47. (Full text)

“The Treatment for Adolescents with Depression Study (tads): Long-Term Effectiveness and Safety Outcomes.” Archives of General Psychiatry 64, no. 10 (October 1, 2007): 1132–43. doi:10.1001/archpsyc.64.10.1132. (Full text)


  1. Both the original analysis and Healy’s are pretty much bullshit, due to the loss of randomization that resulted from treatment reassignments.

    During the first 12 weeks, 9 participants from the PBO group were moved into the fluoxetine group and 2 from the cognitive therapy group were moved into the combination therapy group because they were doing especially poorly. This leaves behind only those who are doing relatively well in the non-fluoxetine treatment groups, while adding those with especially severe depression to the drug treatment groups.

    At the end of 12 weeks, only those who were regarded as “responders or partial responders” to their assigned treatment were kept in the trial, while those with a poor response were removed and “treated as medically indicated”, but their data continued to be collected. What a mess!

    Given the very high number of non-responders moved from the placebo arm to the fluoxetine group, nothing meaningful can be said about either group. The comparison between the first 12 weeks of the combination therapy group and the first 12 weeks of the cognitive therapy only group is somewhat better due to fewer subjects crossing over. In this comparison we find essentially identical suicidal behavior rates between drug-treated and non-drug-treated persons.

    Rigorous statistical analysis requires thinking about the issues and not just blindly applying formulas. Neither Healy nor the original authors appear to have done the former.

    Report comment

    • Not sure what else could be done with the data after the hash the investigators made of it. I think the big news here is that the investigative team appeared to be both incompetent in terms of understanding and adhering to the study protocols and intentionally deceptive in their findings, and, we must assume, significantly biased regarding the results. It illustrates that many studies that purport to have some important finding are deeply flawed, and that the researchers are often either “on the take” or bring personal or professional biases to the table. It seems like we may need researchers from other fields to take over for psychiatric researchers, because the bulk of them appear to be either incompetent or corrupt or both!

      —- Steve

      Report comment

      • I agree it’s generally a mess, but I am not sure that the attempt to torture this data into confessing a negative outcome is any less blameworthy than the attempt to torture it into confessing a positive outcome.

        What I think is important is to look at and evaluate data with an open mind to see what it is telling us, or even to see if it is telling us anything at all. There are too many people out there who simply see it as a polemic tool to twist in whatever way is necessary to argue their pre-existing positions.

        Report comment

          • It would be nice to have someone else who is versed in reading and analysing scientific papers comment on this, to elucidate some more on how trials are designed, implemented and interpreted.

            Every time I read an analysis of scientific papers I learn more, and that is important to me.

            Report comment

        • Not to mention that people should have been blinded as to the form of treatment, and I’m not sure how you can possibly do that with switching protocols, let alone psychotherapy. Drugs do produce obvious side effects in almost anyone who takes them, so good luck blinding anyone.

          Report comment

  2. John Smith

    I am afraid I disagree with your relativist notion that making this kind of data confess a positive outcome is morally the same as making it confess a negative one.

    Many people taking these medications are not left to make their own choices in the matter. All medication is a risky business, and producing false positive results opens many people to a forced drugging that is in and of itself risky.

    In contrast, to “make it confess a negative outcome” leaves the medical ethic to do no harm intact. Obviously this ancient medical precept was directed at the physician’s tendency to do something (for cash) no matter what and to thereby harm the patient for his own gain.

    Every drug is risky and causes harm. This is not a theoretical argument. Manipulating evidence to create positive outcomes will bring harm to others.

    Report comment

    • Well, I’m going to stick with my position that a lie is a lie, and that lies that support one’s pre-existing point of view are no better than those that support positions one disagrees with.

      Ultimately, the Truth is not served by lies, ever. Deaths caused by overstating the risks of drugs and understating their benefit do not offset those caused by pharma company overpromotion, they just add to them.

      Report comment

      • I don’t think that you can overstate the risks of the drugs. The real scientific studies show that there is real, significant harm in taking them, and I don’t want anyone to call these harmful effects “side effects,” since that is just playing a word game. Anything that a drug causes is an effect of the drug; there are no such things as side effects. Effects are effects.

        Now, if people want to swallow these drugs once they’ve been given fully informed consent then so be it, let them swallow them to their heart’s content. But I’ve never once seen fully informed consent being applied in the state hospital where I’ve worked for a number of years now.

        Report comment

        • And by the way, I was never given fully informed consent while I was a patient in this same state hospital so I know of what I speak first hand.

          I was put on an antidepressant which causes heart attacks and causes significant problems for people with high blood pressure, which I have. The dear psychiatrist who prescribed them to me in the hospital should have known the problems that the damned things would probably cause for me but he couldn’t be bothered. Subsequently, I had a heart attack.

          The people prescribing these drugs don’t even know what the drugs actually do, especially in combination with one another. The drug cocktails are extremely dangerous, but people are forced on four and five major neuroleptics and an antidepressant or two and a benzo, all for their own good of course, and when you go to the internet and use one of those programs that shows you the adverse interactions that result from these drug combinations, it just blows your mind. And yet the psychiatrists are allowed to force people to take these cocktails at the expense of the patients’ lives, not the lives of the psychiatrists. Give me a freaking break!

          Report comment

          • Hi Stephen,
            I identify with your experience.
            I stopped Quetiapine 25 mg/day (prescribed, but probably more like 10 mg/day consumed) on account of worrying heart rhythm effects. My father had had a pacemaker and cousins have pacemakers. I also used to wake in the morning and my chest and ribcage would be pink and the rest of me white.
            I was asked lots of “mental health” questions by the doctor, but none about heart rhythm.
            At 25 mg/day this drug is completely non-therapeutic for “mental illness” but still toxic. Stopping affected my sleeping (a bit), but improved my ‘nerves’.

            I didn’t bother cancelling the prescription or telling the doctor – this might create another problem.

            Report comment

      • I actually agree that it is less harmful to posit a negative outcome with unclear data, because the negative assumption is always the appropriate scientific assumption until proven otherwise (the “null hypothesis”). It is the job of the person claiming benefit to prove it occurs, and if they can’t, it should be assumed that there is not one. In the case of side effects, any indication that they may exist should be considered significant, regardless of the clarity of the data, because we have an obligation to protect against negative outcomes. It is the researchers’ job to prove safety, and in the absence of that proof, we should assume lack of safety for the protection of the patients.

        The research has enough to suggest that an increase in suicide rate MAY have occurred as a result of the drugs, even if it’s not clear that it was directly caused by the drugs. This should be a very big red flag, especially as it comports with other data raising the same issue. Trying to explain away such a result by data manipulation seems to me far more egregious than overstating the concern, and I do believe the data does suggest a potential significant danger, even if it does not prove the danger comes from the drugs. Unfortunately, psychiatric researchers (including those doing this study) generally take the opposite approach – their drug is assumed to be safe until proven otherwise. This is a very anti-scientific approach.

      Even if the critics are taking some liberties with the data, it is very important that this kind of critique be raised. It should probably be framed more cautiously, but cautiously framed concerns don’t seem to get a lot of attention in the psychiatric world these days.

      —- Steve

      Report comment

    • I can agree that re-evaluating flawed data is not the way forward, if we had a choice.
      But could you be so kind as to tell me how we should produce unbiased data?
      Who should sponsor such a trial?

      Even if your standpoint might be “better not to debunk their lies with their lies,” what other options are there?

      So this flawed science has been available since 2007, and many real doctors in the real world have been able to point to it and say the drugs are safe.
      Surely you cannot just disregard the outcomes of flawed science?

      I have no idea what to make of you, Mr. Smith: are you a champion for the absolutely flawless science of Utopia, or just spreading intellectual ‘doubt’ to prevent the truth?

      Report comment

      • I don’t think it’s in any way “utopian” to say that we should look at the data with an open mind and not twist it to make it support our pre-existing beliefs. Nor do I in any way understand your accusation that by opposing misrepresenting the facts I am attempting to “prevent the truth”. Are you arguing that such misrepresentation makes the truth more apparent? Sounds pretty Orwellian to me.

        The best and most widely cited evidence tying antidepressants to increased suicide rates comes from an FDA meta-analysis of data from manufacturer clinical trials. That data forms the basis of the current warning on the FDA label for all antidepressants, while the small and underpowered study criticized in the Healy paper barely shows up on the radar as part of the evidence base. Attacking it does not really accomplish anything but allow the authors to publish a paper with a highly provocative title in a minor journal, hoping that it will nonetheless be picked up by blogs such as this one. It really doesn’t add to the scientific discussion in any way, but I guess they accomplished their goal.

        Report comment

        • Or we should fold that paper, make a ball out of it, and throw it straight into a bin. Badly designed experiments can’t be made better – they should be redesigned and repeated.

          Nonetheless, the discussion of the real-life impacts of publishing such rubbish stands. The risks of type I and type II errors are quite different.

          Report comment

        • I thought so too – that a cornerstone of any science was an open mind, letting the data tell the tale. But you must surely agree that is not the case concerning SSRIs?
          The scientific trials themselves have been done with the sole purpose of supporting the manufacturers’ “pre-existing belief/theory”.
          As I said in the first line of my comment, I share your belief that old and flawed science isn’t the way forward.
          BUT. Newer science isn’t being produced, which leaves us having to debunk the old.
          To me it sounds like you want ‘us’, those who have seen the negative effects of SSRIs, to behave civilly and according to each letter of the law, while the producers, Big Pharma, get away with repeated science fraud. And if suicide was a suspected side effect of SSRIs back in 1988–1992, doesn’t it make you feel somewhat ‘strange’ to be discussing its existence or not now, in 2015? Perhaps you can appreciate the “job done” by others who have shared their “doubts” throughout 25 years, allowing us to be at ‘Square 1’ and still arguing whether SSRIs cause suicide or not. Yes, your doubt helps us look forward to another quarter century of this discussion.

          It’s not a rash we are talking about; we are talking about several hundred thousand human deaths.

          Report comment

  3. A lie is a lie, and here this lie proved to be a highly lucrative one: it implied these drugs are not only safe but helpful, and it was then quoted worldwide as one of THE most reliable pieces of research.

    I agree with 1 Boring Old Man: “Among the bizarre misrepresentations in Clinical Trials of psychiatric drugs during the Age of the Decepticons, this one may take the grand prize.”

    A shameful piece of damaging ‘research’.

    Report comment

    • Did anyone on this thread, including Mr. Wipond, actually read the paper criticized by Healy, or are these attacks on the authors’ integrity based solely on Healy’s second-hand account of what it says?

      Per Wipond in the article immediately above, “The NIMH-led study authors stated, though, that there were no significant differences in the rates of these suicidal events for youth taking either the antidepressant fluoxetine (Prozac) or placebo”

      The article he is discussing, however, explicitly states “During the 36-week TADS treatment, 9.8% of the patients randomized to active treatment presented with a suicidal event. The rate was higher (p<0.05) in the fluoxetine (14.7%) than in the psychotherapy group (6.3%),"

      Furthermore, Healy's criticism that the figure in the article misrepresents the fact that most of those who had suicidal events were on Prozac at the time is similarly unfounded. In the paper he criticizes, this figure is immediately followed in the text by the sentence "Some patients who had been randomized to CBT (N=2) or PBO (N=9) and had a suicidal event were in fact on SSRI medication".

      The abstract of a previous paper by the same authors about the same study states "Suicidal events were more common in patients receiving fluoxetine therapy (14.7%) than combination therapy (8.4%) or CBT (6.3%)."

      So please, in the future, it might be a good idea to actually read a paper before attacking its authors' character, suggesting they have done something shameful, or claiming they belong in prison. The shameful behavior is not on the part of the authors, but of those who attack their character without actually reading what they wrote.

      Report comment

      • Mr. Smith: The critical study did not suggest that the information needed to determine the correct numbers was unavailable in the original studies; indeed, the presence of the correct information is what allowed its authors to do their own supplementary analysis. However, the authors noted that, in the original studies, the placebo-to-drug comparison of suicidal events for the entire duration of the trial was never directly and clearly presented alongside relevant analyses of its overall significance.

        Rob Wipond

        Report comment