Study Details and Findings
The authors of Study 329 began recruiting adolescents for a comparative study of Paxil, imipramine and placebo in 1994 and finished their investigations in 1997. They dropped a large number of their original cohort, so the randomisation of the study must be open to question. Late in 1998, SmithKline Beecham (henceforth GSK), the marketers of Paxil, acknowledged in an internal document that the study had shown that Paxil didn’t work for adolescents on the two primary and six secondary outcomes they had established at the start of the study. In a nutshell, Study 329 was negative for efficacy and positive for harm, contrary to the paper’s succinct, upbeat conclusion. Adjudging, of course, that this lack of benefit and presence of risks could not be communicated to an innocent public, the team’s task was now to see how bad news could be transformed into good. They decided that they would cherry-pick the few positives they might glean from their mass of data and publish these. This, however, required them to abandon nearly all of their original outcome measures and dredge up a few new ones, abandoning the symptoms in the Hamilton Rating Scale for Depression (HAM-D), which they had originally invoked and which is the normal scale used in such studies.
The article had first been rejected by JAMA, a very highly rated journal, but even though the editors had reservations about it, the amended jigsaw was published as a paper (July 2001) by Martin Keller and 21 others in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), which claimed to be the journal with the highest impact factor in child psychiatry. Undoubtedly the most controversial, and occasionally lambasted, psychotropic drug study ever, Study 329 concluded that paroxetine was a safe and effective medication for treating major depression in adolescents. This conclusion in a major journal hoodwinked myriad GPs and psychiatrists into a prescription fever which netted a fortune the following year for GSK. A small number of queries followed from the medical community, to which the authors responded, thus quelling those doubts. Further queries and criticisms were buried in an avalanche of positive marketing and promotion by vested interests. Biomedical psychiatry seemed to have won an easy and permanent victory. All protest then died down, and the study was routinely cited in the medical literature, providing doctors with assurances, and a good conscience, about the safety and usefulness of paroxetine. But this time, fortunately for us, three intrepid journalists, a few academics and one politician got a strong stench of rat, and followed where it eventually led. They all worked doggedly, slowly and patiently for over ten years: academic, institutional and Big Pharma cover was gradually blown, with the endgame now in sight this week. The dance of myriad veils, of bedazzling smoke and mirrors, should soon be over; and many big guns will be named and shamed.
Let us now go back to the beginning. The original hope was that Paxil would show either:
- A treatment response, defined as participants achieving either a Hamilton Rating Scale for Depression (HAM-D) score of 8 or less, OR a reduction of 50% or greater from their initial HAM-D score. The Paxil group showed no significant difference from the placebo group.
- A change in total HAM-D score from baseline to the end of the trial. Paxil demonstrated no statistically significant superiority over placebo.
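The response criterion above is a simple disjunction of two thresholds, and can be sketched as a small function. This is purely an illustration of the protocol's rule, not the study's actual analysis code, and the patient scores below are invented:

```python
def is_responder(baseline_hamd: int, endpoint_hamd: int) -> bool:
    """Protocol-defined 'treatment response': an endpoint HAM-D score
    of 8 or less, OR a drop of at least 50% from the baseline score."""
    remission = endpoint_hamd <= 8
    halved = endpoint_hamd <= 0.5 * baseline_hamd
    return remission or halved

# Hypothetical patients (illustrative scores, not trial data):
print(is_responder(24, 7))    # low endpoint score alone qualifies
print(is_responder(24, 12))   # exactly a 50% reduction qualifies
print(is_responder(24, 13))   # neither criterion met
```

Note that the disjunction makes the criterion fairly generous: a patient can "respond" by either route, which makes the failure to separate from placebo all the more striking.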
According to the Department of Justice, when statistical significance wasn’t achieved on any of the original secondary outcomes, the investigators defined an additional four secondary endpoints later in the study – but before the results were “unblinded”, that is, they defined the endpoints before knowing the results. I cannot see how this can be true, since it would only have made sense for them to fish around for other potential winning outcomes if they had discovered those for which Paxil did show a modest level of statistically significant advantage. (This was achieved on three of these later-defined endpoints – though the relevance of these to any improved quality of life for patients escapes me. And, anyway, they wandered far from the original outcomes, and look like the saddest of smokescreens.)
Yet, in spite of what their own data told them, the authors concluded that “the findings of this study provide evidence of the efficacy and safety of the SSRI, paroxetine, in the treatment of adolescent depression.”
That seems clear and absolute. But if we take a closer look we learn, for example, that somnolence affected 17.2% of the paroxetine group and 3.4% of the placebo group, and tremor affected 10.8% versus 2.3%!
When we arrive at the “Serious Adverse Effects” (SAEs) section, however, we find that the rate of these events was 11.8% in the paroxetine group versus 2.3% in the placebo group.
The paroxetine group’s serious adverse effects were:
- Emotional lability (including suicidal ideation or gesture) – 5 patients.
- Conduct problems or hostility – 2 patients.
- Symptoms suggestive of mania – 1 patient.
- Worsening depression – 2 patients.
- Headache upon discontinuation – 1 patient.
No patient in the placebo group required hospitalization, but seven in the paroxetine group did. This higher rate of both suicidal ideation and hospitalization for a drug likely to be prescribed to do the opposite should surely have been noted by reviewers and the editor of a major journal.
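The gulf between those SAE rates is not statistically trivial. Assuming group sizes of roughly 93 (paroxetine) and 87 (placebo) — an assumption on my part, inferred from the reported percentages, since the counts 11/93 and 2/87 reproduce 11.8% and 2.3% — a standard two-sided Fisher exact test on the 2×2 table gives a p-value well below the conventional 0.05 threshold. A sketch, not the trial's own analysis:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def p_table(x):
        # Probability of a table with x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# SAE vs no-SAE: 11 of 93 (11.8%) on paroxetine, 2 of 87 (2.3%) on placebo.
# Group sizes are assumed, back-calculated from the reported percentages.
p = fisher_exact_two_sided(11, 82, 2, 85)
print(f"two-sided p = {p:.4f}")
```

Under these assumed counts the imbalance in serious harms is itself statistically significant — which makes the paper's framing of the SAE data all the harder to excuse.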
The downplaying of the significantly higher rate of SAEs with paroxetine as against placebo was shameful, especially when we consider the potential damage to the developing brains and personalities of children. Let’s take a closer look at how exactly the authors went about their business.
Remarkably, indeed unbelievably, the study report tells us that of the 11 patients [who suffered serious adverse effects in the paroxetine group], ONLY THE HEADACHE of one patient was considered by the treating investigator to be related to paroxetine treatment. Then comes an even more mind-boggling statement: “Because these serious adverse effects were judged by the investigator to be related to treatment in only 4 patients (paroxetine, 1; imipramine, 2; placebo, 1) causality cannot be determined conclusively.” More sleight of hand, since a single cause for any illness can rarely be proven conclusively – even in cancer, as a number of commentators have pointed out. Very high risk factors aren’t conclusive or single causes.
What we are witnessing here is the devious elevation of an investigator’s opinion to the level of objective outcome data: in other words, the researchers cherry-picked the data to present only those findings that were most favourable to the drug company that paid for the study, SKB, relying mainly on the secondary outcomes added later!
The study’s claim that paroxetine was “generally well tolerated and effective” was thus buttressed by the selective reporting of, and emphasis given to, the 15% of outcomes that were positive, and selective neglect of the other efficacy and SAE findings.
The JAACAP paper has been defended on the grounds that people could easily read in the results table that the two outcomes described as primary elsewhere (but not in that table) were negative. But this is just another game of smoke and mirrors, designed to divert the attention of busy GPs and other professionals, acting within the druggy doxa of biomedical psychiatry, who are more likely to read only the abstract and the conclusion/outcomes section. Such people may, perhaps, be pardoned for giving but a cursory glance at the dense, overloaded tables of a clinical trial report. This, of course, made it all too easy for a lazy press and most biomedical experts to continue retransmitting the false impression that Study 329 found “significant efficacy on one of the two primary endpoints”.
Critical psychiatrists and researchers have been tracking for years the fraud, occlusion, spin and sleight of hand used in numerous psychiatric studies published in prestigious journals, not to mention the staggering sums vouchsafed by Big Pharma to what most of us assume are objective, independent scholars. Some years ago the editors of the well-nigh incorruptible NEJM found it almost impossible, for a major SSRI study, to get peer reviewers who had no financial connections to big Pharma!
It’s worth noting, too, that this shady study used the same peer-review process that is designed to prevent abuses by researchers and drug companies, and provide other professionals (and the public) with objective data. It is the same peer-review process that is used by the U.S. Food and Drug Administration (FDA) to approve medications as safe and effective. This is why we need to be increasingly sceptical about a process that seems to guarantee integrity and objectivity: so many academics are creaming vast sums from the drug companies without declaring an interest; and so many big names with huge conflicts of interest in the U.S., like Joe Biederman, sit on the editorial boards of the major journals, which are themselves dependent on the data furnished to them, and on information about study design and statistical methodology. All of this should make us very wary of assuming that peer review in psychiatry has the same honesty and status as it has in other academic domains.
Who Wrote It?
As the internationally revered Irish psychopharmacologist David Healy, and others, have pointed out, the problem of ghostwriting in psychiatric research is a serious one. Several highly respected experts say that they have seen completed articles marked “Author: to be decided”, awaiting the agreement of a big name to sign up, for a fee. In this case, we cannot even be sure who is ultimately responsible for the study.
Circumstantial evidence, however, allied to the allegations made by Alison Bass and others, indicates that Study 329 was ghostwritten by Sally Laden, editorial director for Scientific Therapeutics Information, the company that prepared the manuscript on GSK’s behalf. Many claim that none of the “authors,” including Laden, had seen the data, except perhaps the GSK employee who was one of the official authors!
A leaked memo raises suspicions that it was indeed Laden who did the bulk of the writing. Referring to the first draft of the manuscript, a letter from Keller to Laden dated Feb. 11, 1999, begins: “Dear Sally, You did a superb job with this. … It is excellent. Enclosed are rather minor changes from me.”
Director of U.S. Media Relations Sarah Alspach denied the ghostwriting charges but acknowledged Laden’s contribution to the study, which was recognized with the line “editorial assistance was provided by Sally K. Laden” in the fine print of Study 329’s first page; one would expect such authorship to be fully and openly acknowledged, as it is in traditional publishing. Clinical Associate Professor of Community Health David Egilman said such work is unethical unless proper credit is given, noting that ghostwriting, though not an entirely uncommon practice in writing up clinical studies, is still considered unethical, especially when the ghostwriter has a stake in the findings.
History of the Study
The Keller paper was published in July 2001. A month previously, the Tobin trial in Cheyenne, Wyoming, had returned a verdict against GSK (formerly SKB) and Paxil in a multiple homicide–suicide case. The Tobin verdict, and the fact that GSK was then the biggest pharmaceutical company in the world and Paxil the biggest-selling antidepressant in the world, caught the attention of the BBC’s leading investigative programme, Panorama. In 50 years Panorama had never repeated a topic, but it made four programmes about Paxil–Seroxat:
- The Secrets of Seroxat – or The Perils of Paxil
- Emails from the Edge
- Taken on Trust
- The Secrets of the Drug Trial
The motor of these programmes was Shelley Jofre, who read the Keller paper within weeks of its publication and immediately smelt a rat. As a result, the BBC got a huge public response from 67,000 people offering their Seroxat experiences.
The Brown Daily Herald played its part, also, when it told its readers that an eminent professor from their own university, Keller, had “authored” a study whose findings on Paxil were disputed by some doctors and academics, and which may have concealed the drug’s implication in some teenage suicides.
On September 24, 2008, Chaz Firestone and Chaz Kelsh wrote the Herald’s lead story, which focussed on that intrepid senator, Charles Grassley, the scourge of academics taking Big Pharma’s shilling.
“Among the reasons Sen. Charles Grassley, R-Iowa, has targeted Professor of Psychiatry and Human Behavior Martin Keller in his scrutiny of conflicts of interest in clinical research is Keller’s authorship of a controversial study of the antidepressant Paxil in children. The study and its authors have been fiercely criticized in medical journals for allegedly misrepresenting data, suppressing information linking the drug to suicidal tendencies and reaching a conclusion unsupported by the relevant data. The Herald contacted Keller several times starting Sept. 10, but he said he was unavailable for comment before press time.”
To put it mildly, Keller’s financial relationships with drug companies do not inspire trust in his objectivity; yet he seems to have escaped lightly on this front, though Senator Grassley refused to let him off scot-free.
However, several members of the medical and legal communities (though too few) independently criticized his research on scientific grounds. Some articles in medical journals and letters to journal editors did call into question the integrity of the scientific practices leading to Study 329’s conclusion that Paxil was a safe and effective way to treat depression in adolescents.
The most percussive of these protests, perhaps, came from Adelaide University physicians Jon Jureidini and Ann Tonkin, who wrote a strongly worded letter to the editor of JAACAP expressing concern that the study’s conclusion was not supported by the data. Jureidini did say that, though he was confident the data in Study 329 were deliberately misrepresented, he could not definitively attribute the misrepresentation to Keller himself. Later, the pair addressed the issue in a BJP editorial.
Director of U.S. Media Relations Sarah Alspach denied the claim that Study 329 under-reported serious adverse effects, citing a 2002 clinical review of the study by the Food and Drug Administration, which “found no statistically significant signal for suicidality.” It did note, however, that the researchers’ coding of adverse reactions was “potentially confusing.” That is to put it very mildly, as creative recoding was a major contributor to GSK’s spin on the data. The FDA had, in effect, given the study a qualified blessing; yet two short years later it issued a suicide warning for all selective serotonin reuptake inhibitors (SSRIs), the class of antidepressant to which Paxil belongs, after the British Medicines and Healthcare Products Regulatory Agency had issued a similar warning for paroxetine in the United Kingdom earlier that year.
Several judges, also, played pivotal roles in the unravelling story, resulting in a number of successful lawsuits against GSK: one taken by the State of New York, and another by the U.S. government, which resulted in the biggest fine in corporate history – $3 billion. The New York Attorney General, Eliot Spitzer, sued GSK in June 2004, charging the company with consumer fraud and alleging that it had deliberately withheld data showing that Paxil increased suicidal tendencies in adolescents. Spitzer did not investigate Brown’s or Keller’s involvement. GSK settled the suit for $2.5 million two months later, but admitted no wrongdoing. As part of the settlement, GSK agreed to release all results from future clinical trials, whether negative or positive.
There are two reasons why this study is being re-examined now. One is that another study (2011), published in the International Journal of Risk and Safety in Medicine, vol. 23, no. 3, examined the internal documents, full dataset and drafts that were made public thanks to a lawsuit filed against the makers of Paxil. Its authors found no significant difference in efficacy between paroxetine and placebo on the two primary outcomes and six secondary outcomes in the original protocol. At least 19 additional outcomes were tested, but Study 329 was able to show positive results on only 15% (4 of 27) of them.
The second strong impetus was provided by an appeal from Peter Doshi and colleagues. Addressing the fact that unpublished, cooked and misreported studies make it difficult to determine the true safety and efficacy of any treatment, Doshi et al. in June 2013 published “Restoring invisible and abandoned trials: a call for people to publish the findings” in the BMJ (British Medical Journal). They referred to this proposed protocol as RIAT; its aim was to get colleagues, sponsors and investigators of abandoned studies to publish (or republish), and if sponsors failed to respond, it proposed a system for independent publishing. Two years earlier, Marcus and Oransky had published a fascinating article in Nature (Dec. 12, 2011) in which they made very concrete and detailed proposals for an ongoing review and critique of research papers, to ensure that the long-term credibility of the scientific record be maintained.
As a result of the Doshi team’s initiative, a group of researchers undertook to re-analyze the original data and publish their new analysis under the RIAT protocol.
Finally, then, the public will soon see, for once, just how low some pharmaceutical companies will go in order to publish positive results about their drugs: we are waiting with bated breath to see the final link in this complex chain, which should appear any day now in the BMJ, following two years’ work and seven drafts. This article is likely to throw a bomb into the consulting rooms of biomedical psychiatrists, and undermine many of their claims, research and practices. We hope, too, that it will clear up some remaining anomalies, and apportion blame and responsibility beyond reasonable doubt. It seems to be the first time ever that the public will be enabled to compare two completely different assessments of the same data.
Whose Hands are Dirty?
(In Descending Order…)
The pharma company and their associates. A major academic. The journal’s editor. Brown, a major U.S. university. The FDA.
Each allegation of scientific misconduct in Study 329 – the claimed ghostwriting, the switching of outcome measures and the miscoding of suicidal patients – has led scientists to call for Brown to lead a public investigation into the issues, and to retract the study, but the authors and journal editor have refused to retract. Brown has remained resolutely and shamefully silent, unlike Harvard, which censured one of its most revered academic psychiatrists, Joseph Biederman, for very shady behaviour indeed.
“What I find most distressing is that there has been absolutely no attempt at Brown to even discuss these issues, much less have a forum or an investigation,” said Clinical Associate Professor of Medicine Roy Poses. “If Keller’s actions have been exemplary, and that’s what everyone at Brown thinks, then they should go public and refute the charges. If they can’t do that, the accusations are serious enough and they should address them.” Poses added that Brown’s lack of public discussion of the study, despite widespread interest, is potentially embarrassing for the University. “Someone (at Brown) should be talking about it, because the rest of the world is talking about it,” he said. Jureidini agreed that Brown should investigate Keller’s involvement. Study 329 “can only be interpreted as deliberately misrepresenting what was found,” he said.
One would assume that when any respectable institution learns that a member of its staff has engaged in research that gives grounds for suspicion, it should feel morally obliged to undertake a rigorous investigation, inviting the researcher to present evidence on his or her own behalf.
Baum Hedlund began to take lawsuits against GSK on behalf of the families of patients claiming to have been harmed by Paxil. One of their lawyers said that the University’s attitude toward his firm’s efforts had been “less than cooperative”, and that lawyers for the University had fought Baum Hedlund “every inch of the way”.
University officials would not release memos about the coding that Keller sent to the University’s Institutional Review Board, which could serve to confirm or refute Alison Bass’s claims in her book. Some reports said the University claimed it no longer had them. The plot thickens. The University “can’t discuss particular cases of possible claims of wrongdoing and what we do about them,” said Provost David Kertzer, thus closing down the possibility of a public inquiry.
Unfortunately for the defence, however, up springs Donna Howard, an assistant administrator under Keller in the mid-1990s, who said that she had personally faxed those memos to the IRB. “There was two or three (instances) where there was severe adverse reaction that brought the child into contact either with the police or a hospital, and the children that were involved were eventually dropped from the study and coded as non-compliant,” Howard told The Herald. She said the mood in her office at their hospital would turn sour any time a patient experienced a serious adverse effect. From these statements we can reasonably infer that there were far more than three SAEs.
What are the Charges Leveled Against Those Involved?
Lies, spin, misrepresentation; creative recoding of adverse reactions; rejigging of original primary and secondary outcome measures; fraudulent marketing.
Who First Smelt a Rat?
And who kept up the hunt? Shamefully, just one politician; a tiny number of scientists, journalists, academics and legal eagles who had to move patiently into sleuth mode to catch a very slippery serpent.
Profile, History and Track Record of the Drug in Question
The typical side-effects and serious adverse effects documented in the 14 major studies on Paxil carried out since 2010 include: birth defects in new-born babies; suicidal behaviour; obesity; spontaneous abortion; violence; heart defects; injurious falls in the elderly, even at low doses; and developmental problems in the foetus (at 6 months) and in neonates (at 19 months). Let me give two typical, and very disquieting, examples of this research.
The first is from January 5th, 2010, when a study published in Psychological Medicine investigated the possible adverse effects of the use of antidepressant medication during pregnancy. Its authors reviewed 14,821 women from the Swedish Medical Birth Register, finding an association between antidepressant treatment and many pregnancy complications, notably after tricyclic antidepressant use. An association between use of paroxetine (Paxil) and congenital heart defects and urinary tract defects was also found. The authors concluded that women using antidepressants during pregnancy, and their new-born babies, experience an increase in health problems. (Source: M. Reis and B. Kallen, “Delivery outcome after maternal use of antidepressant drugs in pregnancy: an update using Swedish data,” Psychological Medicine, 1-11, January 5, 2010.)
The second example is from Denmark, on June 23, 2010. A study in BioMed Central showed how, since the prescribing of psychotropic drugs in infants is rapidly increasing, regulatory authorities have attempted to curb the use of these drugs by issuing various warnings about the risks associated with the consumption of such products in childhood. The research team analyzed data submitted to a national adverse drug reactions (ADR) database to categorize ADRs reported for psychiatric drugs (including reports for Paxil) in the Danish paediatric population. They found that almost 20% of psychotropic ADRs were reported for children from birth up to 2 years of age and 50% of ADRs were reported in adolescents. The authors concluded that, “the high number of serious ADRs reported for psychotropic medicines in the pediatric population should be a concern for health care professionals and physicians. Considering the higher number of birth defects being reported greater care has to be given while prescribing these drugs for pregnant women.” (Source: Lise Aagaard and Ebba H. Hansen, “Adverse drug reactions from psychotropic medicines in the paediatric population: analysis of reports to the Danish Medicines Agency over a decade,” BioMed Central Ltd., Vol. 3, No. 176, June 23, 2010.)
Why is This so Important?
Because for the first time the public will be taken behind the scenes by independent scientists who, blow by blow, with forensic precision, will demonstrate just what passes for science in psychiatric research; and just how little big pharma and biomedical psychiatrists care for the future and health of our innocent children.
This revision will surely dismay decent, overworked GPs and psychiatrists who have been relying on the honesty, independence and objectivity of researchers and respectable journals for their justification in writing myriad millions of prescriptions for antidepressants, (in good faith!?). (Many were, and still are, conned into believing another quasi-scientific hypothesis/theory that has been discredited since the early 90s – the chemical imbalance theory of depression, something that most laypeople still take as gospel! Saying that depression is due to a lack of serotonin shows not only culpable ignorance about the multi-factorial aetiology of mental distress, but is, also, the equivalent of saying that headaches are due to a paracetamol deficiency.)
Critical psychiatry, however, will be delighted to see a ruthless, rigorous exposure by heavyweight independent scientists of a case that puts into question the very meaningfulness of RCTs and of the process of peer review; a case that is typical of numerous other cover-ups which critical psychiatrists have been tracking for years.
The restoration of Study 329 should cause guilt, disarray and a serious examination of conscience among biomedical psychiatrists, but will probably provoke only the usual outraged, defensive backlash.
The roar of a dying bull, perhaps?