For the past several years, whenever a critical essay has come along examining the work of Irving Kirsch and his colleagues, I have made an effort to examine the validity of the proposed arguments. Kirsch and his colleagues used the Freedom of Information Act to gain access to the unpublished trials of antidepressants and then pooled the clinical trial data – both published and unpublished – and analyzed it as a single data set. It is common for pharmaceutical companies to publish only those studies that find their products effective, and to withhold the negative studies, making it difficult to reach accurate conclusions by examining only the published data. Kirsch and his colleagues have reported that in the company-sponsored clinical trials, the SSRIs only marginally outperform placebo, with the difference being statistically significant but not clinically significant.
This past February, 60 Minutes devoted a segment to these studies (Kirsch, Deacon et al. 2008). Following the 60 Minutes piece, the internet lit up with chit-chat and blog postings about the Kirsch studies. One blog posting in particular caught my attention because it seemed to be more of a personal attack than a discussion of the actual evidence. The blog post, titled Overstating the Placebo Effect, was written by Harold Koplewicz, the founder of the Child Mind Institute.
The blog posting did not seem overly significant at the time it was published, but following the recent headlines about GSK’s three billion dollar fine and the role that Study 329 played in that fine, I think his essay can be seen in a different light. The DOJ’s complaint stated that Study 329 misstated facts and made false claims about Paxil’s efficacy. Dr. Koplewicz is on the byline of Study 329.
The goal of the following discussion is not to revisit the entire debate about the Kirsch studies but to examine Koplewicz’s statements about the clinical trial process and the efficacy of antidepressants in light of the DOJ’s comments on Study 329.
He starts off with a shot right across the bow: “The argument put forth by Dr. Kirsch and others like him is an ideological one, with no basis in science.”
This is a strong statement about Kirsch, but is it really accurate? No basis in science? Consider that: Kirsch examined the clinical trial literature which the companies themselves performed and submitted to the FDA; Kirsch’s paper in PLoS Medicine is one of the journal’s most downloaded papers; even the FDA and MHRA representatives largely agreed with him on the 60 Minutes piece; the pharmaceutical companies largely agree with him; and even his critics stated on the same show that they found his work significant. It is certainly acceptable to disagree about the FDA data, and some have, but the fact remains that Kirsch has published several important scientific papers in the field. Can someone really say with a straight face that an examination of the FDA database of clinical trials is not a scientific enterprise?
Koplewicz devotes little space to the heart of the argument but he does touch on three problems that he sees with the Kirsch study:
1) the clinical trials didn’t use a strong enough dose,
2) they only included people with moderate depression,
3) the trials themselves were not conducted very well.
In the comment section of his essay, one of the responders addresses all of these points: “Interesting blog post. A few comments: (a) there is no dose-response relationship with antidepressant efficacy in the FDA trials – higher doses do not work better, (b) almost all FDA trials were conducted with severely depressed patients, which makes it all the more telling that half of these trials failed to demonstrate a significant antidepressant effect, (c) if the FDA trials were poorly conducted as you suggest they are inadequate to demonstrate efficacy and the drugs should not have been approved.” The comment goes right to the heart of his essay and leaves Koplewicz’s argument on shaky ground.
But even more to the point, these criticisms are nothing new. In 2002, following his Prevention and Treatment paper, Kirsch addressed almost all of these comments, and others, in a formal response. The first line of his response was, “We are very heartened by the thoughtful responses to our article. Unlike some of the responses to a previous meta-analysis of antidepressant drug effects, there is now unanimous agreement among commentators that the mean difference between response to antidepressant drugs and response to inert placebo is very small.” One of the responders to his 2002 paper, Steven Hollon, stated, “Many have long been unimpressed by the magnitude of the differences observed between treatments and controls—what some of our colleagues refer to as ‘the dirty little secret.’”
Keep in mind that Kirsch only analyzed pharmaceutical company trials, thus many of the criticisms directed at Kirsch are really directed at the pharmaceutical companies who conducted the trials and the FDA who approved them. If the trials were faulty, is it appropriate to blame Kirsch? Peter Kramer also took the tack of blaming the trials in his New York Times essay. It would be one thing if Kramer and Koplewicz were consistent in finding fault with the trials, but there was little condemnation from them back in the 1980s and 1990s (at least that I am aware of) when the trials were being used to support FDA approval – the trials were the talk of the town back then. But, apparently, now that a more detailed analysis of the trials has been published showing their flaws, we are supposed to ignore the trials?
Koplewicz also says, “The conclusions look damning, but in this case appearances are deceiving. To understand why, we need to talk about placebos – and depression.” He then devotes two paragraphs to the placebo effect but never explains why an understanding of the placebo effect should overturn Kirsch’s results. If anything, he seems to be confirming the importance of conducting randomized trials to tease out the placebo effect. He even points out that, “people diagnosed with the disorder may get better on their own,” which is just one more reason to determine the true drug effect.
The problem with being so dismissive of the FDA data is that even within mainstream psychiatry it is now fairly well acknowledged that the difference between placebo and antidepressant drug effect is minimal. The current state of the SSRI debate amongst Prozac’s proponents is as follows: At one end of the spectrum, some, such as Michael Thase, a clinical trial researcher and medication proponent, say that for every ten people taking an antidepressant the drug helps one person. At the other end of the spectrum there are those who argue that it helps three people. The debate comes down to the significance of these low efficacy numbers. Or put another way: Are SSRIs effective for 10% or 33% of the patients who receive them?
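The “one in ten” versus “one in three” framing corresponds to what trialists call the number needed to treat (NNT): the reciprocal of the absolute difference between drug and placebo response rates. As a rough sketch only – the response rates below are hypothetical round numbers chosen to illustrate the arithmetic, not figures from any specific trial:

```python
def nnt(drug_response_rate, placebo_response_rate):
    """Number needed to treat: how many patients must receive the drug
    for one additional patient to benefit beyond placebo."""
    absolute_risk_difference = drug_response_rate - placebo_response_rate
    return 1.0 / absolute_risk_difference

# Hypothetical illustration of the two ends of the debate:
# a 10-point drug-placebo gap means the drug helps ~1 in 10 ...
print(round(nnt(0.50, 0.40), 1))  # 10.0
# ... while a 33-point gap would mean it helps ~1 in 3.
print(round(nnt(0.63, 0.30), 1))  # 3.0
```

The point of the arithmetic is that both camps are arguing over the size of the drug-placebo gap, not over whether patients on the drug improve in absolute terms – many do, but so do many on placebo.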
While providing no evidence in support of his statement that Kirsch is an ideologue, and little substantive evidence to counteract the idea that there is a significant clinical difference between medication and placebo, he does cite his personal observations. He assures his readers that he has seen the medications work: “As a child and adolescent psychiatrist, I have also seen the lives of many children turned around by psychotropic medications. I have seen it help kids who were so acutely anxious they were afraid to speak, and I have seen it help teenagers who were so depressed they were ready to take their own lives.”
Anecdotes are fine to note, and they might be of some interest to some people at some point in time, but at this point in time the debate is about the evidence, not about one person’s observations. This is why clinical trials are conducted in the first place – to test these types of observations. To charge Kirsch with being non-scientific, and then in the same breath use a non-scientific argument to overturn the data from the trials, does not seem like a valid strategy. Kirsch is not making an argument based on anecdotes. I am not naïve enough to think that the debate about the efficacy of antidepressants is over, but citing anecdotes does not seem the way to go about overturning the FDA data.
Moreover, the issue of anecdotes opens a can of worms, and potentially leads the discussion in a direction that I am not sure he wants to tread. There have been children on SSRIs who have committed suicide, and some of these parents have testified at various FDA hearings about their children’s lack of suicidal ideation prior to starting an SSRI. In the face of these testimonials, the overall message from the child psychiatry profession has been that these are just anecdotes and that what matters is the clinical trial literature, and in the clinical trials no children committed suicide. If anecdotes are important for judging efficacy, then why aren’t they good for judging side effects? The error in logic seems obvious: You can’t use one set of anecdotes to support your thesis, and then dismiss the anecdotes that don’t support it.
As an aside, Kirsch is not the first person whom Dr. Koplewicz has chastised. In a September 2011 blog posting titled Why We Need Psychoactive Meds he took great exception to Dr. Marcia Angell’s article, The Illusions of Psychiatry, which was published in the New York Review of Books. His essay on Angell took the same arrogant and dismissive tone, but again provided little in the way of data to justify his claim that Angell is misguided. You can sense Dr. Koplewicz’s frustration with the media for covering Kirsch and Angell. But aren’t reporters supposed to write about science? For some of the millions of people taking these drugs they work, and that is good news. But what is wrong with an adult being told all the facts about a medication? Imagine a media outlet that decided to ignore a study showing that in the clinical trials responsible for FDA approval there was only a slight difference between the medication and placebo groups.
At one point Koplewicz states: “To evaluate efficacy as well as safety, the FDA started testing new medications in the 1960s. Unfortunately, there are many problems with how these clinical trials are conducted.” The irony of his comment is that a study he co-authored, and which was soon going to make headline news, provides the best citation in support of the idea that there are problems with the trials.
Overstating Efficacy: GSK’s Three Billion Dollar Fine and the Role of Study 329
Six months after his essay on Kirsch was posted, GSK was fined three billion dollars by the Department of Justice (DOJ). Not three million, but three billion. The fine involved several medications, such as Avandia, Wellbutrin, Advair, and Paxil. Regarding Paxil, the DOJ’s complaint mainly focused on Study 329, which examined the use of Paxil for pediatric depression. While the DOJ treats GSK as the sole author of Study 329, only two of the named authors were actually GSK employees. All of the other named authors, one of whom was Dr. Koplewicz, were affiliated with universities. In their complaint about Paxil and the role of Study 329 the DOJ did not mince words: “The United States argues that, among other things, GSK participated in preparing, publishing and distributing a misleading medical journal article that misreported that a clinical trial of Paxil demonstrated efficacy in the treatment of depression in patients under age 18, when the study failed to demonstrate efficacy.” They also note that: “GSK published an article that misstated Paxil’s efficacy and safety for children and adolescents.”
The DOJ report is just the final nail in the coffin for a paper that academic medicine was unable to put to rest. Study 329 was published in 2001, and it didn’t take long for critics to point out the problems with the study. Right after it was published, Jon Jureidini wrote a letter to the editor, and then followed up with several peer-reviewed papers about Study 329. Alison Bass, a health reporter for the Boston Globe, extensively documented problems with the study in her book Side Effects. For example, she pointed out that the study miscoded several suicidal teenagers as noncompliant when they were really suffering from suicidal ideation. Several years ago, the editors of the Lancet stated that “The story of research into SSRI use in childhood depression is one of confusion, manipulation and institutional failure.” According to the blogger Boring Old Man, “There is probably no single icon for the corruption of the modern psychiatric literature so paradigmatic as Glaxo-SmithKline’s Study 329.”
At one point in his essay Dr. Koplewicz says, “Some people are simply opposed to treating psychiatric disorders with medication.” But is it really accurate and fair to portray someone who went through the trouble of analyzing the FDA database as “simply” opposed to medication? There is also a flip side to his statement: Given that the DOJ has just stated that Study 329 overstated efficacy, one could say that there are some people who will insist that treating psychiatric disorders with medications is appropriate even when the clinical trial data show no benefit.
In his essay on the FDA trials he also brings up some issues with the clinical trial process which he believes support his viewpoint, but if anything, especially in light of 329, they seem to hurt his case. Koplewicz states: “Rather than taking the time to determine a correct clinical dose, it’s cheaper to do lots of studies and throw out the ones that don’t get results.” It is unclear exactly what he means by “throw out,” but it is hard to see how this idea of not reporting negative data is supposed to serve as a defense of the medications. Some people might take the term “throw out” to be a euphemism for “to not publish the results that put our drug in a negative light.” The DOJ apparently felt that this was how GSK was looking at it.
The DOJ report notes that besides Study 329, GSK conducted two other double-blind, placebo-controlled studies of Paxil for pediatric and adolescent depression: Study 377 and Study 701, and the results were never published. In the DOJ’s words: “Like Study 329, both studies failed to demonstrate any statistically significant difference in efficacy between Paxil and the placebo on any pre-specified primary or secondary endpoint.” The fact that negative studies are apparently “thrown out” is one reason Kirsch looked at the unpublished data. And these “thrown out” studies of Paxil in children were apparently a source of consternation for the DOJ.
There is absolutely nothing wrong with disagreeing with Irving Kirsch or Marcia Angell, but shouldn’t the disagreement be about the data? And it does seem a tad hypocritical for any of Study 329’s authors to criticize another study – especially when it comes to how they reported efficacy – given that they have not commented on their own involvement with Study 329. I am not aware of any of the authors addressing the DOJ’s statement that, “In publishing Study 329, GSK falsely claimed that it demonstrated Paxil’s efficacy in treating depression in patients under 18.” And the authors’ silence is not for want of media outlets, as many of them are regularly interviewed on the nightly news, and some even have their own blogs.
In 2010, at the same time the DOJ was investigating Study 329, The New York Times featured Dr. Koplewicz in a column titled “Ask a Psychiatrist.” The column’s introduction stated that he “will be responding to readers’ questions about the myths and stigma surrounding psychiatric disorders of children and adolescents.” Instead of the readers asking questions, shouldn’t the Times reporters have been the ones asking questions about Study 329? For instance, did the named authors simply sign their names to an already completed paper written by a company employee? Or what was their role in the intellectual framework of the paper, such as planning the study, analyzing the results, or writing it? These seem like legitimate questions about a paper that has become part of what the Justice Department has called the largest health care fraud settlement in US history.