The New York Times’ Defense of Antidepressants
Today, the New York Times published an op-ed essay by Peter Kramer titled “In Defense of Antidepressants” on the front page of its Sunday Review section.
In his essay, Dr. Kramer suggests that he took up his pen in response to recent “debunkings” of the drugs. In particular, he noted the “especially high-profile debunking” that occurred last month in the New York Review of Books, when “Marcia Angell, former editor in chief of the New England Journal of Medicine, favorably entertained the premise that ‘psychoactive drugs are useless.’ ” My book Anatomy of an Epidemic was one of the three books reviewed by Dr. Angell, and as I wrote in Anatomy, what our society desperately needs is an honest discussion about what science is telling us about the merits of psychiatric medications. It seems worthwhile, then, to look at Dr. Kramer’s essay in that light.
Here is the question that we need to ask ourselves: Does the essay further public understanding of what science is telling us about the merits of antidepressants? Or does it rely on a misrepresentation of the science in order to protect the image of the drugs?
In his essay, Dr. Kramer writes specifically about research conducted by Irving Kirsch, a psychologist at the University of Hull in the United Kingdom, who detailed his findings in his book The Emperor’s New Drugs (which was also reviewed by Dr. Angell). He also writes about a study by Robert DeRubeis, a psychologist at the University of Pennsylvania, which was published in JAMA in 2010.
First, Kirsch’s work and Dr. Kramer’s review of it.
The Emperor’s New Drugs
In his research, Kirsch analyzed the results of industry-funded trials submitted to the Food and Drug Administration for four antidepressants: Prozac, Effexor, Serzone, and Paxil. As Kirsch noted, these trials — with one exception — were conducted in patients who, at study entry, were severely depressed. In 34 of the 35 trials Kirsch reviewed, the mean baseline score for the patients was 23 or greater on the Hamilton Depression Rating Scale (HDRS), which is a score characteristic of “very severe depression.”
One reason that pharmaceutical companies seek to enroll people who are very depressed into their clinical trials is that they know it is in this patient group that their drugs are most likely to show a benefit over placebo. Once the FDA has approved their drugs, the pharmaceutical companies can then market them to people with mild depression, regardless of whether the medications are effective in that population. In most of the industry-funded trials of SSRIs, patients had to have a baseline score of at least 20 on the HDRS, which meant that those with mild to moderate depression were explicitly excluded.
In his review of the FDA data for the four drugs, Kirsch found that symptoms in the medicated patients dropped 9.6 points on the HDRS, versus 7.8 points for the placebo group. This was a difference of only 1.8 points, and the National Institute for Clinical Excellence in Britain had previously determined that a three-point drug-placebo difference was needed on the Hamilton scale to demonstrate a “clinically significant benefit.” Kirsch found that it was only in the very severely depressed patients — basically those with a baseline HDRS score over 28 — that the drugs provided a clinically significant benefit.
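The arithmetic behind that comparison is simple enough to check directly. Here is a minimal sketch (my own illustration; variable names are mine, figures as reported above) showing that the drug-placebo gap in Kirsch’s data falls well short of NICE’s three-point bar:

```python
# Mean HDRS symptom reductions from Kirsch's analysis of the FDA trials
drug_improvement = 9.6     # points, medicated patients
placebo_improvement = 7.8  # points, placebo patients

# NICE's threshold for a "clinically significant benefit" on the Hamilton scale
NICE_THRESHOLD = 3.0

difference = drug_improvement - placebo_improvement
print(f"Drug-placebo difference: {difference:.1f} points")          # 1.8 points
print(f"Clinically significant per NICE: {difference >= NICE_THRESHOLD}")  # False
```

A 1.8-point gap is well under the three points NICE deemed necessary, which is the heart of Kirsch’s finding.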
On page 31 of his book, Kirsch writes: “In examining baseline depression scores (that is, measures of how depressed the patients were before the clinical trial began), the first thing we noticed was that all but one of the trials had been conducted with patients whose scores put them in the ‘very severe’ category of depression . . . in other words, our findings of a clinically insignificant difference between drug and placebo was based primarily on data from those patients who are the most severely depressed according to the APA and NICE classification scheme.”
So how does Dr. Kramer, in his essay, “defend” antidepressants in the light of Kirsch’s report? Let’s go over this point-by-point.
First, he writes that Kirsch “found that while the drugs outperformed the placebos for mild and moderate depression, the benefits were small.” This, of course, is not what Kirsch found at all. The studies didn’t involve patients with mild to moderate depression (except for the one study). What Kirsch found was that in the FDA trials, the antidepressants didn’t outperform placebo, in a clinically meaningful way, for patients with severe depression.
That, of course, is a finding that would cause readers to seriously wonder about the merits of the drugs. But rather than write about Kirsch’s actual findings, Dr. Kramer crafted a sentence telling of how the drugs provide a small benefit even to patients with mild to moderate depression. In doing so, he falsely reassures readers that antidepressants benefit the larger universe of patients who take these drugs. And the implication is that the benefit must be quite marked for the severely depressed.
Having misrepresented Kirsch’s findings, Dr. Kramer then writes that “the problem with the Kirsch analysis — and none of the major press reports considered this shortcoming — is that the FDA material is ill suited to answer questions about mild depression.” The reason, Dr. Kramer explains, is that “companies rushing to get medications to market have had an incentive to run quick, sloppy trials,” and in their haste, they “often [enroll] subjects who don’t really have depression.” It is these non-depressed patients who then become counted in the trial results as placebo responders, because, Dr. Kramer writes, “no surprise — weeks down the road they are not depressed.”
I have to confess that this paragraph took my breath away. Dr. Kramer makes it seem that Kirsch’s review focuses on mild to moderate depression (it doesn’t); then he explains that the reason Kirsch found only a slight drug benefit in the FDA trials is that the drug companies enrolled patients who weren’t really depressed at all (when in fact the study criteria required patients to be severely ill); and finally he concludes that when those non-depressed patients end up in the placebo arm of a study, they show up as improved and thus as placebo responders. The “improvement” of the placebo group, Dr. Kramer writes, “may have nothing to do with faith in the dummy pills; it is an artifact of the recruitment process.”
So, readers of the New York Times piece can only conclude this: The industry-funded trials used for FDA approval were in large part conducted in patients with mild depression, or in patients who weren’t depressed at all, and that is why the drugs only slightly beat placebo. The results would have been markedly different in patients who were really depressed. Plus, even in these flawed trials, antidepressants produced a small benefit in the mild-to-moderate group.
Placebo Washouts and Biased Trial Designs
Now let’s go to Dr. Kramer’s analysis of the study by Robert DeRubeis and his collaborators.
As might be expected, the drug companies in fact design their trials in a manner intended to suppress the placebo response rate. This is done through what is known as a placebo washout period, which may last from a few days to two weeks. All patients enrolled into the study — who may first be taken off any antidepressant they are currently on — are given a placebo in single-blind fashion (the investigators know it is a placebo; the patients do not). Those who get better on placebo during this washout phase are then excluded from the study. Only those who don’t respond to placebo are randomized into the trial. As such, trials with this design might be better described as “drug versus initial non-responders to placebo,” and of course this is a design intended to reduce the number of placebo responders in the final results.
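To see why a washout phase depresses the measured placebo response, consider a toy simulation. This is purely my own illustration, not drawn from any of the studies discussed here: each hypothetical patient has an individual propensity to improve on placebo, and screening out those who improve during the washout leaves a trial population skewed toward placebo non-responders.

```python
import random

random.seed(42)

def placebo_response_rate(n=100_000, washout=True):
    """Fraction of the randomized placebo arm that improves, in a toy
    model where each patient improves in any given phase with a fixed
    individual propensity drawn uniformly from [0, 1]."""
    responders = 0
    enrolled = 0
    for _ in range(n):
        propensity = random.random()
        if washout and random.random() < propensity:
            continue  # improved during the single-blind washout: excluded
        enrolled += 1
        if random.random() < propensity:
            responders += 1
    return responders / enrolled

print("No washout:  ", placebo_response_rate(washout=False))  # theory: 1/2
print("With washout:", placebo_response_rate(washout=True))   # theory: 1/3
```

In this toy model the washout cuts the placebo arm’s response rate from about one half to about one third, purely by pre-screening; the drug-placebo gap widens without the drug doing anything differently.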
In his investigation, DeRubeis searched the published literature for trials of patients with a broad range of symptom severity (and thus not just severely ill patients), and also for trials that didn’t use a placebo-washout phase to suppress the placebo response. He found six studies that met those criteria, and analyzed the collective results. Here is what he and his collaborators concluded: “True drug effects — an advantage of antidepressant medication over placebo — were nonexistent to negligible among depressed patients with mild, moderate, and even severe baseline symptoms, whereas they were large for patients with very severe symptoms.”
So how does Dr. Kramer “defend antidepressants” in light of this study? Again, let’s go point by point.
First, Dr. Kramer launches what might best be described as an ad hominem attack. He states that critics have questioned “aspects of DeRubeis’s math,” which is a subtle suggestion that DeRubeis fudged his figures to get the results he wanted. But Dr. Kramer doesn’t provide any information about who has actually raised such criticism, nor does he provide any evidence that there is a problem with DeRubeis’s math skills.
Second, Dr. Kramer writes that DeRubeis concluded that “medications looked best for very severe depression and had only slight benefits for mild depression.” As was the case with his review of Kirsch’s work, Dr. Kramer here isn’t accurately summing up DeRubeis’s findings. DeRubeis found “true drug effects were nonexistent to negligible among depressed patients with mild, moderate, and even severe baseline symptoms.” Dr. Kramer’s sentence instead tells of a finding that drugs help all patients along a spectrum — slight benefit for mild depression, marked benefit for more severe forms.
Third, Dr. Kramer writes that DeRubeis analyzed studies that “intentionally maximized placebo effects.” Here, Dr. Kramer is turning the biased design of the industry-funded trials, which employed a placebo washout to suppress the placebo effect, into an example of good design, and he is asserting that the six studies that didn’t employ a placebo washout were, in essence, biased against the antidepressants.
Together, Kirsch’s review of the FDA data and DeRubeis’s meta-analysis of studies published in medical journals tell a similar story. In clinical studies, antidepressants regularly fail to provide a clinically significant benefit over placebo for patients with mild, moderate, and even severe depression. But the drugs do provide a significant benefit for patients who are very severely ill. These findings arise from exhaustive reviews of the research, both published and unpublished, and thus can be seen as an in-depth look at what science has to say about the short-term efficacy of antidepressants.
But readers of “In Defense of Antidepressants” learned nothing of that. Instead, Dr. Kramer misrepresented their work, and then, having done so, dismissed its relevance in this cavalier way: “In the end, the much heralded overview analyses look to be editorials with numbers attached.”
Turning a Blind Eye to Long-term Outcomes
Dr. Angell’s comment that psychiatric drugs might be “worse than useless” was in reference to Anatomy of an Epidemic, and to my review, in that book, of the long-term outcomes literature for antidepressants and other psychiatric drugs. Long-term outcomes may be very different from the findings of short-term studies, and thus if the profession wants to “defend” its use of antidepressants, it needs to do more than show that the drugs are better than placebo in six-week trials. The profession needs to show that the drugs improve long-term outcomes, and that they do so in “real-world” patients.
There are two notable studies that Dr. Kramer could have reviewed to shed light on this question.
In 2004, John Rush, a prominent psychiatrist at Southwestern Medical Center in Dallas, observed that industry-funded trials of antidepressants were conducted in groups of patients who weren’t representative of larger patient populations, because study criteria regularly excluded patients with comorbidities. In addition, the industry-funded trials were short term, and together these two factors left a notable deficiency in the evidence base. “Longer-term clinical outcomes of representative outpatients with nonpsychotic major depressive disorder treated in daily practice in either the private or public sectors are yet to be well defined,” Rush wrote.
To remedy this deficiency, Rush and his colleagues conducted a study of antidepressants in “real-world” patients, and followed them for a year. During this period, they provided their patients with a wealth of emotional and clinical support “specifically designed to maximize clinical outcomes.” This was the best care that modern psychiatry could provide.
Here were their real-world results: Only 26% of the patients in their study even responded to the antidepressant (meaning that their symptoms decreased at least 50% on a rating scale), and only about half of those who responded stayed better for any length of time. Most startling of all, only six percent of the patients saw their depression fully remit and stay away during the yearlong trial. These “findings reveal remarkably low response and remission rates,” Rush said.
Dr. Kramer might also have discussed the findings from the STAR*D trial, funded by the National Institute of Mental Health. This was the “largest antidepressant trial” ever conducted, and its one-year results are now known. Only 108 of the 4,041 patients who entered the trial remitted and then stayed well and in the trial throughout the follow-up period. The remaining patients, 97% of the total, either failed to remit, relapsed, or dropped out of the trial.
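The 97% figure follows directly from the trial’s raw counts. A quick check (my own sketch; figures as reported above):

```python
entered = 4041     # patients who entered the STAR*D trial
stayed_well = 108  # remitted, stayed well, and remained in the trial

stayed_well_rate = stayed_well / entered
print(f"Remitted and stayed well: {stayed_well_rate:.1%}")                         # 2.7%
print(f"Failed to remit, relapsed, or dropped out: {1 - stayed_well_rate:.1%}")    # 97.3%
```

That is, barely one patient in forty achieved a durable, documented remission over the year of follow-up.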
But there was no discussion of these longer-term results in Dr. Kramer’s op-ed, which became the most-emailed New York Times article on Sunday. As a result, the Internet buzzed with a prominent story from arguably the leading newspaper in the United States, one that assured readers that all is well in the land of antidepressants. These drugs “work — ordinarily well, on a par with other medications doctors prescribe,” Dr. Kramer wrote.
As I noted in Anatomy of an Epidemic, the real problem we have in this field of medicine is that academic psychiatry hasn’t been honest in what it tells the public about psychiatric medications. If the medications are to be used wisely, and in an evidence-based manner, we need to have an honest discussion about what science is telling us about the drugs. But on Sunday, in the essay “In Defense of Antidepressants,” the American public was treated to yet another dose of misinformation.
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.