In a recently published commentary in Psychiatric Times, Ronald Pies and Joseph Pierre renewed their “case for antipsychotics,” citing in particular two “placebo-controlled” studies that they said showed the drugs improved the quality of life of people diagnosed with schizophrenia. But before Pies and his co-author delved into that literature, they made this assertion: only clinicians, with their expertise in assessing the research literature, should be weighing in on the efficacy of psychiatric drugs. They wrote their commentary shortly after I had published “The Case Against Antipsychotics” on Mad in America, and it was clear they had me in their crosshairs.
They wrote:
“We do not believe that armchair analyses of the literature by non-clinicians will answer the risk/benefit question in a humane and judicious manner. On the contrary, we believe that working with psychotic patients, and appreciating their often profound suffering, is an essential part of the equation. Critics of psychiatry who have never spent time with patients and families coping with the ravages of schizophrenia simply do not grasp the human tragedy of this illness. These critics also miss the deep-seated satisfaction that comes from seeing severely impaired patients achieve remission, and even recovery—in which antipsychotic medication usually plays an important role.
As clinicians with many years of experience in treating patients suffering with schizophrenia, our views on antipsychotic medication are shaped not only by our understanding of the scientific literature, but also by our personal care of many hundreds of patients, over several decades.”
Their message here could perhaps be summarized more succinctly in this way: Whitaker, butt out. Then, in a comment Pies posted beneath his own article, he went a step further in his criticism.
“I am tempted to say that if there were such a thing as ‘journalistic malpractice,’ these critics would be up before the Journalism Board for reprimand.”
Now I, in turn, am tempted to reply to Pies and Pierre that my reporting on this subject already went up before a journalism board for review. In 2010, the Investigative Reporters and Editors Association gave Anatomy of an Epidemic its book award for best investigative journalism of that year. But then I realized that this assertion of theirs needed a more careful response.
They are asserting that review of the “evidence base” for medications should be left up to the clinicians who prescribe the drugs. To a large extent, this is indeed how our society functions. It is the “thought leaders” of a medical specialty who create the narrative of science that governs societal thinking and clinical practices. They are the “experts” who review the medical literature and inform the public what science has revealed about an illness and the merits of drug treatments.
A journalist is expected to report the experts’ conclusions. Indeed, 18 years ago, when I first began writing about psychiatry in any depth, I never imagined that I would write a paper like “The Case Against Antipsychotics.” That does lie outside the usual journalistic task. However, I can easily trace the journalistic path that led me to this end. I would never have started down this road if academic psychiatrists—and the American Psychiatric Association—had fulfilled their public duty to be trustworthy reporters of their own research findings, and exhibited a desire to think critically about their own research. It was precisely because I began reporting about that failure, step by step, that I have ended up trespassing on their turf.
On Becoming a Medical Journalist
The role of a medical journalist can be confusing. I first began writing about medicine in 1989, when I went to work for the Albany Times Union as a science writer, and I immediately had the sense that my job had now changed.
Before that, I had worked as a general reporter for a small newspaper, the Plattsburgh Press Republican. In this position, you cover local politics and business, and you are expected to be skeptical of what you are told. You try to support your reporting with an examination of documents, and while you may rely on interviews to flesh out a story, you are aware that the people you quote have an agenda. They want to present themselves to the public in a favorable light.
But once I became the science writer at the Albany Times Union, I understood that my job was to take complicated science matters and make them both understandable and interesting to a lay public. I was covering the march of science, and the people I interviewed – doctors, physicists, etc. – stood on a societal pedestal. Perhaps they could use a little help in explaining their work to the public, but that was because they were used to thinking and speaking as scientists, which apparently was a rarefied language that we mere mortals had difficulty understanding.
My job, it seemed, was to serve as a translator of scientific findings. I could make the difficult science “clear” to the public. I have to confess, I was quite happy to be charged with this task. I have always loved science and the scientific mind, and being a science writer was like being paid to immerse yourself in that world. I thought, this is the best job ever.
At the same time, I hadn’t completely put my journalistic skepticism aside. In the spring of 1991, I wrote a series on laparoscopic surgery, which was being introduced to great fanfare at the time. But rather than write about that advancement, I became interested in reporting on how the introduction of the surgery had been botched, as many surgeons, eager to offer the latest technique, did not get proper training in the new method. This, I documented, had led to a number of patient deaths during routine gall bladder surgeries. That was my baptism into thinking of medicine as both a scientific and a commercial pursuit, with the latter having the potential to corrupt the former.
Over the next seven years, I studied and worked in various non-newspaper environments that, I believe, helped me become more skilled in assessing the merits of a published study, and more aware of the problems with the commercialization of medicine. I spent a year as a Knight Science Journalism Fellow at MIT, and later took a job as Director of Publications at Harvard Medical School.
In that position, I edited a weekly newsletter that reported on research by faculty associated with Harvard Medical School. The newsletter was read by other faculty (and in other halls of science), and thus the stories needed to capture the complexity of their research. Equally important, this was at a time that the idea of “evidence-based” medicine was being introduced. The rationale for this practice was that physicians could be deluded about the merits of their therapies, and thus they needed to have their care guided by science. I took that lesson to heart.
Next, I co-founded a publishing company called CenterWatch that covered the “business” of clinical trials of new drugs. From the outset, CenterWatch was an industry-friendly publication. We wrote about the opportunity for physicians to earn extra income by conducting clinical trials, and we presented clinical trials as an opportunity for patients to gain early access to promising new therapies. Our readers were from pharmaceutical companies, academic medical centers, contract research organizations, and financial institutions that covered the clinical trials industry. In addition to publishing a weekly newsletter and a monthly report, we developed a website that helped pharmaceutical companies find physicians to conduct their clinical trials. Pharmaceutical companies also paid us to list their trials that were recruiting patients.
And then I began writing stories that bit the hand that fed us.
As I learned more about the clinical trials industry, I came to understand that it could best be described as a commercial enterprise, as opposed to a scientific one designed to actually assess the merits of new drugs. Trials were often biased by design; there was selective publication of data that helped promote the drug’s commercial success; and academic “thought leaders” lent their names to this marketing enterprise.
We sold the company in 1998, and that was when I went to the Boston Globe and proposed doing a series on the abuse of psychiatric patients in research settings. Yet, and this is important, at that time I still believed in the larger story of progress that psychiatry had been telling to the public. Researchers, I believed, had discovered that major illnesses like schizophrenia and depression were due to chemical imbalances in the brain, which the medications then put back into balance, like “insulin for diabetes.”
The series reflected both of these perspectives. One part of the series focused on the testing of atypical antipsychotics, and how, among other things, a number of the patients who volunteered in the trials had died, and yet those deaths had not been mentioned in the articles published in medical journals. We wrote about the money being paid to the academic psychiatrists to conduct the trials, and told of specific examples of how that financial influence had led several astray, so much so that they ended up either in prison or censured by a medical review board.
Another part focused on studies in which antipsychotics had been abruptly withdrawn from schizophrenia patients, with researchers then tallying up how frequently they relapsed. We said this was unethical, since the drugs were understood to be “like insulin for diabetes.” Who would ever conduct a study that withdrew insulin from a diabetic, and then counted how frequently their symptoms returned?
As I have written before, that would have been the end of my reporting on psychiatry, except for the fact that, just as the series was being published, I came upon two research findings that belied that story of progress. And that made me wonder whether there was a larger story to be told.
On to Mad in America
The research findings were these. First, the World Health Organization had twice found that schizophrenia outcomes were much better in three “developing” countries than in the United States and other developed countries. Second, Harvard Medical School researchers had reported in 1994 that schizophrenia outcomes today were no better than they had been a century earlier. It was then that I asked myself a new question, one that could be said to have arisen from my schooling in “evidence-based medicine” while I was director of publications at Harvard Medical School.
Was it possible that psychiatry, as an institution, was deluded about the merits of its therapies? The conventional history of psychiatry tells of how the introduction of Thorazine into asylum medicine kicked off a psychopharmacological revolution, a great leap forward in care. But if one dug into the “evidence,” both historical and scientific, did it support that conclusion?
In Mad in America, I reported on a trail of history and science that contradicted the conventional wisdom. The book might be best described as a counter-narrative. It told of a medical profession that, for various reasons, had become committed in the 1960s to telling a story of how helpful the new antipsychotics were, and how that commitment grew stronger when DSM-III was published in 1980 and psychiatry adopted its “medical model” for diagnosing and treating mental disorders. After that, the American Psychiatric Association’s public pronouncements about the biology of mental disorders and the efficacy of psychiatric drugs turned into a full-bodied PR campaign, and academic psychiatry was—in the language of institutional corruption—”captured” by the pharmaceutical industry. Academic psychiatrists, starting in the 1980s, began working for pharmaceutical companies as speakers, advisors, and consultants, and once this “economy of influence” developed, these “thought leaders” told a story to the public that, again and again, pleased their financial benefactors.
There were many parts to that “counter-narrative,” but here are three such examples.
- I wrote that the simple dopamine hyperactivity theory of schizophrenia really hadn’t panned out, and that in the early 1990s, a leading psychiatrist in the United States had concluded that the hypothesis “was no longer credible.” This was at a time that the American Psychiatric Association was regularly informing the public that “we now know” that major mental illnesses like schizophrenia and depression are caused by chemical imbalances in the brain.
- I wrote that the drug-withdrawal studies cited by psychiatry as proving that antipsychotics provided a long-term benefit were flawed, as they compared drug-maintained patients to drug-withdrawn patients (rather than to a true placebo group), and it was well known that once schizophrenia patients had been on antipsychotics, they were at great risk of relapse if they abruptly stopped taking the medication.
- Based on documents that I obtained through a Freedom of Information request, I wrote that FDA reviewers of the risperidone and olanzapine trials had concluded that they were biased by design against haloperidol, and that the trials did not provide evidence that these new antipsychotics were safer and more effective than the old drugs. This was at a time that the atypicals were being touted by academic psychiatrists, in their pronouncements to the press, as “breakthrough medications.”
Now, in a sense, I was climbing out on a journalistic limb here. The history in Mad in America told of a profession that was deluded about the merits of its own therapies, and had practiced—as the book’s subtitle said—“bad science,” which in turn led to the “enduring mistreatment of the mentally ill.” This did not endear me to a number of people in the psychiatric establishment, but in the years after Mad in America was published, what did we subsequently learn?
- The chemical imbalance theory did, in fact, fail to pan out. As Pies memorably wrote in a 2011 blog post, “the chemical imbalance theory was always a kind of urban legend, never a theory seriously propounded by well-informed psychiatrists.”
- In 2002, shortly after Mad in America was published, psychiatrist Emmanuel Stip wrote that when it came to the question of whether antipsychotics were “effective” over the long-term, there was no “compelling evidence” on the matter. The relapse studies did not provide such evidence.
- In trials conducted by the NIMH and other governmental agencies, the atypical antipsychotics were not found to be superior to the first-generation antipsychotics, which led The Lancet to pen this memorable line in an editorial: “How is it that for nearly two decades we have, as some put it, been ‘beguiled’ into thinking they were superior?” That editorial appeared in 2009, seven years after Mad in America was published.
In short, I had followed a journalistic path—asking questions and searching through documents—to tell a history in Mad in America that countered what psychiatry, as an institution, had been telling the public about chemical imbalances and telling itself about the “evidence base” for its medications. And here’s the point: If academic psychiatrists had been telling the public that the biological causes of major mental disorders remained unknown, and that the relapse studies did not provide good evidence that antipsychotics provide a long-term benefit, and that the atypical trials were biased by design against the old drugs, then I would not have had much of a book to write. It was psychiatry’s own failures, in its review of its science and its communications to the public, that invited a journalist to challenge its “ownership” of this story.
On to Anatomy of an Epidemic
In Mad in America, I had explored the thought that antipsychotics worsened long-term outcomes. This is obviously a question that any medical specialty ought to ask itself—how do its therapies affect patients over longer periods of time—and yet it was not a question that psychiatry, as an institution, had answered. It had relapse studies that it pointed to as evidence that antipsychotics needed to be taken on a continual basis, but I couldn’t find any instance where psychiatry had compiled an “evidence base” that these drugs—or any other class of psychotropics—provided a long-term benefit.
That was the hole in psychiatry’s evidence base that I sought to fill when I wrote Anatomy of an Epidemic. And in this book, I simply sought to put together a narrative of science, from psychiatry’s own research, that could best provide an answer to that question for the major classes of psychiatric medications. I was indeed stepping outside the usual journalist’s role in taking up this task, but I sought to do it only because of psychiatry’s failure to address this issue in any substantive manner.
As I had already learned from writing Mad in America, putting together such a narrative requires that you push past the abstracts and discussion sections in published articles to focus on the data. Anyone who has spent time reading psychiatry’s scientific literature will discover that, if the data is unfavorable to the drug, you often can’t rely on the abstract for an accurate summation of findings, and that the discussions can lead you astray. The abstracts may be spun to downplay the poor results, and the discussions may gloss over the poor results, or seek to explain them away. Often, the data itself—if the results are poor for medicated patients—may be presented in a confusing fashion. You have to learn how to deconstruct the study, based on the data that is presented, to best understand the results.
This type of spinning and obfuscation can be seen in many NIMH-funded studies. For instance, in the STAR*D study, which was touted as the largest antidepressant trial ever conducted, the investigators promoted the notion that two-thirds of the 4041 patients who entered the trial eventually remitted. If a patient’s first antidepressant didn’t work, try another, and eventually an antidepressant would be found that did work. However, nothing like that actually happened in the study. Only about 38% of the patients ever remitted (according to the criteria set forth in the protocol), and the graphic that presented the one-year outcomes defied comprehension. It took years before an enterprising psychologist, Ed Pigott, figured out that graphic, which told of how only 108 of the 4041 patients had remitted, stayed well, and remained in the trial to its one-year end. Thus, the documented stay-well rate was 3%, a far cry from the 67% remission rate promoted to the public.
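To make the arithmetic explicit, using only the figures reported above: 108 patients who remitted, stayed well, and remained in the trial, divided by the 4041 who entered it, gives 108/4041 ≈ 2.7%, which rounds to the 3% documented stay-well rate. Set against the 67% remission figure promoted to the public, that is more than a twenty-fold difference.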
Similarly, in the TADS study, the investigators made it appear that just as many suicide attempts had occurred in the non-drug groups as in those exposed to Prozac, which was the conclusion suggested in the abstract. In fact, 17 of the 18 youth who attempted suicide had been on the medication. In the MTA study of ADHD treatments, you had to read very closely to see that medication use was a “marker of deterioration” at the end of three years, and that at the end of six years, the medicated children had worse outcomes.
In sum, as I said at the beginning of this post, I agree that writing a paper like “The Case Against Antipsychotics” is outside the usual domain of a journalist. Pies and Pierre are right about that part. But the only reason I came to write in this way about psychiatry is that psychiatry, as an institution, so obviously failed to fulfill its scientific duty to the public. This institutional failure was an important part of the journalistic story I sought to tell in Mad in America, Anatomy of an Epidemic, and, most recently, Psychiatry Under the Influence, a book I co-authored with Lisa Cosgrove. It’s not that I set out to trespass on psychiatry’s turf and write reviews of the “evidence base” for its drugs. Rather, from a journalistic perspective, I am reporting on how the scientific literature contains a story, regarding the long-term efficacy of psychiatry’s drugs, that the profession is not willing to tell to the American public (or to itself).
Deconstructing the Quality of Life Studies
With this context in mind, we can now turn to the claim by Pies and Pierre that there are two well-designed, placebo-controlled trials that showed that antipsychotics improve the quality of life of schizophrenia patients. This claim can provide a test case of what I have been writing about here: Do we see, in their reporting on these studies, the critical thinking our society would like to see in those who tell of the “evidence base” for psychiatric drugs? Or do we see yet another example of how leaders in American psychiatry, in their communications to the public and to their peers, draw a conclusion that will support what they want to believe and support their clinical practices, even when such a conclusion is not supported by the data?
In short, we will want to see whether their interpretation of the study reveals a devotion to using clinical research to improve patient care, or using it to boost a belief in their profession.
S. Hamilton, et al. (1998). Olanzapine versus placebo and haloperidol: quality of life and efficacy results of the North American double-blind trial. Neuropsychopharmacology 18: 41-49.
Authors: This is a study that was conducted by Eli Lilly as part of the trials it conducted to get Zyprexa approved by the FDA. The study was authored by employees of Lilly Research Laboratories.
Methods: Investigators at 23 clinical centers recruited patients with a diagnosis of schizophrenia, ages 18 to 65, into the study. The patients were mostly a chronic group, with a mean age of around 36. After being enrolled, they were hospitalized and abruptly discontinued from their antipsychotic medications. They were kept off such medication for four to seven days, and any “placebo responders”—those who got better during this period—were washed out of the trial. Those who were suffering from an “acute exacerbation” of symptoms at the end of that washout phase were randomized into one of five treatment groups: placebo, three olanzapine groups at different dosages, and haloperidol at a daily dose of 15 mg. All patients at randomization were given a Quality of Life Scale (QLS) score, which became their “baseline” measurement for assessing later changes in their quality of life.
All volunteers were hospitalized for the first two weeks, and then were discharged during weeks two to six if they “responded” to treatment (a 40% decrease in psychotic symptoms). At the end of six weeks, “responders” were entered into an extension study designed to last another 46 weeks, with their quality of life assessed at weeks 12, 24, 36, and 52. The researchers hypothesized that those treated with olanzapine would show greater improvement in their QLS scores than the placebo patients, with this benefit persisting throughout the year-long study.
Results: There were 335 volunteers randomized into the five treatment groups (67 each). Only a minority of patients (28%) responded to treatment in the first six weeks and continued into the extension part of the trial. Of the 95 patients who entered the extension trial, only 76 survived another six weeks and thus had one post-baseline QLS score (at week 12 from the start of the study). Only 3 patients in the placebo group, 4 in the haloperidol group, and 33 in the three olanzapine groups stayed in the trial to week 24 and thus had a second QLS assessment. There were no further detailed reports of QLS scores because so few patients stayed in the trial past the 24-week mark.
To calculate QLS scores, the Eli Lilly investigators used a Last Observation Carried Forward (LOCF) score for those who survived until week 12 but then dropped out before week 24, and then added this LOCF data to the 24-week scores for the 40 patients who stayed in the trial to that point. (To give a hypothetical example of how LOCF works: if a patient had a QLS score of 60 at week 12 and then dropped out, that score of 60 would be “carried forward” and counted as the patient’s week-24 result. A “24-week” average computed this way may thus rest substantially on week-12 scores.) The Lilly investigators reported these findings:
- The olanzapine patients who responded to the drug during the first six weeks had much better QLS scores at 24 weeks than they did at baseline.
- The olanzapine responders showed significantly more improvement in QLS scores at the end of 24 weeks than the placebo responders.
- There was no significant difference in improvement in QLS scores for the olanzapine and haloperidol responders.
Conclusion in published paper: “Improvement in quality of life was observed in olanzapine-treated responders.”
My interpretation
a) The study is unethical.
As psychiatry regularly informs the public, schizophrenia patients who abruptly stop taking their medication are at great risk of suffering a severe relapse, which puts the person at high risk of suicide. There is also worry that after such a relapse, a person may not regain the same level of stability following the resumption of medication. In this study, the abrupt discontinuation was expected to lead to an “acute exacerbation of symptoms.” Then one-fifth of those patients would be left untreated and go through weeks of withdrawal symptoms.
Thus, all 335 patients randomized into the study were exposed to harm (abrupt discontinuation of their drugs), and 67—the placebo group—were exposed to extended harm.
All told, 240 of the 335 patients (72%) failed to respond to “treatment,” and thus could be counted as harmed in the study. In addition, 55 of the responders then dropped out before week 24, and given that they were in clinical care at the start of the study, this drop-out result would seemingly tell of harm done. Thus, 295 of the 335 patients who entered the study (88%) either failed to respond to treatment or dropped out of care by week 24.
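The tally, using only the figures reported above, runs like this: 335 patients were randomized; 95 responded, leaving 335 - 95 = 240 non-responders (72%); of the 95 responders, only 40 were still in the trial at week 24, meaning 95 - 40 = 55 had dropped out; and 240 + 55 = 295 of 335, or 88%.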
The investigators did not report on adverse events or suicides, and they provided no information about what happened to non-responders following week six. However, when I reported on the atypical trials for the Boston Globe, I discovered that 12 of the 2500 volunteers in the olanzapine trials had died by suicide. If that rate held true in this study, one or two of the 335 volunteers would have killed themselves, their deaths properly attributed to a study design that put all of the volunteers in harm’s way.
b) The study is biased by company authorship.
This was a study of olanzapine conducted by Eli Lilly: the company designed the study, analyzed the results, and reported them. It was a study designed to produce a marketing blurb, namely that its drug provided a Q of L benefit over haloperidol and placebo.
c) The study is a failed study.
The endpoint for this study was quality of life at the end of 52 weeks. However, there were so few volunteers who made it past week 24 that no results were reported after that date. Thus, the study failed to provide evidence that olanzapine provided a Q of L benefit that persisted, as had been hypothesized.
d) The study is biased by design against “placebo.”
The patients randomized to “placebo” were going through abrupt withdrawal from antipsychotics, and were then left untreated for withdrawal symptoms. Given what is known about the hazards of abrupt withdrawal, this “placebo” group could be expected to fare poorly.
This design lies at the heart of psychiatry’s self-deception, and the gaping hole in its evidence base. Pies and Pierre write in their paper of focusing on “placebo-controlled studies,” as such studies are seen as the gold standard in clinical research. But psychiatry has very few true “placebo-controlled” studies in its research literature. What it has is an abundance of studies in which patients abruptly withdrawn from their medications are dubbed a placebo group: a drug-withdrawal group masquerading as a placebo group. This is psychiatry’s dirty little secret, and like all dirty little secrets, it is conveniently kept hidden from the public, and as this hiding goes on and on, psychiatry convinces itself it really has “placebo-controlled studies” that it can cite.
e) There is no evidence that olanzapine improved the quality of life for patients diagnosed with schizophrenia.
The baseline QLS score in this study was taken when patients, having been abruptly withdrawn from their medications, were randomized into the treatment groups. Thus, the baseline score told of Quality of Life when the patients were suffering from a withdrawal-induced exacerbation of symptoms. This set an artificially poor baseline score.
Furthermore, to assess whether a drug improves quality of life for a group of patients, it would be necessary to collect QLS scores for all of the patients randomized into the study, and do so at all of the scheduled assessments (weeks 12, 24, 36 and 52). In this study, we know that 69% of the patients treated with olanzapine failed to “respond,” and thus this group could be expected to have a poor Q of L score. Another 15% of the olanzapine patients dropped out before week 24, and thus we might imagine that their quality of life was not so terrific at that point. In order to draw any conclusion about the effect of antipsychotics on quality of life in this study, even in relation to the artificially poor baseline score, you would need to report the scores for all patients.
In short, the results reported in this paper are for a small, select group of good responders to olanzapine (16% of the initial patients), and not for all of the patients treated with olanzapine.
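The 16% figure follows from the numbers reported above: 201 patients (three groups of 67) were randomized to olanzapine, and only 33 of them remained in the trial at week 24 with a second QLS assessment; 33/201 ≈ 16%. The quality-of-life result for olanzapine thus describes roughly one in six of the patients assigned to the drug.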
f) The bottom-line results
From a scientific perspective, the design set up this comparison: in patients who had been ill on average about 9 to 10 years, the study compared outcomes for those who were abruptly withdrawn from their medications and then put back on an antipsychotic to those who were withdrawn and left untreated for that withdrawal. The study found that there was a higher response rate for those placed back on olanzapine, and also found that in a select group of olanzapine responders, their Q of L scores had improved notably in comparison to when they were going through abrupt withdrawal. At the same time, the study found that chronic patients withdrawn abruptly from their medications and left untreated fared poorly, as could be expected.
H. Nasrallah, et al. (2004). Health-related quality of life in patients with schizophrenia during treatment with long-acting, injectable risperidone. J Clin Psychiatry 65:531-36.
This study suffers from all of the same defects as the olanzapine study, plus one. The study was funded by Janssen, the manufacturer of risperidone. Most of the authors of the study were Janssen employees, and the lead author was psychiatrist Henry Nasrallah, who disclosed that he had financial ties—including serving on speakers’ bureaus—to Janssen and a number of other pharmaceutical companies. A 2014 “Dollars for Docs” report by ProPublica showed that Nasrallah had been paid by 14 pharmaceutical companies for some service or another from August 2013 to December 2014.
The volunteers recruited for the study—a slightly older, more chronic group than the patients in the olanzapine study—were abruptly withdrawn from their medications. It appears that “placebo responders” were then washed out of the study, although this is not clearly stated. The 369 patients were randomized to placebo or to one of three groups given different dosages of injectable risperidone (along with an oral dose of risperidone for the first three weeks).
As was the case with the olanzapine study, the placebo group was composed of chronic patients exposed to the hazards of abrupt drug withdrawal, and left untreated for those symptoms.
The study authors do not provide any details about the fate of all 369 patients. There is no information on study dropouts. The authors simply report that at the end of 12 weeks, quality of life had deteriorated in the placebo group and had improved remarkably in the three risperidone groups. Patients given a 25 mg. dose of injectable risperidone were reported to be enjoying a quality of life at the end of 12 weeks similar to that of the general U.S. population, with their mental health now just as good as the average Joe’s. The chronic patients, it seemed, had been restored in 12 weeks to near physical and mental “normalcy,” a result so remarkable it reminds one of stories from the Bible.
Pies and Pierre Report the Results
In their article, Pies and Pierre assert that only caring clinicians, such as themselves, are capable of evaluating the evidence base for the drugs they prescribe. Journalists who dare to do so should be seen as guilty of journalistic malpractice, and given that I have now offered my opinion on the merits of these two studies, I imagine they would want me reprimanded anew.
For their part, Pies and Pierre informed readers that in “both studies, patients treated with the antipsychotic showed significantly greater improvement in Q of L than those treated with placebo.” Moreover, they wrote, “Nasrallah et al found that long-acting risperidone (25 mg.) improved Q of L to levels not significantly different from normal.”
And thus the relevant question for society: Does their review of the evidence in this case instill confidence that they can be trusted as the keepers of the evidence base for psychiatry? Or do we see the very type of assessment that made me skeptical about this medical specialty in the first place?