Dr. Jonathan Shedler recently published a paper critiquing how the term “evidence-based” is used in the field of psychotherapy. He argues that “evidence-based” has come to refer to a select group of manualized therapies that are wrongly upheld as superior.
“The term evidence-based has come to mean something very different for psychotherapy,” Shedler writes. “It has been appropriated to promote a specific ideology and agenda. It is now used as a code word for manualized therapy—most often brief, one-size-fits-all forms of cognitive behavior therapy (CBT).”
The term “evidence-based” was originally meant to encourage practice grounded in critical thinking, multiple forms of information, and scientific research. It was popularized in the 1990s within medical fields. However, Shedler contends that “evidence-based therapy” has taken on a new meaning within psychotherapy.
Ultimately, this shifting discourse has developed alongside what Shedler refers to as a “master narrative” in psychotherapy: that the field has evolved away from therapists practicing unproven, unscientific psychotherapy toward practicing evidence-based therapies that are proven and superior. Cognitive Behavioral Therapy (CBT) has been positioned as superior to insight-oriented therapies. The media participate in denigrating relationship-based or insight-oriented therapies in a way that forwards the master narrative. Practice that deviates from the manual is painted as a rejection of science.
“Note how the language leads to a form of McCarthyism,” Shedler writes. “Because proponents of brief, manualized therapies have appropriated the term ‘evidence-based,’ it has become nearly impossible to have an intelligent discussion about what constitutes good therapy.”
Past research has called CBT’s superiority into question (see MIA report), and Shedler begins his paper by outlining primary sources that contribute to a counternarrative: “that evidence-based therapies are weak treatments.”
“There is a yawning chasm between what we are told research shows and what research actually shows,” Shedler explains.
By revisiting the widely cited 1989 study from the National Institute of Mental Health (NIMH), he shows that the misconception of CBT as a superior treatment is not the result of deliberate data misrepresentation. Instead, it is the result of a translation error: the failure to distinguish research jargon from the colloquial language that informs policy and practice. Specifically, Shedler describes the confusion that arises from the word “significance” and its context-dependent meaning:
“Major misunderstandings arise when researchers ‘disseminate’ research findings to patients, policymakers, and practitioners. Researchers speak of ‘significant’ treatment benefits, referring to statistical significance. Most people understandably but mistakenly take this to mean that patients get well or at least meaningfully better.”
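The distinction Shedler draws can be illustrated with a short, hypothetical simulation (the numbers below are invented for illustration and do not come from the NIMH study): with a large enough sample, even a clinically trivial difference in depression scores clears the conventional bar for statistical significance.

```python
# Illustrative sketch only: a "statistically significant" result
# need not mean patients got meaningfully better.
import math
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = mean(a), mean(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

n = 20_000  # an unrealistically large trial, to make the point starkly
# Hypothetical depression scores on a 0-60 scale, sd of about 8 points.
control   = [random.gauss(30.0, 8.0) for _ in range(n)]
treatment = [random.gauss(28.8, 8.0) for _ in range(n)]  # only 1.2 points lower

t = welch_t(control, treatment)
print(f"t = {t:.1f}")  # far beyond the ~1.96 threshold: "significant"
print(f"difference = {mean(control) - mean(treatment):.1f} points on a 60-point scale")
```

The t statistic is enormous, so a researcher can truthfully report a “significant” benefit, yet the average patient improved by barely a point on a 60-point scale. Statistical significance measures confidence that a difference exists, not that the difference matters clinically.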
In revisiting this 1989 NIMH study, Shedler writes that he is “embarrassed” to admit he initially assumed the 1.2-point outcome difference between the CBT group and the placebo group was meaningful, only to find that the difference was not even statistically significant. How did the results of this study come to be represented so differently from the actual data? This is the question Shedler addresses in his paper.
“It was difficult to wrap my head around the notion that widespread claims that the study provided scientific support for CBT had no basis in the actual data. This seems to be a case where the master narrative trumped the facts.”
What about more recent cases? Shedler turns to a review of a recent randomized controlled trial (RCT) of CBT for depression. The findings are strikingly similar to those of the 1989 NIMH study: about 75% of patients did not get well. Despite the results of these two major studies, and everything in between, brief manualized treatments continue to be promoted as “evidence-based” rather than acknowledged as ineffective.
The support for brief manualized therapies as a treatment for panic is equally bleak, according to Shedler. Worse still are findings that consider the long-term impact, or lack thereof, of manualized therapies.
“Sadly, such information reaches few clinicians and fewer patients,” he adds. “I wonder what the public and policy makers would think if they knew these are the same treatments described publicly as ‘evidence-based,’ ‘scientifically proven,’ and ‘the gold standard.’”
Shedler goes on to address the problematic research practices behind the studies that support manualized therapies. First, most prospective patients are excluded from these studies because the research requires that participants meet criteria for only one diagnosis. The findings are therefore of little practical relevance to real-world presentations, Shedler notes.
Second, the comparator treatments for CBT are “shams,” writes Shedler. They are essentially “fake treatments that are intended to fail” in order to prop up CBT as efficacious. This is done in numerous ways. For example, researchers might recruit graduate students rather than established professionals to deliver the non-CBT treatments. In other studies, which featured psychodynamic therapy for PTSD as a comparator to CBT, therapists were instructed to avoid discussing trauma.
Unfortunately, these cases are not the exception. Shedler cites a review of the literature that sought to identify studies comparing an “evidence-based” therapy with an alternative, bona fide psychotherapy. After sifting through the extant literature, the reviewers found only 14 studies that met this standard, none of which demonstrated “evidence-based” therapies to be more effective.
Further, not only are these manualized therapies paradoxically considered evidence-based while bereft of actual evidence, but studies demonstrating the opposite are suppressed. “Data are being hidden,” writes Shedler, which is not a new concern given well-documented publication bias. He highlights one team of researchers’ finding that “the published benefits of CBT are exaggerated by 75% owing to publication bias.”
Shedler ends by reexamining how far the term “evidence-based” has veered from its original purpose. “‘Evidence-based’ does not actually mean supported by evidence, it means manualized, scripted, and not psychodynamic. What does not fit the master narrative does not count.”
He writes that the newfound meaning behind the “evidence-based” hype discounts patient values and perspectives as well as clinician judgment. When patients are not appropriately informed about the potential drawbacks and benefits of different forms of treatment, they cannot exercise informed choice. Further, clinicians who are encouraged to adhere to manuals rather than exercise clinical judgment are limited in how far they can respond to client needs.
“The narrative has become a justification for all-out attacks on traditional talk therapy—that is, therapy aimed at fostering self-examination and self-understanding in the context of an ongoing, meaningful therapy relationship.”
Shedler leaves readers with the following words of advice:
“You should not take my word for any of this—or anyone else’s word. I will leave you with 3 simple things to do to help sift truth from hyperbole. When somebody makes a claim for a treatment, any treatment, follow these 3 steps:
- Say, ‘Show me the study.’ Ask for a reference, a citation, a PDF. Have the study put in your hands. Sometimes it does not exist.
- If the study does exist, read it—especially the fine print.
- Draw your own conclusions. Ask yourself: Do the actual methods and findings of the study justify the claim I heard?”
For a limited time, the original paper by Jonathan Shedler is available online for free: https://authors.elsevier.com/a/1W-Os,4QFJ3htz
Shedler, J. (2018). Where Is the Evidence for “Evidence-Based” Therapy? Psychiatric Clinics of North America, 41(2), 319–329.