Prominent Researcher and Psychotherapist Questions “Evidence-Based Therapy”


Dr. Jonathan Shedler recently published a paper critiquing how the term “evidence-based” is being used in the field of psychotherapy. He argues that “evidence-based” has come to refer to select, manualized therapies that are wrongly upheld as superior.

“The term evidence-based has come to mean something very different for psychotherapy,” Shedler writes. “It has been appropriated to promote a specific ideology and agenda. It is now used as a code word for manualized therapy—most often brief, one-size-fits-all forms of cognitive behavior therapy (CBT).”


The term “evidence-based” was originally meant to encourage practice grounded in critical thinking, multiple forms of information, and scientific research. It was popularized in the 1990s within medical fields. However, Shedler contends that “evidence-based therapy” has taken on a new meaning within psychotherapy.

Ultimately, this shifting discourse has developed alongside what Shedler refers to as a “master narrative” in psychotherapy: that the field has evolved away from therapists practicing unproven, unscientific psychotherapy toward practicing evidence-based therapies that are proven and superior. Cognitive Behavioral Therapy (CBT) has been positioned as superior to insight-oriented therapies. The media participate in denigrating relationship-based or insight-oriented therapies in a way that forwards the master narrative. Practice that deviates from the manual is painted as a rejection of science.

“Note how the language leads to a form of McCarthyism,” Shedler writes.  “Because proponents of brief, manualized therapies have appropriated the term ‘evidence-based,’ it has become nearly impossible to have an intelligent discussion about what constitutes good therapy.”

Past research has called CBT’s superiority into question (see MIA report), and Shedler begins his paper by outlining primary sources that contribute to a counternarrative: “that evidence-based therapies are weak treatments.”

“There is a yawning chasm between what we are told research shows and what research actually shows,” Shedler explains.

By revisiting the widely cited 1989 study from the National Institute of Mental Health (NIMH), he conveys that the misconception of CBT as a superior treatment is not the result of data misrepresentation. Instead, it is the result of a translation error: the failure to distinguish research jargon from the colloquial language that informs policy and practice. Specifically, Shedler describes the confusion that results from the word “significance” and its context-dependent meaning:

“Major misunderstandings arise when researchers ‘disseminate’ research findings to patients, policymakers, and practitioners. Researchers speak of ‘significant’ treatment benefits, referring to statistical significance. Most people understandably but mistakenly take this to mean that patients get well or at least meaningfully better.”

In revisiting this 1989 NIMH study, Shedler writes that he is “embarrassed” to admit he initially assumed the 1.2-point outcome difference between the CBT group and placebo was meaningful, only to find that this difference was not even statistically significant. How did the results of this study come to be represented so differently from the actual data? This is the question Shedler addresses in the paper.

“It was difficult to wrap my head around the notion that widespread claims that the study provided scientific support for CBT had no basis in the actual data. This seems to be a case where the master narrative trumped the facts.”

What about recent cases? Shedler turns to a recent randomized controlled trial (RCT) of CBT for depression. The findings are strikingly similar to those of the 1989 NIMH study: about 75% of patients did not get well. Despite the results of these two major studies, and everything in between, brief manualized treatments continue to be promoted as “evidence-based” rather than acknowledged as ineffective.

The support for brief manualized therapies as a treatment for panic is equally bleak, according to Shedler. Worse still are findings that consider the long-term impact, or lack thereof, of manualized therapies.

“Sadly, such information reaches few clinicians and fewer patients,” he adds. “I wonder what the public and policy makers would think if they knew these are the same treatments described publicly as ‘evidence-based,’ ‘scientifically proven,’ and ‘the gold standard.’”

In this paper, Shedler continues by addressing the problematic research practices behind the studies that support manualized therapies. First, most prospective patients are excluded from these studies because the research requires that participants meet criteria for only one diagnosis. The findings are therefore of little practical relevance to real-world presentations, Shedler notes.

Second, the comparator treatments for CBT are “shams,” writes Shedler. They are essentially “fake treatments that are intended to fail,” designed to prop up CBT as efficacious. This is done in numerous ways. For example, researchers might recruit graduate students rather than established professionals to deliver the non-CBT treatments. In other studies featuring psychodynamic therapy for PTSD as a comparator to CBT, therapists were instructed to avoid discussing trauma.

Unfortunately, these cases are not the exception. Shedler cites a review of the literature that sought to identify studies comparing an “evidence-based” therapy with an alternative, bona fide psychotherapy. After sifting through the extant literature, the reviewers found only 14 studies that accomplished this, none of which demonstrated “evidence-based” therapies to be more effective.

Further, not only are these manualized therapies paradoxically considered evidence-based despite the absence of actual evidence, but studies demonstrating evidence to the contrary are suppressed. “Data are being hidden,” writes Shedler, which is not a new concern given well-documented publication bias. A team of researchers found that “the published benefits of CBT are exaggerated by 75% owing to publication bias,” Shedler highlights.

Shedler ends by reexamining how far the term “evidence-based” has veered from its original intended purpose. “‘Evidence-based’ does not actually mean supported by evidence, it means manualized, scripted, and not psychodynamic. What does not fit the master narrative does not count.”

He writes that the newfound meaning behind the “evidence-based” hype discounts patient values and perspectives as well as clinician judgment. When patients are not appropriately informed about the potential drawbacks and benefits to different forms of treatment, they cannot exercise informed choice. Further, clinicians encouraged to adhere to manuals rather than exercise clinical judgment are limited in the degree to which they can respond to client needs.

“The narrative has become a justification for all-out attacks on traditional talk therapy—that is, therapy aimed at fostering self-examination and self-understanding in the context of an ongoing, meaningful therapy relationship.”

Shedler leaves readers with the following words of advice:

“You should not take my word for any of this—or anyone else’s word. I will leave you with 3 simple things to do to help sift truth from hyperbole. When somebody makes a claim for a treatment, any treatment, follow these 3 steps:

  1. Say, ‘Show me the study.’ Ask for a reference, a citation, a PDF. Have the study put in your hands. Sometimes it does not exist.
  2. If the study does exist, read it—especially the fine print.
  3. Draw your own conclusions. Ask yourself: Do the actual methods and findings of the study justify the claim I heard?”



For a limited time, the original paper by Jonathan Shedler is available online for free.


Shedler, J. (2018). Where is the evidence for “evidence-based” therapy? Psychiatric Clinics of North America, 41(2), 319–329.







Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.




  1. Here we go again. I’m not sure why MIA deems it useful to provide a platform for an outspoken psychodynamic therapist who argues that our most scientifically supported approach to therapy is not as good as his preferred long-term psychodynamic therapy which has far less evidence to support it. Within the field, Shedler is a hero to psychodynamic therapists who resent having their unscientific approach usurped by more evidence-based approaches. He is also not taken seriously by science-based psychologists who favor CBT because he misrepresents the state of the science and refuses to engage with his critics and counter-arguments to his ideas, of which there are many.

    Interested readers can check out two of his blog posts at Psychology Today and in particular the comments sections, in which Shedler oddly fails to respond to any of his many critics who presented thoughtful rebuttals of his claim. Missing from the comments at the first link is a comment where I pointed out with some concern his failure to engage thoughtful critics – a comment he deleted.


    Report comment

    • Brett, I believe MIA comments on this not to promote any particular approach but to highlight that when psychology uses the term “evidence-based treatments” it is banging the drum of the medical model. This model is flawed on so many levels, which I am sure I don’t need to elaborate on, but worth mentioning is the stigmatising and discouraging message of the problem being “in” people – or, put differently, that they “have” some or other fictitious disorder as a result of some or other inner failing yet to be proven. This cannot be supported in any shape or form, and thus MIA should report on this

      On psychotherapy research, I find the claims that this or that approach is more helpful/effective than others astonishing. There are so many factors, circumstances and variables that can impact on outcome, and these studies do not and cannot account for that. Using the term “evidence-based” is thus more a selling point than something based in fact – a money spinner for those in private practice, while the simplicity of it all is very alluring for prospective clients

      Report comment

      • Gerard, I understand your perspective, but it is a scientific fact that some therapies are more effective than others for certain types of psychological problems. I happen to specialise in a type of therapy that even staunch proponents of the “all therapies are equally effective” camp acknowledge to be specifically effective (e.g., exposure therapy for anxiety). When people are seeking help for, say, a specific phobia (e.g., of needles, like a client I saw today), exposure therapy is more effective in improving the problem than other therapy approaches, without regard to the many aspects that make that person a unique individual. What I generally experience on this site is rejection of the idea that the kind of therapy a practitioner provides for any type of client issue is of any importance whatsoever, because it’s all about the relationship. Bullshit. I have a practice full of clients who have seen many therapists whom they describe as kind, well-meaning, and unhelpful because they weren’t equipped with a scientific understanding of their clients’ particular type of problem and an approach to intervention based on our best available science. My needle-phobic client today has seen numerous therapists who taught him to breathe deeply and relax when he gets an injection, which is precisely the opposite of what he should do (based on a scientific understanding of the problem), as this makes him more likely to pass out, which is precisely the problem. “Evidence-based” absolutely can be a hoax intended to make money, but it can also be a well-informed description of the kind of therapy that clients are desperate to find – and clearly prefer – but have great difficulty accessing, in part because of the belief among many therapists that whatever therapy they happen to be providing to their clients is as good as anything else because “technique” is irrelevant.

        Report comment

        • Brett, how can you know that your needle-phobic client was helped more by the technique you used than by your care and respect for and genuine interest in helping them, your confidence, their perception of your expertise, or the trust they placed in you? My point is that no “evidence-based research” can measure this, and ignoring these factors as contributors makes psychology appear “scientific”.

          Report comment

          • I would add that while I think techniques are handy and valuable to have around, and do not in any way diminish their potential value to a particular client, it is more than possible that the next client you have will not respond well at all to the approach that worked so well for this particular person. The idea that a therapy is “evidence based” appears to suggest that it is the better therapy for everyone with a particular problem. Since the problems that we’re talking about can’t be defined in any kind of objective way, it seems arrogant, at the minimum, to suggest that “science” has somehow come up with the “best way” to deal with problems that are heterogeneous in both origin and in meaning to the client.

            So I’m not dismissing therapeutic techniques here. I’m saying that suggesting one school of therapy is superior for all based on the fact that it has been “studied” and that more people with a particular complaint on the average seem to benefit is a very big leap. It may well be that exposure therapy is more likely to be helpful for a specific phobia, and we should all know that, but that doesn’t make exposure therapy better for everything, nor does it mean that a particular client will do better with exposure therapy for their particular phobia. It also doesn’t mean that some goofball who is thoughtless and insensitive and has lots of personal issues that make him/her emotionally unavailable and difficult to relate to can take out the “exposure therapy” manual and be trained up to do “exposure therapy” on anyone successfully.

            Evidence is important and should be considered, but making out that one school of therapy is the best and should be used on everyone, or that the characteristics and interpersonal skills of the therapist are irrelevant, is simply not true.

            — Steve

            Report comment

  2. Quick follow-up: quote from above: “[Shedler] writes that the newfound meaning behind the “evidence-based” hype discounts patient values and perspectives as well as clinician judgment. When patients are not appropriately informed about the potential drawbacks and benefits to different forms of treatment, they cannot exercise informed choice. Further, clinicians encouraged to adhere to manuals rather than exercise clinical judgment are limited in the degree to which they can respond to client needs.”

    Shedler obviously believes clients (or in his words, “patients”) do not value or prefer evidence-based therapies. He is wrong, according to research that has examined this issue. Turns out people want their therapy to be scientifically credible and shown to be effective in clinical studies, in addition to wanting a good relationship with their therapist. And therapists (like Shedler) tend to fail to appreciate this, and assume their clients share their own biases, which they appear not to share.


    Report comment

    • Hi Brett, there are some really good books analyzing the research, which is often poorly controlled, biased and cannot be reproduced – several good books are:
      The Therapy Industry: The Irresistible Rise of the Talking Cure, and Why It Doesn’t Work

      Psychology Gone Wrong: The Dark Sides of Science and Therapy

      Psychology Led Astray: Cargo Cult in Science and Therapy

      Power, Interest and Psychology: Elements of a Social Materialist Understanding of Distress

      Exposure therapy can be useful if the person can actually tolerate it, but a significant majority can’t.

      Report comment

      • Chris, thanks for sending these links. There is a lot of bad science in the field, baseless claims, and fraudulent marketing, and I have been an outspoken critic of pseudoscience in psychology for a long time. For example, I co-authored a book chapter, published in “Science and Pseudoscience in Clinical Psychology,” on therapies for trauma, in which I called out EMDR for its pseudoscientific nature. I’ve written about the pseudoscientific nature of “antidepressants” as well. I value science, appreciate good science, and am a fierce critic of pseudoscience anywhere it shows up.

        There is actually a lot of good science to support the safety, tolerability, and effectiveness of exposure therapy for anxiety. It’s probably the most clearly science-based approach of any kind, for any type of psychological problem. I’ve written about this quite a bit. Your claim that a “significant majority” can’t tolerate exposure is empirically false, according to a large body of science on this topic. Meta-analyses indicate that exposure-based therapies do not have higher dropout rates than non-exposure-based therapies for anxiety. Notably, dropout rates are very low to non-existent in the kind of highly intensive exposure-based therapies critics might expect to be especially intolerable. To illustrate, Hansen et al. reported no dropouts among 65 clients diagnosed with OCD who initiated their highly effective, four-day intensive exposure treatment. Similarly, Foa and colleagues obtained fewer dropouts (14%) in 2 weeks of massed exposure (10 daily sessions) for PTSD than in 8 weeks of exposure therapy (25%). I could go on and on; there is a large scientific literature here, and it unequivocally does not show that most clients cannot tolerate exposure.

        But what is certainly true is that many therapists believe, incorrectly, that most clients cannot tolerate exposure. I have studied this as well. Therapists who believe this have been shown in many studies to eschew exposure in favor of “gentler” approaches like relaxation, mindfulness, and so on that are less effective, but less distressing – to the therapist. Exposure is the most powerful therapy we have for anxiety, and it is as safe and tolerable as other therapies, but most therapists don’t provide it and most clients can’t access it, and this is a big problem. If anything, MIA worsens this problem by fostering cynicism about anything and everything on offer in the mental health system, even the approaches that work well and are not aligned with psychiatry’s biomedical model. This continues to be a thorn in my side as a participant here and I’m sure is the major reason why science-based (clinical) psychology has basically no presence here.

        Report comment

        • Thanks for those links Brett. In my experience working in the mental (ill) health field for many years and attempting to ‘treat’ people with phobias and all manner of trauma using CBT, EMDR and just being present with people, as well as talking with many colleagues, it is clear that these issues are hard to resolve and people most often drop out because they cannot tolerate it.
          Not to mention the chaos and complexity that is often currently present in our lives beyond the comforting confines of any therapy room. From all of my years of working with people and attempting to help, people tell me they most value having someone to share parts of their story with – someone truly and compassionately interested in them and their distress, and who doesn’t burden them with judgement.
          This could be done by a good friend or family member IF we were living in healthy cultures where people actually had some time, energy and resources to care properly for themselves, others and their (non-existent) communities.
          But they/we don’t, because most are trapped in jobs that harm and that most hate, mass struggles with debt, family breakdown, and so on – most are running ever faster to either stand still or actually go backwards in life, and insecurity and uncertainty are increasing everywhere as the current political ideology crushes more and more people.
          It is interesting to note that one of the features of so-called PTSD is that it is said to bring about a state of ‘pathological’ fear and uncertainty about the world – it could be argued that this is not pathological but actually quite accurate, and that it is WE, the people well-adjusted to a profoundly sick society, who are actually quite dissociated and distracted from the reality of massive and growing uncertainty and fear and the sheer volume of systemic threats around us.
          Perhaps the traumatised are seeing the world and its many, varied and often random threats with a new sharper clarity but this is intolerable to both them and us. Like the research that shows the mildly depressed (whatever that means) have a more accurate view of the world than those considered ‘normal’.
          I am sure for some people exposure, when tolerated, is helpful – but for how long, given the issues with our disordered cultures? Do people really have a discrete disorder called OCD, or are we seeing reactions to the world? Seeking to ‘treat’ this set of experiences might bring some temporary relief, but it leaves us all vulnerable to harm because it’s utterly missing the context and system we operate in.

          I agree that EMDR is pseudo scientific but there again many critical psychologists tell us the entire field of clinical psychology is pseudo scientific and is driven more by fashion, fad and self interest than any sort of science – just a look over the history of the field and that of psychiatry clearly shows the nonsense that has been upheld as the ‘gold standard’ treatment of the age.

          Take CBT, the marriage of two not-so-long-ago utterly opposed ideas, where behaviourists would have said the cognitive/psychological cannot be measured or seen. We know we are largely rubbish at introspection, we story-tell automatically and fabricate to fill in the gaps, and we are largely a mystery to ourselves and each other. Yet insight-based therapy is mostly what we have – the main insight for me is that we have little insight into ourselves and others.

          It seems quite clear to me that the mental (ill) health system looks almost exclusively at the individual as having a disorder – rather than seeing US as reacting quite understandably and meaningfully to a disordered world.
          So do we need to stop looking within at hypothesized personal pathology and look without, to make the world a place we can actually thrive in?

          We’ve had many decades of psychology, psychiatry and pharmacology and each year suffering increases massively. Something is very wrong with this picture.

          Maybe you’ve heard of this critic? His books are interesting and this interview is useful:

          #056 – Why Psychotherapy is Bullsh*t (Dr. William Epstein)
          In today’s episode Dr. William Epstein joins us to explain why he believes psychotherapy is not only ineffective and possibly even harmful, but why it is little more …

          Report comment

  3. While I am not well versed in the various arguments and data in favor of one form of therapy over another, I am in agreement with the author that calling “CBT” an “evidence-based therapy” privileges it in a way that is not deserved. In fact, my observations and my limited exposure to research on particular therapeutic schools suggest that the school of therapy is a very small part of what makes therapy effective. I regard CBT as ONE tool (or maybe one TYPE of tool) in a large toolbox, and psychodynamic approaches are another set. But bottom line, what seems to matter most is 1) the relative emotional health of the therapist, and 2) the ability of that therapist to support the client in his/her own discovery process by whatever means appear to be most effective in his/her case. This requires therapists to be genuine, real, honest, non-judgmental, safe, creative, thoughtful, and sensitive regarding verbal and nonverbal feedback they receive from the client. It also requires the therapist to be aware of his/her own unresolved issues and constant vigilance to keep these from directing his/her intervention and support in any way.

    None of this can be taught in a manual, and I doubt it is even measurable. Nor is a client’s progress really measurable in real terms. Most studies use “symptom reduction” as their outcome measure, but clients generally have a lot more than symptom reduction in mind when they come to see a therapist. How do you measure things like an increased sense of personal power? Hope for the future? Ability to set boundaries? Ability to connect with difficult emotions without recoiling or acting out to avoid them? These things are subtle and don’t improve on a 1-10 scale. They are things that are more FELT by the client than observed by the therapist directly, and even if they were measurable, “evidence based medicine” doesn’t have either the means or the interest in measuring them.

    Evidence Based Medicine is more appropriate to actual disease states where outcomes like lowered blood pressure or failure of a tumor to return can objectively be recorded. There is no objective measure of even who HAS a particular “mental disorder.” How can we measure improvement when we have no objective measure, or even any real objective CONCEPT, of what improvement looks like? Given the circumstances, the only possible worthwhile measure of success in therapy is the client’s opinion of whether or not it was worth his/her time and energy. And of course, no one is ever going to care enough to try and measure THAT!

    Report comment

    • “Since the problems that we’re talking about can’t be defined in any kind of objective way, it seems arrogant, at the minimum, to suggest that “science” has somehow come up with the “best way” to deal with problems that are heterogeneous in both origin and in meaning to the client.”

      I think this sums it up, Steve

      Report comment

      • Does the fact that a problem like the fear of needles can’t be defined according to a biological test mean that it does not exist? That it cannot be measured, even with psychometrically high-quality self-report measures, or behavioural measures? That there is no point in investigating which strategies best help people overcome the fear of needles? That such research, when it exists and produces clear conclusions, can be entirely ignored because the fear of needles is not a bona fide medical disease that can be shown to exist with objective measures? That everyone with a fear of needles is such a unique individual that they have nothing in common that could allow us to understand what contributes generally speaking to a fear of needles, and what helps people generally speaking to overcome it? My answer to these questions is no, and that about sums up my rejection of the idea you and Steve have noted here.

        Report comment

        • I don’t think anyone has denied that a fear of needles actually exists and that it can be very distressing. What is worth considering, though, is that a fear of needles may not be a fear of needles. For some people there may be a displacement of a fear of something more abstract but nevertheless still threatening onto something more concrete (such as needles, or dogs, or heights, etc.) in order to better manage (avoid and control) it. Unfortunately I can’t cite any scientific psychological research to support these wild claims that these peoples’ fears may be very unique to them and that they have absolutely nothing in common with people who are actually afraid of needles based on injections gone wrong in the past.

          Report comment

        • I think we’re speaking to two different issues here. Is it possible that a certain approach works better for a certain kind of problem? Yes. Is it scientific to suggest that you can train anyone to use a workbook to apply such approaches to anyone who comes to them and expect success? No. You’re talking about probabilities with an incredible number of variables. I do believe in probabilities, but the variable of who is talking to the person and how they treat the person is AT THE LEAST as important as the technique they choose to use. Moreover, suggesting that “CBT”, for instance, is a “better therapy” because a larger percentage of people with a certain kind of condition respond positively by some subjective measure is a gross oversimplification.

          Let’s get off of your one example of phobias and talk about something more general. Is CBT the “best” therapy for “major depression?” You know and I know that the name “major depression” can be assigned to a huge range of conditions that vary from childhood abuse to low thyroid function to a bad job situation to domestic abuse in a current relationship to existential concerns about the meaning of one’s life to feelings of hopelessness regarding a chronic medical condition that is drastically reducing one’s quality of life. Do you think that one brand of therapy is going to address every case of this “diagnosis?” Do you think that the style, emotional health, flexibility, creativity and life experience of the therapist would not be at least as important a factor?

          I have used CBT techniques plenty. I’ve also used Motivational Interviewing (though I kind of invented that myself before I realized I was using it), regressive techniques, exposure, “rejection therapy,” spiritual guidance, meditation, journaling, dreams, empowerment techniques, reflective listening, reframing, positive reinforcement, and a few inventions of my own that I won’t get into trying to explain. It all depends on who the person is and what needs they have. My experience is that 1) what works for one or even most people won’t work for everyone, and 2) the PREREQUISITE for ANY of these techniques working well is the establishment of sufficient rapport and trust with the client, which is not something that ANY manual can teach – it is learned through having good therapy oneself and/or through humility and the hard work of introspection over many years. Again, I’m not saying that techniques don’t have their place, or that a particular technique might not work well for a lot of people with similar “symptoms.” I’m saying that pulling out a workbook and going through the steps of CBT or exposure therapy or any kind of manualized therapy doesn’t work without these other elements, and I’m also saying that trying to suggest that one particular therapy is “evidence based” and therefore BETTER than other techniques creates unfortunate dynamics that don’t really connect with the intangible stuff that HAS to be present, nor does it allow for the observable fact that people presenting with the same “symptoms” don’t always have the same problem or the same needs.

          Not being argumentative here, just trying to be clear. I have NO problem with knowing a range of techniques AND knowing such data that informs when they may be more likely to be effective. What I have a problem with is deciding that “CBT” or whatever is the ONLY approach that can be applied and that any other approach is “less than” because it doesn’t have an “evidence base.” None of what I’ve said even gets into the sketchy research techniques used to gather such evidence, nor the effect of financial incentives to research or not research particular techniques or areas (impossible to be “evidence based” if no one is motivated to pay to research your particular approach). Bottom line, I think that knowing how to handle a wide range of techniques is important and helpful, but will never overshadow the essential elements of establishing and maintaining genuine rapport and flexibility with clients, which of course will never be a focus of any research.

          Report comment

          • Steve, you’ll get no argument from me that the relationship is critically important, and without it, it doesn’t much matter what “techniques” the therapist uses. But beyond the relationship, it’s also clear from our available science that for certain problems, some approaches (not “techniques,” but “approaches” that have a unified philosophy/theory/strategy, like exposure therapy for anxiety) are more effective than others. I don’t remotely see technique/type of therapy and relationship as antithetical, and debates over the merits of one vs. the other miss the point, to me. It’s great to have the right relationship factors in place, but those alone aren’t always enough, and the research is clear that for some problems – like the ones I help my clients address – the relationship alone (in the context of non-exposure-based therapy) isn’t optimal. That’s not the case for everyone, but the data are the data, and they show that people in general who seek help for anxiety problems tend to benefit more from exposure-based therapy. Yes, they are all unique individuals with their own complex histories and contexts, but still, isn’t it best to start as a default by using the approach science shows to work best, and modify as needed from there?

            I’ll end by quoting the great psychologist/philosopher Paul Meehl, whose words on this topic speak to me: “The vulgar error is the cliché that ‘We aren’t dealing with groups, we are dealing with this individual case.’ It is doubtful that one can profitably debate this cliché in a case conference, since anyone who puts it quite this way is not educable in ten minutes. He who wishes to reform the thinking in case conferences must constantly reiterate the elementary truth that if you depart in your clinical decision making from a well-established (or even moderately well-supported) empirical frequency—whether it is based upon psychometrics, life-history material, rating scales or whatever—your departure may save a particular case from being misclassified predictively or therapeutically; but that such departures are, prima facie, counterinductive, so that a decision policy of this kind is almost certain to have a cost that exceeds its benefits. The research evidence strongly suggests that a policy of making such departures, except very sparingly, will result in the misclassifying of other cases that would have been correctly classified had such nonactuarial departures been forbidden; it also suggests that more of this second kind of misclassification will occur than will be compensated for by the improvement in the first kind (Meehl, 1957—reprinted here as Chapter 4). That there are occasions when you should use your head instead of the formula is perfectly clear. But which occasions they are is most emphatically not clear. What is clear on the available empirical data is that these occasions are much rarer than most clinicians suppose.”

            Report comment

          • Well, it sounds like we’re not too far apart here. I think we’re just approaching the problem from different directions. My big concern about manualized therapy approaches is that they convey the idea that if you follow certain steps, you’ll get results, regardless of who you are or what the client’s full presentation is. The corollary to that quickly becomes: clients who DON’T respond to the ‘recommended technique’ are “resistant clients” or have “treatment-resistant depression (or anxiety or whatever)” and are classified as somehow “difficult” clients because they don’t cooperate with the therapist’s biases. In addition to the problems with warped data collection and some approaches failing to ever BE researched (as I discussed above, and as I THINK you agree), calling some therapies “evidence based” has been used to dismiss anything OTHER than the “evidence based” approach, so that instead of saying, “Let’s start here, as this is what is most likely to work,” the field quickly devolves into “This is the only way to do it, and anyone who denies this or tries anything else is ‘antiscientific.’” There is simply NOT enough scientific research available to make such claims, especially (a point you have not really addressed) as the groups being so “treated” are by definition highly heterogeneous in nature.

            As to your quote at the end, the discussion of “classification” and “misclassification” really does suggest a power relationship of the therapist to the client with which I strongly disagree and have found to be detrimental to any kind of help. No one needs or wants to be “classified.” They want help finding a way to survive and thrive better in their lives. It is exactly this kind of “classification” that concepts like “evidence based practices” enshrine. And of course, it is not by chance that both classification AND “evidence based practice” are most strongly supportive of the sketchiest intervention of all – giving drugs for every ailment. I sometimes wonder if that was the original purpose of the concept.

            Report comment

      • Gerard, you wrote, “Too many variables when it comes to studying people and then generalising those results to everyone, Brett. If one does that, then it smacks of arrogance (as Steve said) and looks and sounds like psychiatry.” Can I assume, from what you’re saying here, that you reject the entire science of psychology, based as it is on the study of groups of people? Or that you reject the notion that what we learn from the scientific study of groups of people is at all relevant to the experience of individual people (of which groups consist)? Or both? If it is arrogant to conclude otherwise, what adjective would you use to describe the rejections I noted above?

        Report comment

    • Science has been amazing in helping us to develop knowledge of the physical world. I realized, when I was taking postgrad courses that it was also dehumanizing, so I switched to medical school at McGill and was delighted to find our lectures in psychiatry to be extremely human.
      This was in 1952, and the lecturer was Dr. Heinz Lehmann, an empathic European gentleman, who the following year discovered that chlorpromazine could help his “incurable” schizophrenic patients.
      Unfortunately this precipitated the drug revolution that assumes that our psychological and spiritual problems are physical in origin, and not human relation problems.
      False science is causing tremendous damage to people, who get treated like objects, instead of like the human beings they are.

      Report comment

  4. “Evidence-based” is one of those terms like “health food” — as a friend asked long ago in consternation, “if this is a health food store what kind of food are they selling in all the other stores?”

    If the selling point of a particular medical treatment or approach to something is that it’s based on evidence, does that mean that all the others are just pulled out of someone’s ass?

    A corollary to this is that “evidence” is not proof. Ask a judge.

    Report comment

    • Also keep in mind that psychiatry is not medicine, it is social control, and “therapy” is not science (at best it is philosophy or ideology); any medical trappings are simply a way of disguising this, so in that light this “debate” is pretty much just falling into the same diversionary trap.

      Report comment

      • oldhead, as I see it, science is a method of inquiry, not the product of this method. Some questions are beyond the scope of science, such as those that pertain to philosophy or spirituality whose validity cannot be revealed through scientific study. But if we can adequately measure a variable and study it using the scientific method, it is within the scope of science. We can adequately measure psychological phenomena like behaviours, beliefs, subjective experiences, quality of life, etc. And we can examine the impact of various manipulations/interventions of these phenomena using the scientific method. “Therapy” is one such intervention, and we have a massive amount of well-conducted scientific studies that speak to its effects on different types of problems of thinking/feeling/behaving, as measured with psychometrically sound instruments. This is science, in my book. Now, this science is done by people, which means it is often misinterpreted, suppressed, inappropriately manipulated, etc., but that is another topic.

        Report comment

  5. “We can adequately measure psychological phenomena like behaviours, beliefs, subjective experiences, quality of life, etc.”

    I can’t believe that you believe this. You cannot objectively measure or quantify subjective experiences, that’s part of what makes them subjective.

    Report comment

      • Dragonslayer, I just said what I consider science to be. Psychology is the realm of thoughts/feelings/behaviours, so a psychological phenomenon would be something in that realm. oldhead, I didn’t say we can *objectively* measure subjective phenomena – you accused me of saying this and critiqued me for it. I said we can adequately measure (many) psychological phenomena like behaviours, beliefs, and subjective experiences with the use of instruments like questionnaires, rating scales, etc. If you are claiming that we cannot adequately measure any psychological phenomena like behaviours and beliefs, then well, I can’t believe you believe that. The snark level in this comment thread is unnecessary.

        Report comment

        • I agree with oldhead that it is not possible to objectively measure emotions, behaviors, and beliefs with behavioral rating scales. First off, we are relying on either self-report, which depends on the reporter both being honest and sufficiently self-aware to answer accurately, or on observer report, which opens us to prejudice and value judgments that are almost impossible to sort out. Additionally, what we are measuring doesn’t really have width or weight or pressure – things like “do you sleep well at night” or “do you frequently have trouble concentrating?” don’t have yes/no or scalable answers. Normalization allows for some kind of statistical studies, which makes it possible to look at large groups and draw some very general conclusions about probabilities, but as for measuring individuals’ emotions or thoughts or anything of that sort, we’re getting into a realm so subjective that the term “measurement” can’t really be applied. And applying probabilities to individuals is part of what doesn’t work about “mental health” interventions.

          Report comment

          • Does quantum physics constitute science?

            “First off, we are relying on either self-report, which depends on the reporter both being honest and sufficiently self-aware to answer accurately, or on observer report, which opens us to prejudice and value judgments that are almost impossible to sort out.”

            According to quantum theories backed up by “evidence” the subject being observed literally is changed by the fact of being observed, as there is a relationship between the observing force and that being observed. Not talking about mind here but matter. Additionally, objects can physically exist in many locations at once — until an observer focuses on one of those locations, at which point that location becomes the only one. So it appears that at a deeper level of focus even objectivity is subjective. Don’t want to ruin anyone’s day though.

            Report comment

        • Dr. Deacon, thank you for your reply. Yes, you wrote that “science is a method of inquiry, not the product of this method.” What is this “method of inquiry” of which you write? How did you come upon it?

          “Psychology is the realm of thoughts/feelings/behaviours, so a psychological phenomenon would be something in that realm.”

          What evidence do you have for the assertion that psychology is the realm of thoughts/feelings/behaviours? Of course it’s a tautology to state that psychology is this realm, and that therefore a psychological phenomenon exists in this realm. What is a psychological phenomenon? It’s still not very clear. Thank you.

          Report comment

        • “I didn’t say we can *objectively* measure subjective phenomena…I said we can adequately measure (many) psychological phenomena like behaviours, beliefs, and subjective experiences with the use of instruments like questionnaires, rating scales, etc.”

          If they can’t be measured objectively how can they be measured “adequately”? If that’s not too snarky.

          Report comment

  6. Steve, did you read my comment that I didn’t say it was possible to *objectively* measure emotions, behaviours, and beliefs? Nobody here is claiming this. But we can often measure them in a scientifically adequate manner. I’ve published many studies on self-report measures of psychological constructs. Here is an example: it turns out we can strongly predict the therapy practitioners provide to their clients by measuring their negative beliefs about exposure therapy. From this research, we can give an individual practitioner our measure of negative beliefs about exposure therapy and, knowing his/her score, have a probability estimate of how he/she might work with anxious clients. And we can be pretty confident in our probability estimate because of how strongly scores on our measure predict therapist behaviour. This is solid science and I’m proud of it.
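The group-to-individual move described here – taking a practitioner’s score on a scale and turning it into a probability estimate about their behaviour – amounts to something like a logistic model fit to group data. The sketch below is purely illustrative: the function name and coefficients are invented for the example, not taken from any published study.

```python
import math

def predict_exposure_use(negative_beliefs_score, intercept=3.0, slope=-0.08):
    """Hypothetical logistic model: higher negative-beliefs scores map to
    a lower estimated probability of delivering exposure therapy.
    Coefficients are invented for illustration only."""
    logit = intercept + slope * negative_beliefs_score
    return 1.0 / (1.0 + math.exp(-logit))

# Under these made-up coefficients, a practitioner with mild negative
# beliefs (score 10) gets a high estimated probability (~0.90), while one
# with strong negative beliefs (score 60) gets a low one (~0.14).
low_beliefs = predict_exposure_use(10)
high_beliefs = predict_exposure_use(60)
```

The point of the sketch is only to show what a probabilistic, group-derived estimate applied to an individual looks like: the model outputs a probability, not a classification of the person.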

    I’ve been reflecting on my presence at MIA, more specifically why I am an active commenter here. I totally support MIA’s criticism of the biomedical paradigm, but beyond that, what I encounter at MIA is often an affront to what I stand for as a psychologist with strong scientific values. This perception is regularly reinforced in my discussions here and often enough in what MIA staff post, like the article that started this discussion. Mental health professionals who share my scientific values are all but absent here; some used to be around but have long since left. I’ve mentioned this before – MIA has had basically zero apparent reach/impact in the world of science-based clinical psychology. And that is too bad, because I believe there could be a natural alliance to be built. For example, a massive global trend in psychotherapy is toward acceptance and mindfulness-based approaches, like Acceptance and Commitment Therapy (ACT), that explicitly reject the biomedical paradigm. Find a way to connect MIA with ACBS, the Association for Contextual Behavioral Science, and the game would change overnight. But a cultural shift would have to occur at MIA for that to happen. At present, most ACBS members wouldn’t last long here, not because they have thin skin, but because MIA doesn’t reflect their values.

    For my part, I don’t want to have to continually defend why scientific knowledge is important as a mental health professional, why I advocate therapies that have a great deal of scientific support over those that do not, why it makes sense to apply the results of group-level research in a probabilistic manner to individuals, how psychological phenomena like beliefs and behaviour can be adequately measured, etc. I’m tired of it. And I look around and notice there is nobody else here like me and I wonder what in the hell I am doing here. And I don’t have a good answer. And so, I think I need to stick to only reading articles here, at least for a while. Thanks Steve and others for the discussions of late, I have genuinely enjoyed them.

    For the record, I’m not saying I’m right and others are wrong; that’s not at all the point of this post. What I am saying is that MIA, for all its awesomeness, is a lonely and contentious place for someone like me who is both a fierce opponent of the biomedical model and a staunch advocate of rigorous science and the useful psychosocial approaches derived from it. When it comes to biomedical paradigm criticism, I think MIA nails it, hands down the best resource in the world. But this site is all over the shop when it comes to identifying and encouraging useful science-based alternatives to psychiatry’s drug-based paradigm of care. And the antagonism toward all things science-based and brought to you by mental health professionals, no matter how well-established and useful, gets old after a while, at least to me. Alright, I’ve said what I wanted to say, thanks for reading. Catch y’all later. -Brett

    Report comment

    • Not sure why we appear to be arguing here. I have never spoken against using science to study human behavior. I am a scientist (chemist) by training and am well aware of the advantages and limitations of the scientific method. I also posted my clear understanding that general trends can be arrived at scientifically using norms and averages. What I objected to is the idea that certain therapy “brands” can be identified as “evidence based” and therefore considered reliably better or more effective than those lacking this “evidence base” in most or all cases. I and others have outlined in several posts both the limitations of such evidence when applied to individuals AND the financial and other biases that warp the “evidence base” in favor of certain kinds of interventions (drugs being the MOST supported by “evidence” because, of course, they get the most funding for research since they make the most MONEY.)

      It seems to me that your arguments mostly support an evidence base for certain very specific signs/symptoms being best approached (at least initially) by certain means. I don’t have any real argument with that. The problems arise when we either overgeneralize (if phobias are most likely to improve with exposure therapy, then exposure therapy is the best therapy for ALL forms of anxiety) or fail to adapt to individual circumstances (some people WON’T improve with exposure therapy but will with something else, and some don’t have elimination of that “symptom” as their goal). So the science involved in psychology is mostly applicable to people considered as a group, but only if that group has very specific characteristics in common, which we all know most people working in the field don’t grasp.

      The greater danger of “EBP”, though, is when it is applied to entire DSM categories, or to therapy in general. I can’t tell you how many stories I’ve heard where CBT or DBT (or drugs) have been forced on someone because “that’s what works” or “it’s evidence-based practice.” I recently heard from someone that they and others were FORCED to do “mindfulness” exercises every day as part of a DBT group (they got in trouble if they refused). I’m not sure how familiar you are with mindfulness, but I think it’s fair to say that forcing someone to do mindfulness exercises is deeply ironic and defeats the very purpose of the concept, kind of like your Dad saying, “We’re going on a trip and you are going to have FUN, do you hear me? FUN, whether you like it or not!” Oddly, this person did not find “mindfulness” very helpful…

      You can (perhaps rightfully) claim that this isn’t really the “EBP” that was studied, and that it’s being misapplied, but that is what clinicians tend to do when presented with this EBP concept. As a scientist, my observation over many years is that science is by far the best at showing what DOES NOT work by falsification. This is particularly true when humans are involved. It is, again, arrogant, in my view, to suggest that a particular therapy is THE BEST for any diagnosis, or that ANY particular therapy is bound to work for a particular person. I think that science itself has shown us that the DSM categories are not scientific entities and are grossly heterogeneous, such that proposing any single solution based on grouping people together by how they feel or think is bound to lead to unscientific practices.

      BTW, I noticed you didn’t address my point regarding “EBPs” for a diagnostic category like “Major Depressive Disorder.” It seems to me that your example of phobias is a very focused and specific category compared to any DSM diagnosis. Do you think there can be an “evidence based practice” that applies to all people who are diagnosed with “MDD” or “Bipolar disorder” or “Borderline Personality Disorder?” Do you see a danger in prescribing a particular approach to take with ALL people in such a category? Do you see potential corruption in marketable “workbook strategies” that would encourage the marketers to claim more general success than is actually observed?

      In my experience, what these “EBPs” provide are concepts that can be applied or attempted in specific situations. But the bottom line continues to be whether or not the clients themselves accomplish their goals, in their own opinion. Any clinician who considers his/her school of therapy more important than the client’s response is going to have a lot of failures, and will be very tempted to blame his/her clients for those failures instead of coming up with a different approach.

      Report comment

  7. I realise I am not capable of meeting the purity test requirements of those who dominate the comments section here. This is true despite my rejection of the biomedical model, which I describe on my own professional website. I’ve said as much in many blog posts and published articles. I have given dozens of free copies of Anatomy of an Epidemic to my clients in the past year. I reject the notion of “mental health,” as opposed to “mental illness,” and I talk to my clients all the time about this. I could go on and on, but what’s the point? I do not belong here, according to the culture of the comments section at MIA, and I’ve finally come to understand that. People who share my scientific values do not belong here. This helps to explain why MIA has made basically no inroads, and indeed has no chance of making inroads, in the world of non-biomedically-oriented “mental health” professionals who care about science, as long as the present culture remains intact. I’m sure I will get flamed for saying this but I’m also sure this is true, and I have a very educated opinion about this based on extensive personal experience. And that is why I have come to recognise the futility of being an active member in the comments section of this website. And so, with respect, I am bowing out of discussions here. I wasn’t planning on writing this but felt compelled to do so to respond to some recent posts. That’s it for me, thank you.

    Report comment

    • Hi Brett and Everyone

      I realize that I am way late to this discussion/debate. I found it very interesting and educational, and despite the fact that there was no clear resolution, it IS an important topic to be discussed.

      It is too late for me to go back into the main points, which were all covered quite thoroughly by many commenters. As with many discussions, we MUST continue to thrash them out (perhaps again and again) because the world needs solutions.

      Brett said the following (out of frustration and disappointment):

      ” I do not belong here, according to the culture of the comments section at MIA, and I’ve finally come to understand that. People who share my scientific values do not belong here. This helps to explain why MIA has made basically no inroads, and indeed has no chance of making inroads, in the world of non-biomedically-oriented “mental health” professionals who care about science, as long as the present culture remains intact.”

      Of course Brett belongs here and his presence has ALWAYS been positive and educational.

      However, your assessment that the “present culture” of the commenters at MIA IS the reason that more inroads have not been made influencing non-biomedically-oriented “mental health” professionals CANNOT be supported by science or any other method of analysis.

      We must first look at who has “STATE POWER” in society and controls the media, educational system and all other significant institutions involved in the promotion and control of the current paradigm of so-called “treatment.” They spend billions of dollars YEARLY to carefully control public opinion and maintain the power and authority of psychiatry and their entire paradigm.

      This class based capitalist system needs psychiatry (and all that comes with it) to maintain its existence, and it WILL NOT give it up just because some minority group can now prove the harm that it causes.

      Again, there are BILLIONS being spent here, and it is of tremendous importance to those in power that psychiatry and its “genetic theories of original sin” continue to be accepted widely throughout society, and that its legal and “scientific” authority be sustained.

      I am not saying that the culture of the comment section at MIA has zero effect on its participants. But we must put all this in the perspective of what we are truly up against here, and what it will actually take to end all forms of psychiatric abuse.

      People participating in discussions at MIA SHOULD NOT EXPECT that a highly rational discussion will always lead to a resolution, and then be deflated or demoralized if they feel misunderstood or don’t reach the desired conclusion. This rarely happens in the comment section.

      Sometimes we must just finish our main arguments and then move on. In the meantime, there is growing chaos in the world, and scientific and political struggle (of all kinds) is breaking out all over the planet. Tomorrow will be a new day, with new conditions and opportunities for us to make trouble and possibly small inroads against those in power.

      Everyone get a good night’s sleep and get ready to do battle in the new day.

      Carry on! Richard

      Report comment

    • There really are such things as “scientific values,” but unfortunately, anything can be co-opted.

      My understanding of scientific values includes: 1) Observable data is the only basis for determining what is true; 2) Human beings are inherently susceptible to confirmation bias, therefore, the primary role of science is to be skeptical and to intend to disprove potential hypotheses rigorously rather than searching for data to support them; 3) Scientific models are only as “true” as they are useful in predicting real data and events, and they are true only as long as they consistently produce this kind of result; 4) ALL data relating to any particular hypothesis must always be made available to all researchers – per #2 above, any data potentially REFUTING a hypothesis is particularly important to make available.

      There are more and there are other viewpoints on what makes an inquiry scientific, but the idea that scientists have some special knowledge and ability to determine truth and that those less qualified should stand back and let the scientists do their jobs is certainly not valid. Many scientific discoveries (or invalidations) are made by people in a different field entirely. Science is about finding the truth, and no one has special access to the truth.

      Report comment

  8. Self-assessment questionnaires seem an almost total waste of time to me. In the production line of suffering known as IAPT in the UK, they routinely use the PHQ-9 and GAD-7, two measures that measure nothing more than someone’s best guess at how they may or may not be feeling in a given subjective moment – near useless. Yet these measures are how the service deems someone to have received a “successful treatment,” an empty notion built on an empty measure: if your life is falling apart around you but you score “below clinical” on these measures, you have been successfully treated and are now in “recovery,” another empty term. These same empty measures are also linked to continued service funding, and they are the driving force behind the mass burnout of staff, as all anyone has time to care about are these useless self-assessment measures.
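For readers unfamiliar with these instruments: the PHQ-9 sums nine items, each scored 0–3, into a 0–27 total, and the service-level logic being criticised reduces to comparing that total against a fixed cutoff. The sketch below illustrates that cutoff logic; the threshold of 10 is a commonly cited clinical (“caseness”) value, though the exact cutoffs individual services use can vary.

```python
PHQ9_CLINICAL_CUTOFF = 10  # a commonly cited clinical threshold; services may vary

def phq9_total(item_scores):
    """Sum the nine PHQ-9 item scores, each expected in the range 0-3."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine items, each scored 0-3")
    return sum(item_scores)

def deemed_recovered(item_scores):
    """The service-level logic criticised above:
    score below the cutoff and you count as 'recovered',
    whatever else is happening in your life."""
    return phq9_total(item_scores) < PHQ9_CLINICAL_CUTOFF

# Scoring 1 on every item totals 9 - just under the cutoff - so this
# person is counted as 'recovered'; 2 on every item totals 18 and is not.
```

The snippet makes the commenter’s point concrete: the entire “recovery” judgment hangs on a single self-reported number crossing a threshold.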

    Report comment

  9. I’m done with MIA as well. It is next to impossible to have any sort of a conversation with anyone without comments being moderated away because everything is interpreted as a “personal attack” or not politically correct. Time to take my conversation somewhere where people are actually capable of reasoned and civil dialogue. I don’t know where that might be actually… perhaps on Mars.

    Oh, and “science” is not an excuse to avoid substantive questions or to assume some pretended mantle of authority, particularly when very few people seem capable of defining what “science” might even be.

    Thank you everyone. It’s been fun. If you’re interested to learn the truth about psychiatry, please consult the following website:

    Over and out.

    Report comment