Editor’s Note: Over the course of several months, Mad in America is publishing a serialized version of Sami Timimi’s book, Insane Medicine. This week, he explores the common factors that influence therapy’s success, the equal outcomes for different therapy types, and the over-promotion of CBT. Each Monday, a new section of the book is published, and all chapters are archived here.
The replication of findings is one of the defining hallmarks of science. For a finding by one research team to be more broadly accepted as a reliable finding, other teams should follow a similar methodology and get similar results. Replication of results is a necessary step to something becoming part of scientific knowledge.
This process protects against what are known as “false positives”: results announced by one team that other research teams cannot reproduce, and that are therefore likely not to be true findings.
I have already given several examples of how biological research into psychiatric conditions has faced a replication crisis. Actually, the whole of psychology is facing a replication crisis. It turns out that many studies in psychology—including famous and highly cited studies—do not adequately replicate.
To give you an idea of the extent of non-replicability of psychology findings, a group led by University of Virginia psychologist Brian Nosek reported in 2015 that only about a third of the psychological studies in premier psychology journals replicate.
This problem is particularly pronounced for social psychology. One of the issues psychological research encounters is bias in the populations studied. Nearly all psychology research takes place in the developed world of Europe, North America, or Australasia. Even in these locations, certain captive populations, such as university students, account for a disproportionate share of research subjects.
This inherent bias means that what we refer to as “psychology” is really the psychology of Western societies and, largely, Western educated society. The so-called science of psychology struggles to overcome this major limitation. The minds of those living in a poor Nairobi slum will differ in important ways from those living in a predominantly white middle-class suburb of Birmingham, who will in turn differ significantly from those in a neighbouring Birmingham suburb that is also middle class but made up predominantly of Pakistani immigrants.
Psychology is rooted in subjectivity and cannot escape it. Whilst there are a number of aspects that are more mechanical, such as reflexes, perception, and motor control, and thus more amenable to a natural sciences investigative approach, most of our psyche exists in a subjective space that we are not able to tap into by purely objective methods.
The research that tries to establish elusive psychological laws is of less interest to me than that which acknowledges and investigates our differences. These differences arise from the multiple sources of subjectivity that may affect our mind from our personal histories to the cultural beliefs we are exposed to.
For example, Richard Nisbett and Takahiko Masuda in their classic research touched on some broad differences in the way Japanese and Americans tend to perceive the world. This includes the observation that when American and Japanese participants were asked to take a photo of a person, the Americans most frequently took a close-up, showing all the facial features, while the Japanese were more likely to take a picture that showed the person in his or her environment, with the human figure quite small. They interpret the findings from this and similar studies as showing that Americans are more inclined to attend to some focal object, analysing its attributes and categorising it, whereas Japanese participants are more likely to attend to a broad perceptual and conceptual field, noticing relationships and grouping objects based on family resemblance rather than category membership. Whatever the reasons, it is these differing subjectivities that shape our experience and interpretations of the worlds we exist within.
Most of psychology is therefore not reducible to easily quantifiable universals that can be measured with a neutral objective eye. All we can really measure by the empirical methods of natural science are inputs to the person (environmental stimuli) and outputs from the person (functioning in response). What happens between the input and the output is simply not available to any revealing and “objective” analysis. You cannot get a proper handle on what is happening in the mind by clever neuroimaging or giving people complicated puzzles to solve.
The story of Breaking Bad will not reveal itself by examining the TV hardware for patterns of electric current activity. We cannot measure meaning. “Cognitive” is essentially a construct. We do not have reliable insight into the processes that occur beyond input and output. We have no window on the mind. We cannot escape subjectivity.
What we have are different philosophies that enable us to construct meanings. Psychology is but one branch of philosophy expounding a particular Western-centric view of the mind.
According to research, are some psychotherapies better than others?
The answer is no.
Just as in psychology in general, the mental health field has increasingly faced questions about its conventional treatments and outcomes, as service users, researchers, academics, and others have cast doubt on its ethical standing and scientific status. There are over 500 different forms of therapy documented, and every year new ones come on stream.
Not only has this proliferation of models not resulted in improving outcomes, but studies also show that psychotherapy is less effective for those who are poor, have minority status, or are on antidepressants.
The field has been troubled not so much with a replicability crisis, but rather with the repeated finding of a lack of progress. Nothing in therapy seems to be getting better. Controlled trials testing the efficacy of therapies began using the sort of methodologies we now recognise, and use regularly in research, in the 1970s. Studies carried out since then with different therapeutic modalities have not shown improved rates of recovery as a result of treatment. Some comparisons even suggest outcomes from therapy in controlled trials have got slightly worse over time.
Therapies have hung on the coattails of psychiatric paradigms, selling themselves as particular treatments for particular diagnoses. This is how they are now categorised and researched. But this “battle of the brands” has revealed no psychotherapy that proves superior when properly compared to another recognised psychotherapy delivered by therapists trained in the model they implement.
What does the outcome research show, then?
The good news is that those who volunteer to go for psychotherapy will do better than those on a waiting list. But what is most potent about therapy turns out to be humbler and more human. The variables that most influence outcome from treatment are factors outside of therapy. These are all the things people coming to treatment walk in with: their personal histories, socioeconomic status, financial situation, social network, beliefs about the treatment on offer, and so on.
According to researchers, factors outside of therapy account for anywhere between 40% and 85% of the influence on outcomes for treatment of the common mental health conditions. The outcome studies use “symptom”-based questionnaires to track improvement; a dubious focus that doesn’t necessarily tell you much about functioning or quality of life.
Within treatment, the factor that has the biggest impact on outcomes is the therapeutic alliance as rated by the patient. The therapeutic alliance is a complicated construct involving not just a sense of empathy (or unconditional positive regard)—important as this is, along with a sense of genuineness and interest in the person from the therapist—but also things like having some common idea on what the goals of therapy are, how change might occur, and trust that the person is knowledgeable and effective.
It isn’t even about agreeing. Some research suggests that being able to repair what is referred to as a “therapeutic rupture”—in other words, being able to reconcile after a falling out—is particularly associated with positive outcomes.
These two factors—the extra-therapeutic context and the therapeutic alliance—are intertwined. For example, patient expectations, and particularly a prior belief that the treatment is going to be helpful, may be the biggest “extra-therapeutic” factor, but also has a big impact on the subsequent therapeutic alliance. Not surprisingly, positive expectation that the treatment will be helpful is predictive of a positive therapeutic alliance, which in turn is predictive of a positive outcome from treatment. All pretty obvious, really.
You see, real human relationships are messy. Not unexpectedly, then, so are therapeutic relationships. Relationships are central to how we understand and construct subjectivity, and psychology and psychotherapy cannot escape this very human reality. Here then is the unspoken dirty secret of the therapy world: It’s not that any formal therapy is better than other formal therapies, it’s that some therapists are better than others.
What makes one therapist better than others seems very hard to capture. Research has shown no consistent associations between things like therapists’ style, profession, ethnicity, gender, age, experience, and so on, and the outcomes they achieve. Each person/therapist pairing is in fact unique and will generate their own unique dynamic, and so generalisations are elusive. You cannot escape the inter-subjectivity of the human encounter.
Just as the factors affecting the therapist’s side of the alliance resist easy identification, so too do those on the patient’s side. The quality of the patient’s participation in the treatment appears to be the most important within-treatment determinant of outcome on the patient’s side of the equation. This is basically another way of saying that patients who are ready and willing to make changes are more likely to develop a positive therapeutic alliance and thus do better in treatment.
Is it just me, or does all of this seem fairly obvious? Is the “science” just telling us that what we might judge from a human relational rather than technical perspective to be most important, is in fact the most important?
As far as the findings from research on what most affects outcomes go, these are the headlines: model of treatment (brand of psychotherapy) isn’t a key mediator. Factors outside of treatment (your real-life history, context, and attitudes to treatment) have the biggest impact, and within treatment it’s the therapeutic alliance (therapist and patient fit) that has the biggest impact.
That’s pretty much it. Other attempts to drill down and distil more specific factors have run into that problem of replicability.
This hasn’t stopped claims of brand superiority. The belief that one (or more) therapy would prove superior to others for particular conditions has found little support when properly investigated. This was becoming increasingly evident as far back as the late 1980s as the critical mass of data, when viewed all together, was revealing little in the way of clinically significant differences in effectiveness between the various treatment models for psychological distress. Despite the development of many new techniques, the outcomes being achieved in studies conducted 50 years ago have remained broadly similar to those being achieved now.
Cognitive behavioural therapy (CBT) has emerged as the “king” of psychotherapies. It is regularly touted as the treatment of choice for many conditions such as “depression,” “anxiety,” “OCD,” and even for the psychological treatment part of psychosis. Yet when properly controlled studies are done comparing it to other brands of therapy (such as psychodynamic or interpersonal therapy), it fares no better or worse.
On top of this, several studies have shown that most of the specific techniques of CBT can be dispensed with without affecting the outcomes. In other words, if you take out any one feature of the standard CBT model (for example a requirement to do certain types of homework) you get the same outcomes as if you had included them. This is also true for other therapies. Why is CBT “king,” then? I think this is related to how fads enter into the professional, and then public, “common sense” through a macho marketing process of “I’ve got a bigger one than yours.”
The “I’ve got a bigger one than yours” phenomenon simply relates to volume of publications, professional attention, media attention, and training courses, which in turn foster more training, research, publications, and media attention. CBT started gaining ascendance in the 1970s, when it began displacing the (until then) more favoured and established psychoanalytic approach to psychotherapy.
With its emphasis on thinking patterns, it was easier to research, and the early CBT researchers developed many of the research questionnaires still used to this day for investigating response to treatment for various psychiatric conditions. In outcome research, then, CBT stole a march on other therapies and rapidly established an “evidence base” showing that CBT worked better than a waiting list.
As the volume of research into CBT rose, so did its popularity in training and research and its reputation. When researchers finally conducted properly controlled trials making a direct comparison between CBT and other established therapies, the fact that there was no difference between them didn’t matter. CBT had already established its reputation as “the” therapy and its tentacles had already spread far and wide. Like a local burger joint trying to outdo McDonald’s on the high street, it was now more arduous to get your brand recognised next to the well-known giant. “I’ve got a bigger one than yours” usually wins in the capitalist marketplace.
These findings of treatment model equivalence extend across the spectrum of treatments. Thus, treatments for children, adolescents, and families are also characterised by the same, rather obvious common factors: real-life histories and context have the biggest impact on outcome, followed by the therapeutic alliance.
Thus, although various treatments are more effective than no treatment (at least in the short term of most study durations, typically a few weeks to a few months), no difference in outcome is found between cognitive and non-cognitive approaches. Similarly, component studies find that the theoretically claimed “critical” ingredients of CBT for young people are not specifically necessary, as full CBT treatment offers no significant benefit over treatments with only partial components of the full model.
What about outcomes in clinical settings?
Although some studies have found that outcomes for therapy in some clinical (real-life) settings are comparable to those found in research settings, most have not. Those who have examined outcomes for patients who are accessing treatments from standard community mental health services, who will thus be subject to all the medicalising tendencies I have already discussed, have found their outcomes to be very poor indeed.
The unacceptable picture that emerges when we look at patients attending such services is that only 15-25% report ongoing clinical improvement, way off the 50-80% who might do so according to research.
In child and adolescent mental health, some evidence suggests even larger differences in outcomes between research and clinical practice than for adults, with some studies finding that there isn’t even a short-term difference between those with a similar level of distress (according to a rating questionnaire) who attend a service and those who don’t. Other studies suggest that those who stay with the service longer may have worse outcomes than those who don’t stay with services.
Some studies have had ambitious “service transformation” aims. The “Fort Bragg” evaluation, a famous study in the US during the 1990s, involved an all-singing, all-dancing, state-of-the-art, $94 million demonstration project designed to improve mental health outcomes for children and adolescents who were referred for mental health treatment.
The Fort Bragg project was a beefed-up treatment pathway model with all the bells and whistles. In other words, it was designed around prioritising technical aspects through making a diagnosis and then allocating a specific therapy or therapies based on the idea that particular diagnoses need specific treatments, whether this is a specific medication or brand of psychotherapy.
Extensive data were collected on children and their families, and evaluations continued for several years. Outcomes in the experimental service were no better than for those who attended a neighbouring comparison service, despite the considerable extra costs incurred. The researchers then thought they could iron out some of the problems and so went on to design an even better service in Stark County. Evaluation of that service compared to another that didn’t receive all these extra funds, again found no differences in outcomes.
Scaling up the existing diagnosis-followed-by-treatment pathway doesn’t improve outcomes. The problem isn’t how to scale up the technical aspects of care. The problem is the false, non-evidence-based paradigm that can’t let go of the illusion that we have made progress with the technical aspects, and that therefore ignores the centrality of the human aspects—real-life contexts, subjectivity, and the interpersonal.
There are also a number of other reasons why patients in clinical practice don’t do as well as those in research, including: research tends to keep out patients who are more difficult to treat (for example, those with several problems); variability among therapists is the rule rather than the exception; patients involved in research tend to be more motivated and optimistic about change; and those who attend clinics are more likely to “relapse” and end up exposed to several interventions, including medication, that are likely to worsen rather than improve the long-term outlook.
This means your typical psychotherapy research patient may not be quite the same as your typical mental health service patient, particularly once the patient has been accessing services for a while.
The set-up in mental health services privileges the technical illusion that the evidence-based way to practice is to make a diagnosis, which then leads to specific treatments, whether this is medication or a particular brand of therapy. As a practising psychiatrist who has worked in services with plenty of other psychiatrists and had contact with a whole variety of mental health teams, from crisis to inpatient, from outpatients to specialist teams, I can confirm that the way services are designed is quite ignorant of the evidence.
The service paradigm of diagnosis, followed by a standardised “treatment pathway,” simply replicates and embeds the belief that we have a technology that accurately categorises patients’ complaints in order to deliver some specific corrective. It is this surgical model that dominates and renders people passive recipients of a fantasy of skilled expert intervention. By surgical I mean that the human factors, like a good alliance, are the anaesthetic to render the patient under your spell so that you can “get in there” with the right therapy brand and excise or manipulate the malfunctioning units. This is how we work and what dominates the way most services operate. This type of disempowerment creates long-term patients.
The gap between research outcomes and real-life clinical practice in standard mental health services illustrates how deeply dysfunctional and blind to the science these services are. This has incorrectly been blamed on mental health services being too stretched, the argument being that we had become too reliant on medication and weren’t providing enough “talking” treatments.
In response to this critique, in 2007, the UK Government embarked on an ambitious project to scale up the availability of psychological therapies in England, to provide better support for people with conditions such as anxiety and depression.
The project, called Improving Access to Psychological Therapies (IAPT), planned to dramatically increase access to these therapies with the aim of decreasing waiting times and allowing more people with common mental health problems to recover. IAPT, in service terms, has gone from strength to strength, expanding year after year to allow more and more people to access the therapies they have on offer, with referrals to the service rising to over 1.5 million in 2019. However, from the start IAPT has come with considerable baggage in terms of both ideology and results. Its mixed reputation has not dented its continued expansion.
IAPT makes the same mistake of valorising the technical aspects over the human aspects. Just as with mainstream mental health services, they have not understood, or perhaps do not want to understand, the implications of the evidence base. Thus, certain therapies, particularly CBT, are fetishized and make up the bulk of what is delivered by IAPT.
A second problem relates to its origins in an economic model promoted by the economist Lord Layard. Layard envisaged common conditions such as anxiety and depression as broadly naturalistic phenomena that affect individuals just as other diseases might, and that result in significant loss of productivity through days lost to sickness. Layard believed that scaling up treatments for these conditions would lead to more people returning to work, resulting in increased national productivity, thereby more than paying back the government’s considerable investment in the service.
In this way, some of the problems of the economy can be “magicked” away by treating these “ill” people. Not only does this obscure the social origins of many people’s anxiety and depression (for example, in poor working conditions or chronic job insecurity), but it also paves the way to individualising economic woes and psychologising the economic consequences of failing social conditions under capitalism.
A third and related issue is that IAPT uses a model of mental health that contributes to individualising, medicalising, and industrialising proposed solutions to a variety of psychosocial stresses that have been shown to be associated with an increased likelihood of developing a mental health problem. The result is an upside-down strategy to improving wellbeing where Lord Layard, who was advising the government on mental health at the time, announced that in Britain mental illness has now taken over from unemployment as our greatest social problem.
In such a model, whether intentional or not, the political and economic order benefits when distress or dysfunction that may connect to its policies and practices is relocated from a socio-political space—where it is a public and collective problem—to a mental space—a private and individual problem.
It can also obscure other sources of powerlessness and oppression, such as those connected with child abuse and other adversities, where formulaic therapies on offer may accidentally reinforce some people’s conviction that the adversities they experience were their fault.
Not surprisingly, the IAPT project leaders have reported favourable results being delivered by their services, claiming that over 50% of those who complete a treatment episode recover. However, their reports do not include comparisons with the costs and outcomes achieved by non-IAPT services. Nor do they have a natural comparison group; therefore we don’t know how many would have got better with time anyway.
Finally, their figures do not account for what happens to those refused treatment or who drop out after one session. In 2019, for example, two-thirds of patients seen by IAPT did not complete their treatment. Once you start to dig into their data, a picture of poor outcomes similar to that of other mainstream mental health services emerges.
The first independent evaluation of the initial IAPT pilot sites found little difference between the IAPT sites and comparator services, but IAPT treatments had cost more per patient than those provided in neighbouring boroughs. According to reports compiled by the Artemis Trust in 2011, the average number of patients achieving recovery for a fixed expenditure when treated by an IAPT service was far lower than for pre-IAPT primary care counselling services or voluntary sector counselling services. In addition, recovery rates, as a percentage of patients referred, were lower for IAPT services than comparable services.
Looking at IAPT’s own data also reveals considerable variation between IAPT services. Their data shows that the prevalence of mental health problems is greater in poorer areas and that these areas have lower average recovery rates from IAPT treatment.
But it may be even worse than that. A paper published in 2018 described a detailed evaluation of a group of patients who had been discharged after being seen by IAPT, allowing comparison of IAPT’s data for these patients with various other indicators. IAPT’s data claimed 44% of these patients were “moving to recovery.” Local GP data revealed a recovery rate of 23% in this group, whereas the author’s own study data found that only 9.2% of these post-IAPT clients could be regarded as recovered when using a more in-depth standardised semi-structured interview.
The researcher also documented the patients’ accounts of their interaction with IAPT, which supported the impression that the treatments they were offered made little real-world difference to their problems.
Like other mainstream mental health services, IAPT fails to improve outcomes.
But perhaps IAPT has helped reduce medicalisation and use of medication by providing extensive access to talking therapies? Again the answer is no. Prescriptions for antidepressants have continued to rise with little evidence that introducing IAPT has had any meaningful impact on these trajectories. In the years since IAPT’s national implementation, prescriptions for antidepressants have risen steadily, with the increase accelerating after 2008, coinciding with the financial recession.
Access to the much-vaunted IAPT programme is not associated with the extent of antidepressant prescribing, with availability of IAPT services having no impact on the continuous increase in antidepressant prescribing rates. As the service is not effective, and as it is parked firmly in the technical model of diagnosis followed by specific therapy, it cannot reduce or challenge the continued medicalisation of suffering.
In 2011 IAPT gave birth to the Children and Young People’s Improving Access to Psychological Therapies (CYP-IAPT) project. CYP-IAPT had a wider remit than IAPT, focusing on improving the skills of the existing Child and Adolescent Mental Health Service (CAMHS) workforce by training CAMHS staff in the implementation of particular therapies, mainly CBT and a specific form of “family work.”
In addition, CYP-IAPT included the more ambitious aim of transforming the whole CAMHS service nationally. Unlike adult IAPT, which was set up as a separate project providing a separate service to standard community mental health services, CYP-IAPT was designed to become the model for delivering community child and adolescent mental health services. CYP-IAPT broadly conformed to the principles of IAPT and was imposed on all community child and adolescent mental health services in England in 2016, which were now required to have a standardised national service model of a complex and bureaucratic structure based around diagnostic “care pathways.”
In 2014 and 2015 I had a debate, conducted through the journal Psychiatric Bulletin, about CYP-IAPT with the national project leads for IAPT and CYP-IAPT. In my initial paper, I pointed out the poor evidential basis for these services and questioned the legitimacy of the paradigms they were using. In their reply, Peter Fonagy (head of the CYP-IAPT) and David Clark (head of IAPT) failed to address the problem of their reliance on diagnostic-based guidelines and instead argued that their technical approach supports collaborative working. Typically, their model of collaboration is the pre-surgical anaesthetic I described earlier, where collaboration is about bringing the patient under your spell in order to deliver the “right” treatment.
They ignored the problem of patients who do not want the model of therapy that the CYP-IAPT protocol has told them they should have or what happens if that therapy doesn’t prove helpful. They quoted all sorts of largely irrelevant data (such as percentage of services working to CYP-IAPT principles, percentage of clinicians using routine outcome data, and so on) to support their claim that CYP-IAPT is improving services.
Tellingly, they did not reference any patient outcome data, even though collecting it was a core feature of the project. I later found out that the clinical outcome data was likely already in their possession when they wrote their reply. The first national audit of CYP-IAPT data was quietly published in September 2015. It showed rates of clinical “improvement” (as a result of a CYP-IAPT service treatment) from their national pilot sites of between 6% and 36% across the different treatment pathways.
No wonder they didn’t mention outcomes in their reply.
In Part 2 of this chapter, we will explore the use of Western “folk psychology” as the “technology” underpinning therapy.
Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.