From the Health Minister Down, Nobody Is Seriously Interested in the Quality Control of Mental Health


Last month I asked the newly appointed UK Health Minister, Dr Coffey, seven questions about whether our mental health services are credible. On October 12th 2022, I received a response from one of her aides that began by informing me that NHS England would be best placed to respond to my queries! Only two of the seven questions (reproduced at the end of the blog) are referenced in the reply, and only one is actually answered.

With regards to the answered question: given that the government is introducing Community Diagnostic Centres for physical conditions, I had asked whether they were going to follow suit for mental health. The response was that “there would be significant challenges to making this approach work for mental health conditions. Diagnosis for mental health is less straightforward…would require a significant expansion in numbers of mental health staff… no plans at present to replicate this model for mental health.” So unbridled clinical judgements will continue to reign.

Is it any wonder that in the real world, the output from mental health therapists amounts to so little? My own independent examination of 90 cases passing through the UK Improving Access to Psychological Therapies (IAPT) Service revealed that only the tip of the iceberg recover, in the sense of losing their diagnostic status.

Official monitoring is like judging a cookery competition: the judges might agree on features of the fare, e.g., presentation, but without tasting the produce, the judgements are meaningless. The ultimate metric is whether psychological treatment makes a real-world difference to clients’ lives. But the data proffered by official agencies cannot answer this key question. Their claims are like those of a totalitarian state in which the populace/consumers are not seriously considered.

NICE Intentions

In the UK, the National Institute for Health and Care Excellence (NICE) guidelines are the ultimate arbiter of physical and mental health care. Failure to comply with them leaves a clinician open to possible legal action. Clinical commissioning groups operate with the belief that the services they fund are NICE compliant; service providers assiduously assert that they are indeed NICE compliant. But it can be argued that, at least as far as mental health is concerned, the NICE guidelines function as a revered cookbook with no evidence of fidelity to the recipes at the coal-face. The guidelines are a committee’s take on good recipes; they represent the committee’s distillation of evidence-based studies. The influence of NICE is, at best, distal.

The Trickling Down of Good Mental Health Provision?

Ultimately, politicians determine what is spent where. But they have no unique body of knowledge with regard to mental health, and are therefore particularly vulnerable to the effects of lobbyists. A politician who takes a particular interest in mental health is likely to appear particularly credible, “one of our own,” even more so if they recruit an eminent, eloquent mental health expert. The politician will likely opine that, for success, it is not necessary to demonstrate to colleagues that a certain mental health modus operandi is evidence-based, simply that it is “plausible,” e.g., “investment in psychoeducation in schools (or drop-in centres) will reduce levels of anxiety and depression in adults.”

Whilst politicians in collaboration with charismatic mental health professionals have brought about mental health policy changes, they have shied away from independent mental health audit.

Politicians and professional bodies may also look to government-funded bodies (e.g., the Department of Health in the UK, or National Institute of Mental Health in the US), to take the lead on implementation and monitoring of evidence-based treatments. But they are likely to take action only if prompted. For example, in the UK following the NICE guidelines on chronic fatigue syndrome (2021), the Department of Health began a consultation with a range of professional bodies with at least some stake in mental health, not all of which decided to participate.

Distilling a joint statement in such a context is a monumental task, and important issues can easily be kicked into the long grass. The inclusion of persons with lived experience of mental health problems (including professionals affected by the particular mental health problem) in the deliberations has borne no obvious fruit to date; rather it has become an article of faith that this is the way forward.

The evidence that politicians and professional bodies have aided in the translation of positive results from randomised controlled trials into practice is lacking. One can only guess at why, in the UK, the National Audit Office (NAO) discontinued its investigation into the government-funded Improving Access to Psychological Therapies service. The NAO’s brief is to help Parliament hold the government to account and improve public services.

Determinants of Quality Control

There are five proximal influences on the quality control of therapeutic output: a) the courses accredited, b) external examiners, c) supervision, d) service providers, and e) professional bodies. Individuals often exert their influence simultaneously via a number of routes. But the prime movers across these sources of influence belong to a managerial class. They may micro-manage what happens at the coal-face but they are not visible there. Importantly, they do not assess and treat clients with the limitations of the “coal-miner.” Further, the managers are not involved in research and development on the product.

Considering the proximal influences in turn:

Accreditation of Courses

Most mental health courses are founded on an alliance of stakeholders. Often there is a prime mover in a university department, where the academic part of the course is located. But service providers also have an input, providing placements in which would-be therapists hone their skills. Representatives of “lead” organisations also have a say in the accreditation of courses. Thus, the establishment of a mental health course is usually the product of a working alliance of stakeholders, together with a co-opted member of the public with “lived experience” of mental health difficulties.

But the stakeholders have different agendas. The university has a prime concern with the monies the course can generate. The service providers are concerned with reputation management by links with the university, which in turn enhances their ability to secure funding from commissioning bodies (in the UK, Clinical Commissioning Groups). Representatives of lead organisations can enhance their CVs by involvement in the accreditation process. If asked, most of those involved in the accreditation process would probably say they are simply trying to make a difference and, although this is undoubtedly true, it is likely to be only a partial explanation of their behaviour.

The working alliance will likely fracture if one or more of those involved in the accreditation process asks, “What is the level of evidence that this course makes a real-world difference to those on the receiving end of the ‘trained’ therapists?” If an academic clinician voices such a question, he/she may well find themselves deprived of the service provider’s data; research has to be done within the metrics defined by the latter. Thus, the accreditation process is not a guarantor that the public’s mental health needs will be met.

External Examiners

External examiners are often headhunted by course leaders at professional conferences. Following the old maxim that people only appoint those like themselves, the possibility of disagreements is minimised. The university rests content with a statement from the external examiner that the said course is of the same standard as comparable courses in other academic institutions. Course leaders provide external examiners with a sample of course work to be evaluated with the approved metrics of a professional body. The system is designed to ensure both conformity and uniformity. At best an external examiner may be able to nudge a course in a slightly different direction. I know of no data on the proportion of external examiners who resign before their tenure is completed.


Supervision

What is the primary purpose of supervision? The response of most professional and would-be professional therapists is “to promote professional development/growth.” But development and growth towards what? Almost a decade ago I suggested the primary purpose of supervision was to act as a conduit for evidence-based treatment. This has been met with a deafening silence. Workshops on supervision have become a rarity: just one at this year’s 50th Anniversary celebration of the British Association for Behavioural and Cognitive Psychotherapies (BABCP). The most common mode of supervision in low-intensity IAPT involves two to three minutes of discussion per case and is, in effect, managerial rather than clinical supervision.

Service Providers

There is a plethora of service providers, from government-backed agencies such as IAPT to charities such as Mind. But the latter ape the former to secure funding. Independent private practitioners engage in a similar form of imitation, in the UK, to ensure funding from insurance agencies like Bupa. They all utilise a meaningless, self-serving measuring tape that has two key features: a) test scores invariably get lower with the passage of time, and b) clients do not wish to be discourteous to their largely well-meaning therapists and oblige by endorsing improvement. Service providers thereby maintain the fiction of recovery.

Professional Bodies

Bodies such as the British Psychological Society (BPS) validate courses such as IAPT’s low-intensity course. But this was done without any review of empirical evidence, simply at the behest of BPS members who had volunteered themselves for a validating committee. BABCP accredits 60 courses including CBT courses, IAPT trainings, and clinical psychology doctorates, with members invited to join the Course Accreditation Committee. As the self-proclaimed lead organisation for CBT, it dictates how therapists should be assessed with the cognitive therapy rating scale revised for depression, despite therapists not being taught to reliably identify which disorder the person is suffering from. It further runs approved supervision training courses. All this despite a paucity of evidence that clearing the hurdles of BPS or BABCP makes any real-world difference to clients’ lives.

Saying more than is known looks very much like an exercise in power.

This analysis gives rise to major questions:

  1. How do therapists fare at the coal-face? In the UK, 42% of GPs wish to retire in the next five years, which will bring the NHS to a state of near collapse. I do not get the impression matters are any better in the government IAPT service, with most complaining of burnout.
  2. Is diagnosis the least worst metric for measuring outcome? If, after psychological treatment, a person cannot report, to an independent assessor, that they are either back to their old selves or their best and have been for what they regard as a significant period, it is difficult to see how the intervention can be judged effective. But the answers to these questions would lack credibility if the person had not also lost their diagnostic status according to a standardised semi-structured interview. Without the foundation of the diagnostic interview, it is possible that the reported improvement is simply a reflection of their need to feel that they (and their therapist) have not wasted all their time in going through therapy. So it is not that diagnosis is the least worst metric; it merely enhances the credibility of claims of a real-world gain.
  3. Should we be using a quality control metaphor? For some, the use of a quality control metaphor regarding psychological therapy is anathema. But there is a certain poetic licence in metaphor; it is used to make a point rather than to be taken literally. It is not at all meant to imply that clients are units of production. My underlying philosophical position is that we are not defined by information processing, but that it is consciousness that makes us human, and bestows our sense of self, purpose, meaning, values, and appreciation of beauty and love. Although these aspects of consciousness have correlates in the brain, they are not synonymous with them. In short, there is mind, not just brain. It is from the mind, rather than the brain, that the two crucial elements of therapy—reverence and honesty—are derived. There is no physical mechanism to generate these.
  4. Are self-report measures an impression management strategy? We are involved in impression management all the time (e.g., my wearing a tie for an interview when I don’t normally wear one). It is important to bear in mind that self-report measures are an impression management strategy that clients may use not only for others but also for their own sake.
  5. How well-grounded are alternatives, such as computer algorithms, transdiagnostic conceptualisations, and research domain criteria? It has been suggested that a combination of computer algorithms and key features from a person’s past will better predict who will benefit most from therapy. But, to date, this has not been established. However, it may be that our real interest should be in those who would likely benefit least: the poor, the disabled, etc. Transdiagnostic therapies have only been evaluated by those who have developed them and the field awaits independent replication. Similarly, there is a dearth of evidence that looking at research domains is any more fruitful than utilising diagnostic categories.
  6. How do we move forward? Psychological therapy needs to get back to basics by listening to clients’ narratives with a sense of reverence before helping them to consider reframing and reimagining their future. But all this must be done with a sense of honesty about the real-world limitations of the client’s actions, and the therapist’s limitations in bringing about system change.

The starting point has to be that clinicians need to engage in mental time travel and recall what made most of them move in the direction of a mental health career in the first place. Most will, I think, answer that it was to make a real-world difference to clients’ lives. The next step is to ask, “in all honesty, am I delivering on this?” If the answer is “no,” there has to be a major rethink—the sense of self is lost by simply escaping into a role.

The rethink needs to include a critique of whether the system within which they are operating is facilitating an encounter with the client in which the needs of the latter are paramount, so that the delivery of therapy is not solely a top-down process determined by powerholders, but also a bottom-up process involving a human encounter. The powerholders may have a unique body of knowledge (for example, about the strengths and limitations of a proposed intervention), but the message of the last decade or so is that they have prescribed beyond their knowledge.

We need to rediscover walking with the client, essentially a pilgrimage in which we are not waylaid by the powerholders. But for any real change to take place, the managerial class have to stop distancing themselves from where the action is and experience what it is like trying to treat people with limited training, performing the administrative tasks deemed necessary, and managing the uncertainty of being sanctioned if targets are not met. Unfortunately, I see no sign of managers getting their hands dirty anytime soon.

A letter in the October issue of The Psychologist rightly draws attention to unacknowledged and pernicious managerialism. The managers need to taste the fare.


Questions to the Health Minister from Dr Scott

[Explanatory note: In the UK, IAPT is the main provider of psychological therapy services in primary care and is free to the public. But there are other providers, such as the charity Anxiety UK, who charge, but on a sliding scale. Whilst they adopt the same assessment/outcome measures, the PHQ-9 and GAD-7, they do not operate a stepped care approach of usually low-intensity therapy first and then, if unsuccessful, high intensity. Most recently, Anxiety UK appears to have outperformed IAPT.]

  1. The Government Improving Access to Psychological Therapies (IAPT) Service is experimenting with public, direct access to a Psychological Wellbeing Practitioner (PWP). But PWPs are not trained in diagnostics, nor are they qualified therapists. Why then are they being given this gatekeeping role?
  2. The IAPT service has cost billions of pounds since its inception in 2008. Why then has there been no independent audit of the service?
  3. With regards to physical health, the Government is funding Community Diagnostic Centres; with regards to mental health, why is there no facility for reliable diagnosis in IAPT?
  4. With regards to mental health, there is no evidence that those availing themselves of IAPT fare any better than those attending the Citizens Advice Bureaux. What then is the added value of funding IAPT?
  5. How is the experiment of making PWPs gatekeepers being evaluated, and who decided on the criteria?
  6. IAPT’s claimed recovery rate of 50% has not been independently verified. The independent evidence of an Expert Witness to the Court suggests that, in fact, only the tip of the iceberg recover. Is this not grounds for a publicly funded independent audit?
  7. How do we know IAPT is value for money?


Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.




  1. The “Psychological Wellbeing Practitioner[s]” are not trained in diagnostics, nor are they qualified therapists. Why then are they being given this gatekeeping role?

    One must wonder that about all non-medically trained psychologists as well, however.

    “The IAPT service has cost billions of pounds since its inception in 2008. Why then has there been no independent audit of the service?” Good question.

    “why is there no facility for reliable diagnosis in IAPT?” Probably because all the DSM “diagnoses” – thus also the ICD “diagnoses,” or at least everything within them that the psychologists have believed in for decades – was declared “bullshit” and scientifically “invalid” in 2010 and 2013.

    No non-medically trained person should ever “diagnose” anyone, for any reason, with something claimed to be a “medical” diagnosis.

    “How is the experiment of making PWPs gatekeepers being evaluated and who decided on the criteria?”

    Since I’m in the US, and unfamiliar with UK systems, but know the satanic, systemic child abuse covering up, psychological systems in my country come from Britain, religions – and prior failed governmental, paternalistic, psychological control systems. I’d say that’s a good question.

    “Those who do not know history are doomed to repeat it,” and sadly those funding the American education system, have funded the repeating of the worst of history. We are dealing with a never ending “holocaust,” by the DSM deluded psychiatric and psychological industries, in my country.


  2. Great article. I’ve shared it far and wide. I’m always surprised at just how many people don’t know that our mental health services aren’t measured and tested.

    With regard to the computer algorithm, we have created exactly that in partnership with Keele University. It’s called Bessie, a bespoke evaluation and spotlight on stress in employees. We are also just about to launch Bessi, a version for individuals, and in 2023, Besss, a version for students.

    The truth is, most professional mental health services that we approach, don’t want their work and results measured. They refuse to take part. I can only conclude that they don’t really want to know the results.


    • Devastating article.

      Basically, most MH services have been cut while IAPT grew into a managerial-style nightmare, swallowing up huge budgets, helping very few, and making therapists miserable too.

      We have increased use of sections, i.e., forced detention under mental health acts, while people witter on about anxiety, depression and neurodiversity. The moderately distressed get inadequate pampering via IAPT while the severely distressed get tranquilizers and locked up.

      The managers do very nicely, everyone else is miserable.

      My diagnosis is a very weak left, or this wouldn’t happen.


  3. Hello Mr. Scott, I enjoyed your article, but the orientation of your solutions is part of what has made therapy less person-oriented and more pseudoscientific. This march towards specificity is probably inevitable, but also doomed by the complexities of human beings and their relationships. It is a little like trying to scientifically determine best parenting. It will always have competing theories. Some of what makes therapy stilted and de-humanized is this attitude that it can be measured. That contributes to demands for measured results and for time-limited treatment, and to therapists who put more value on technique than relationships.
    While it is beneficial for therapists to be educated, it is a misunderstanding to think it is only therapists who provide emotionally healthy growth. This is also the role of friends, spouses, relatives, teachers, doctors, every person who gives another person a more accurate way of seeing themselves or of reality. What may be one of the most important features of the therapeutic relationship is that the therapist needs to be healthier than the client. Unfortunately, very many seriously emotionally wounded people turn to becoming counselors and are too easily triggered by their clients or do not have the depth of nurturing that would be needed. The statement in your article that I most enjoyed: “How do we move forward? Psychological therapy needs to get back to basics by listening to clients’ narratives with a sense of reverence before helping them to consider reframing and reimagining their future.” Take care, David B., LCSW (retired trauma therapist at:


  4. Hi Michael
    What did you make of the previous article in today’s MiA newsletter, that kids who receive an ADHD diagnosis had worse quality of life and were more likely to self-harm when compared to kids with the same symptoms but no diagnosis? Are you open to the possibility that diagnoses are harmful in most mental health care? Did you see the interview with Seikkula last week, where he said, “not focusing on symptom interventions seems to remove the ‘symptoms’ most effectively, as we have repeatedly seen in several studies.” If you had your way you would make the diagnostic interview a foundation, which would in effect outlaw Solution Focused Brief Therapy.
    I suggest you engage in a wider reading base – and ask questions as to whether this field (MH) is similar to, or so dissimilar from, medicine that it doesn’t lend itself to measuring in this way.


  5. Since I wrote my blog, NHS England have replied, or rather not replied, suggesting that I put the questions to my local Integrated Care Board (ICBs are the recent replacement for Clinical Commissioning Groups). But my questions obviously relate to national policy and not local difficulties! NHS England has chosen to duck the issues, taking a leaf out of the Health Minister’s book. I have now asked them for a considered response to my questions, but I’m not going to hold my breath waiting for an answer.
    Nick is absolutely right: when a diagnostic label is used with no agreed meaning, as in the case of ADHD, chaos is bound to ensue. Some ‘diagnoses’ have been based on self-report measures; some are based on clinical interviews, plus or minus neuropsychological tests and plus or minus information from informants. The plethora of assessment modes leads to an ill-defined population. The worse outcome for diagnosed children likely reflects arbitrary assessments. There is a need to more carefully define what it is that is being measured. The lack of standardisation of assessment opens the pathway for the use of heuristics, e.g. ‘ADHD is mainly a boy thing’, such that 90% of identified ADHD sufferers are boys, but the ratio of boys to girls is actually only 3 to 1! But not all mental health diagnoses are too fuzzy for use; for example, with the CAPS/SCID interviews for PTSD, agreements are of the order of 80-90% and carry clear implications for treatment.
    What would effective treatment with brief solution focussed therapy (BSFT) look like? To be credible it would have to be demonstrated that a) a significant proportion of those treated with BSFT return to being their old selves, or at least almost their old selves, with treatment; b) this proportion was significantly greater than in a comparison attention placebo group; c) the return to old self lasted for a period that was clinically meaningful to the person; d) the assessors were independent of the treatment providers and did not have an allegiance bias; and e) the study was such that the findings could be independently replicated, which would require a detailed specification of the population studied (in the absence of diagnosis it is difficult to see how this would be achieved). It is not so much that BSFT is outlawed as that it outlaws itself to bodies like NICE because of a failure to meet criteria a to e. BSFT has no inbuilt protection against the unbridled clinical judgements of its clinicians with regards to assessment and outcome, for example a BSFT practitioner declaring that treatment was a success because the client was now driving, when the reality was that the person was still suffering from PTSD, depression and binge eating disorder, with no systematic identification or treatment of any of them.
    I must look into Bessie; you’re right, service providers do not want independent assessment.


  6. At the end of my comment I suggested that a wider reading base may be helpful for you, Michael, to address this subject. The philosopher Wittgenstein ended his Philosophical Investigations by saying the barrenness of psychology is not due to its being a young science (as William James suggested), but to conceptual confusions. “The existence of experimental methods or science makes us think we have the means of solving problems which trouble us, although the problem and method pass one another by”. Confusions are clarified (i.e., the problem is “dissolved,” in Wittgenstein’s terms) by philosophy, although most psychologists don’t bother with philosophy.

    Being a little solution-focused, I ask what’s working now. A successful marker for Seikkula is people in full-time work or study post-intervention; another is whether they are medication-free; that ADHD study in Australia, which found diagnosis makes for problems, used quality of life (Q-O-L) measures. Those of us who live in small communities see the results amongst those who remain. I would further suggest that another, broader outcome is that a particular community would not be making as many referrals to MH services, as the community knows how to look after each other (which is a long-term effect of Open Dialogue, as veterans of previous OD meetings attend – which means the community is learning how to manage “alienation”). [Suggested further reading – “relational responsibility”]
    Michael wrote, “There is a need to more carefully define what it is that is being measured”. This is the rhetoric of science, but not everything can be or needs to be tightly defined – “stand over there somewhere ‘cos the light is right for photographing you”, you say to a friend – that doesn’t have to be exact to be “good enough”. Yes, self-report measures are limited, but only when they are the only measure. But they are a good start.

    Michael, I fear that psychologists who see themselves as scientists and not philosophers will foist upon all practitioners some requirement such as the diagnostic interview, in a similar vein to NICE (and utilise licensing bodies), claiming that it is more “scientific”, and trying to scare us all with horror stories of the widespread incompetence amongst us, but with the end result of making a rod for our backs. Spare me from such bureaucrats.


  7. We need to start doing it for ourselves. Most of the recovered go through Psychiatry but ultimately look outside Psychiatry for solutions. And People do find their solutions outside the Professional services.

    I believe there’s already a shortage of Psychiatrists in the UK (and Ireland) which is a good thing – as Drug Psychiatry has no answers.

    Besides this the country can’t afford to pay for lots and lots of invalids – that could get completely well. The Eyes need to Open.


    • Hello Fiachra,
      I completely agree! The world is just discovering the power to heal through experiencing Bi-lateral stimulation of the brain while visualizing traumatic events. There are dozens of self-help programs that help people do their own therapy. Se-REM (Self effective – Rapid Eye Movement) combines elements of 6 different therapies, but the heart of it is EMDR Bi-lateral stimulation. Its origin story was published last month in madinireland and July 2, 2021 in madinamerica. It is in use in 26 countries.


  8. Ah Nick, you are caricaturing me, I am vehemently against ‘scientism’, the view that science explains everything. Because there is nothing within science that can prove it is the explanation of everything. I have a great interest in philosophy/religion as the solution focussed colleagues I worked with just before the pandemic would testify, together with a commitment to reverence and honesty (not science products).
    I simply believe that we should have outcome measures that are meaningful to the client, such as ‘back to old self’, that are gauged by independent assessors; people generally don’t want to seem unappreciative of a therapist’s usually well-meaning efforts. Diagnostic interviews should only be a part of a conversation with the client, and in 30+ years of using them it has never been a problem with a client. They have just better illuminated the landscape for us both to traverse on a pilgrimage of equals.


    • I have always thought that the only one in any position to evaluate our “outcomes” is the person receiving the “help.” No “independent assessor” will be in a position to truly judge what is helpful, unless they are looking at therapy as a way to improve the behavior of the client for the benefit of society at large. Which is a VERY slippery slope!

      The client is the one who knows IF something is wrong, WHAT exactly is wrong, and what an improvement would look like. The therapist’s job, in my view, is to help the client come to understand their OWN view of what they want and what the barriers are to accomplishing it, and to help them circumvent or remove such barriers so they can succeed. It is not the therapist’s job to tell the client what is wrong or what they “should” do or not do. Most people are messed up specifically BECAUSE other people told them what was wrong with them and/or what they “should” do or not do. As such, the therapist is not in a position to decide for the client if therapy “worked” for them. If I think therapy was great and the client said it was worthless, it was worthless. All the more so with drug interventions. There is a balance between “reduced symptoms” and reduced quality of life that only the client can assess.

      So skip the outside assessor. Ask the client. They are the only ones who can define ‘success’!

      Report comment

  9. I am so against diagnostic interviews – they are not necessary for good therapy. I was very attracted to outcome measures that are meaningful to the people I’m endeavouring to help (I went to Chicago to study Scott Miller’s system 20 years ago; I have designed outcome measures for indigenous people, etc.), but they have the unfortunate effect of the tail wagging the dog – somebody wrote that psychiatry is heading towards a Stepford Wives scenario (on another thread this morning/evening). Simple and broad, I think, is best with regard to outcome: are they working or in full-time study, are they taking meds, a random sample at 2 years post-treatment and/or 5 years, a random sample of quality of life, etc. Too tightly defined and it becomes counter-therapeutic – you may be happy to do diagnostic interviews, but I find them a recipe for “stuck-ness”.

    Report comment

  10. You’re absolutely right Steve, the only person in a position to evaluate the outcome of an intervention is the person receiving it. But the context in which their opinion is elicited can make a huge difference. If asked on a self-report measure that they remember having completed at the start of therapy, they can indicate a lower score to give themselves a sense of hope and avoid the sense of having wasted their time. In addition, they are aware their therapist will see their completed questionnaire, and as most think their therapist has done their best and don’t want to appear ungrateful, these demand characteristics can result in a lowered score. It is crucial that the primary outcome measure has real-world meaning to the client, e.g. ‘Are you back to your old self now?’; if yes, ‘For about how long would you say you have been back to your old self?’; and/or, since therapy started, ‘Do you feel the same, a little worse, a little better, much worse, much better?’ But even this may not be thought sufficient to determine the truth of the matter, and the credibility of the positive gain can be enhanced or otherwise by whether the person has lost their diagnostic status according to a reliable standardised diagnostic interview.
    Any publicly funded service/charity has to be independently evaluated, because the agencies have a vested interest in their own promotion. It is not at all that the independent assessor is making a judgement of the client about how well they are; he/she is trying to impartially assess whether the agency’s intervention has made a real-world difference for this client.
    In the UK it is a scandal that the biggest provider, IAPT, has not been subject to publicly funded independent assessment, and my data suggest a total mess. It has got away with its own idiosyncratic outcome measure (a change on a self-report measure), taking credit for any improvement while ignoring regression to the mean and the beneficial effects of mere attention. Agencies like IAPT have had a field day with Nick’s simple and broad measures; they claimed their interventions got people back to work. But there was no specification of the mechanism by which this happened: what was it in their intervention that helped clients persist with a task? Pace themselves? Manage the hassles of the workplace? Without a specified mechanism there can be no effect. Such agencies go on a fishing expedition to claim causation when all that exists is a correlation: people present at their worst, get a bit better with time – enough to return to work – and hey presto, it was the ear-lobe therapy that returned those with mental ill health to productivity!

    Report comment

      • Steve – yes, the client is a viable source, but how exactly you obtain this information from her or him is not so straightforward. In my part of planet mental health, a group of noisy “consumers” insisted they represented all “consumers” and could develop and design an outcome measure. So the Ministry of Silly Walks (aka the Ministry of Health) gave them a few million dollars of taxpayer money and flew them round the country to meet over a two-year period. They then launched their “outcome measure,” which consisted of over 35 items and took the best part of one to two hours to administer; and there were many launch meetings up and down the country (at which they served lovely cream cakes). Needless to say, the measure was never used, as it was impractical – how many people want to take another hour or two filling in a form to say how they did (let alone the validity and reliability tests needed to see if the measure was actually scientific)? However, it did provide some employment for a group of “consumers” for two years. This was 20 years ago.
        Next cab off the rank was some indigenous academics who designed an outcome measure for the local indigenous folk. Two years in the making, various meetings around the country, and a 30-item outcome measure taking an hour to complete was finished. To the best of my knowledge it has never been used – again because it’s impractical. I pissed these academics off a little, as I designed, researched, and launched a brief 4-item measure (that took less than a minute to administer) before they launched theirs.
        But as Michael comments, these types of measures, although of some use, have demand characteristics (who wants to score your therapist badly if you like him or her?) that count against them. But Michael’s solution is diagnoses – and we know (or there is empirical evidence demonstrating) that diagnoses can be harmful – the Dark Side Manual is, I believe, evil.
        What is needed are simple and broad measures of outcome that reflect what would occur naturally in a small community, where word of mouth would put bad practitioners out of business.

        Report comment

        • I’m really talking about simply asking the person to tell you what did and did not work about their experience. Clearly, a third party not associated with the therapist or his/her agency would be ideal. But I’m not advocating for any kind of “outcome measures.” We don’t have a homogeneous group of people to apply this concept to anyway. We just have to give a shit enough to listen to the client, whatever s/he says. I do think that a person who REALLY cares and wants to know will be honest in their response, but there’s no guarantee. It’s more of an integrity thing to me. There is no mathematical way to evaluate the success of any “mental health” intervention.

          Report comment

          • Yes, but the bureaucrats want this objectified – that is, insurance companies in countries with health insurance, or government departments in countries where taxes pay for health care. Some kind of objective measure is needed so results can be compared. The status quo exists because psychiatry doesn’t have an agreed-upon measure of outcomes – which was the main point of Michael’s article. Psychiatry goes on serving up poisons, premature deaths, crippled people, etc., with impunity because there is no measure of outcome. Many don’t care; but those that do are struggling to find a way of comparing results.

            Report comment

          • You mean some kind of quasi-objective outcome measure.

            I do agree that the status quo is able to CONTINUE because there is no “yardstick” of success that psychiatry has to adhere to. My problem is that any such yardstick is ultimately not really objective, largely for the reasons you enumerated earlier. Cancer can be tested for reduction in tumor size or in certain chemicals present in the bloodstream. Heart disease can show better outcomes through normalized QT intervals or other such electrical measurements, as well as blood flow measurements. If a drug REDUCED blood flow through the coronary arteries, no amount of flim-flam could be used to say that this led to “improved outcomes.”

            But there literally is no such measurement possible for matters of the mind/spirit. It is only when such testing and measurement fails that we should even be considering something a “mental health” problem. It’s not a body thing, and trying to pretend it is, is just misleading. The real truth is, “mental health” intervention should not be considered part of the medical system, and demanding “outcome measures” simply allows the ruse to continue that any such objective measure is even possible. So I have a problem with the idea of “outcome measures” in the sense of “treating” some kind of physiological illness. Hell, they can’t even define WHAT they are “treating,” let alone what a good outcome would be! And of course, they always fall back on “symptom reduction” as their “outcome measure,” even when that may have nothing to do with the goals and purposes of the client. And “symptom reduction” is, of course, the area where the drugs are most likely to look “successful,” and I don’t think the choice of these “measures” is in any way disconnected from that fact.

            Bottom line, the whole idea of “treating” “mental illness” as a disease state is wrongheaded and destructive. I can’t see any way to get around that, and I think we’ll be a lot better off simply stating that fact than trying to come up with quasi-objective measurements to keep the insurance industry happy. Impractical, yes, but at some point, I think we have to stop pretending!

            Report comment

          • Yes Steve – you are right – so-called “mental health” cannot be measured in the same way that physical health can. I agree the whole idea of treating “mental health” as a disease is wrongheaded and destructive. But to have no outcome measurement at all allows the status quo to go on; it allows the widespread harm, largely invisible to most prescribing psychiatrists, to remain invisible. I think a more sociological measure (or measures), rather than a medical-like measure, is the way to go. If you were living in a country where your taxes paid 90% of the health bill, would you be happy with the status quo?

            Report comment

          • Of course not. But most people haven’t the first beginnings of a clue about what happens in the “mental health” system. Most still believe in the TV “psychiatrists” who are wise and compassionate and good listeners who do therapy. They see the MH system as protecting them from dangerous “schizophrenics” and dispensing magical pills, and believe the only reason a “mentally ill” person is suffering is because “they’re off their meds.” I’d love to see psychiatrists held accountable to some sort of client-based outcome measure, but they’ll fight anything sane, and if it did happen, they’d find a way to doctor the results (sorry for the pun!).

            Sorry for being so cynical, but it seems we have an entire profession run on false assumptions and secret ill intent, and it’s hard to see any measures to create honest accountability being successful even if such a measure could be found and validated.

            Report comment

  11. Ah, a further update: I’ve had a further communication from NHS England – would you believe it, they suggest I contact the Department of Health! But it was the Department who first said that NHS England would be better placed to answer my questions. I have therefore asked them to make a considered response to my questions, but I’m not holding my breath on this.
    But just a point, Nick: the cavalier use of diagnosis in mental health is commonplace and wreaks havoc, but that is totally different from using a gold-standard diagnostic interview to make a diagnosis and help chart direction. I have seen no ill effects from the use of a gold standard. Mental health diagnoses have no biological markers and are therefore judged by their utility or lack of it.

    Report comment

    • I don’t think it is only the cavalier use of diagnosis – solution-focused therapists demonstrate daily that you don’t need to diagnose to have successful outcomes. Look at that study that was done in Australia with ADHD – those receiving a diagnosis fared poorly compared to those who didn’t, with the same symptoms in both cases. That study is not alone. Diagnoses are ornamentations to good therapy – you can do therapy without them. A study of Wittgenstein’s philosophy would first address what the necessary ingredients are before doing the science of therapy. I suggest that the only reason you want them, Michael, is for your outcome measures – and that is a case of the tail wagging the dog.

      Report comment

  12. Without using a standardised diagnostic interview I would not have been able to demonstrate that the UK IAPT service was failing clients; the solution-focussed colleagues with whom I was working at the time, and with whom I had a great relationship, could not demonstrate this. The utility of an intervention is gauged by the proportion of clients who can report that they are back to their old selves with the treatment and remain so.

    Report comment

    • Well, that’s not a bad standard if you want to set one. I get that you’re using this to battle the status quo. I’m just saying that the fundamental of the status quo that needs to be changed is not going to change by better outcome measures. I do care about the people who have to put up with the system as it is, and I used to be the clinician “behind the lines” and feel like I made a huge difference for some folks I encountered. But I got worn down by the general acceptance of the DSM diagnoses and the abusive treatment of clients that resulted. It’s hard to see what can really alter the deep issues with the current disingenuous “philosophy” of the system.

      Report comment

  13. I am with you, Steve. The problem is that people often pay lip service to diagnosis, creating chaos, instead of only using it in mental health with a standardised diagnostic interview. The consequences of this are shown in the November issue of the British Journal of Clinical Psychology, where clinicians have used a rule of thumb – ‘can’t be PTSD because it’s a child in care, must be a developmental problem’ – rather than engage in a standardised diagnostic interview. Without a standardised diagnostic interview, heuristics (people’s idiosyncratic mental shortcuts) abound, and there is no agreed treatment for a fuzzy label such as ‘developmental problem’. Diagnosis is like nuclear energy: used properly it is a great contribution to addressing climate change; used badly the results are unthinkable. The problem is not with nuclear energy or diagnosis per se.

    Report comment

    • But the real problem is not having an objective way to “diagnose.” It is human nature to allow “drift” in any non-objective line that is drawn, and that drift will always tend toward what makes the powerful more comfortable and the less powerful even less powerful than before. When it was first proposed, “ADHD” (or MBD, as it was called at the time) was supposed to affect a tiny proportion of the population: kids who’d had some kind of trauma or disease that made them unusual. By 1980, suddenly 3-5% of the population “had it,” and they were 90% boys. Not long after, it was “discovered” that girls got it too, that kids don’t grow out of it, and that adults also had “ADHD.” The requirement that the child show signs of “it” before age 6 was removed, and now we’re seeing rates of 10-15%, and in some places 20% of boys in a particular school or area are “diagnosed” with this “disorder.”

      I don’t see any way to stop this slide without hammering away at the utter spuriousness and subjectivity of these “diagnostic criteria.” I really don’t see “better diagnosis” as a way out of the mess. You can set up the best “standard diagnostic interview” in the world, but people will use it in ways that fit their worldview, and the industry will continue to chip away at these soft boundaries until more and more and more of the population will be seen as “mentally ill.” Because that is the INTENT of the system. At best, a standard interview format can slow the drift, but it will not stop it, because the force is pushing in that direction, and there is no objective standard one can point to and say, “No, you’ve really gone too far on this one.” The “diagnosis” is, in the end, a matter of opinion, and that opinion quickly deteriorates into “whatever the clinician wants to see.” That’s the problem I see as unsolved by the diagnostic interview. Not that I want to minimize the thought and good work that’s gone into this for you. I’m sure if we had honest clinicians and an industry truly focused on helping people, such an interview could be conducted in the knowledge that these “diagnoses” are indeed only ways of describing situations and don’t actually tell us what (if anything) is wrong or how to “treat” the patient. Of course, that begs the question, what use is a diagnosis that doesn’t tell you what is wrong or how you should treat the patient? But that’s for another post.

      Report comment

  14. Asking a person whether they are back to their old self is no different from asking them if they are in pain; both are subjective but no less valid. The notion of science you are using is somewhat antiquated, essentially Newtonian – e.g. it explains the speed of a billiard ball when hit by another, while neglecting atomic physics, which is based on uncertainties.

    Report comment