Mental Health Professionals and AI Tools Fail to Predict Suicide

Attempts to predict suicide risk using clinical judgment or machine learning algorithms are not useful in clinical practice.

A new study compared methods of suicide risk prediction, including clinical judgment and machine learning algorithms, to see which was best. Unfortunately, none of the methods were clinically useful, and the newer algorithms were not statistically better than clinical judgment. The authors write:

“Until such time as the use of any suicide prediction model has been shown to reliably reduce suicide, our clinical advice is to focus on understanding and caring for the person in front of us, including by treating any modifiable risk factors, irrespective of estimations of any overall suicide risk category.”

The study was led by Michelle Corke at the University of New South Wales, Australia, and Matthew Large at the University of Notre Dame, Australia. It was published in BJPsych Open. The authors conducted a meta-analysis of 86 studies, including 102 predictive models, to determine if any type of model was better than any other for predicting suicide.

Although machine learning had a nominally higher predictive value and clinical judgment nominally lower, the difference was not statistically significant—meaning that it could just as easily have been due to chance or statistical error. That is, statistically, no type of suicide prediction was better than any other.

But how well did they do overall? After all, if every method were good at predicting suicide, it would hardly matter that none was better than the rest.

Unfortunately, none of the models performed well. Overall, the sensitivity was 44%, meaning that the predictions missed more than half of the people who went on to die by suicide. The specificity was 84%, meaning that 84 out of every 100 people who did not die by suicide were correctly classified as low risk.

Finally, and perhaps most importantly, the positive predictive value was 2.8%, meaning that only 2.8% of the people who screened positive as high risk went on to die by suicide. More than 97% of the people who screened positive for suicide risk did not.
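These numbers illustrate the familiar base-rate problem: because suicide is rare, even a screen with 44% sensitivity and 84% specificity will flag far more false positives than true positives. The Python sketch below works through that arithmetic with Bayes' rule; the 1% base rate is a hypothetical figure chosen only for illustration and is not a value reported in the study.

```python
# A minimal sketch (not from the study): how a low base rate drags down
# the positive predictive value even when sensitivity and specificity
# look reasonable.

def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """P(outcome | screened positive), via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Pooled sensitivity/specificity reported above; the 1% base rate is a
# hypothetical value chosen only for illustration.
ppv = positive_predictive_value(sensitivity=0.44, specificity=0.84, base_rate=0.01)
print(f"PPV = {ppv:.1%}")  # roughly 2.7%: most positive screens are false positives
```

With those assumptions the PPV comes out at roughly 2.7%, close to the pooled 2.8% reported above, which is why a positive screen says so little on its own.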

Additionally, the authors found that including more risk factors did not improve predictive accuracy for clinical judgment, more objective measures, or computerized algorithms. More complex information did appear to help the machine learning algorithms, but even then they were no better than the other approaches once statistical uncertainty was taken into account.

This is consistent with previous research. A 2017 study by Matthew Large and other researchers concluded, “Suicide risk models are not a suitable basis for clinical decisions in inpatient settings.”

In the current study, the authors suggest that an algorithm might eventually be able to predict suicide risk, although the current research cannot support this.

However, they add: “It might be that suicide is fundamentally unpredictable.”

 


Corke, M., Mullin, K., Angel-Scott, H., Xia, S., & Large, M. (2021). Meta-analysis of the strength of exploratory suicide prediction models; from clinicians to computers. BJPsych Open, 7(1), e26. https://doi.org/10.1192/bjo.2020.162

4 COMMENTS

  1. Gosh, I’m SO surprised!

    But will this lead to a reduction in the use of “suicide screening tools” or the substitution of “clinical judgment” for actual communication with the client in front of the “clinician”? You bet it won’t!

  2. What psychiatry does not want the public to find out is that people are much more likely to exit just to escape them.

    And regular medicine, the whole culture really, rarely has safe spaces. It is a disjointed, fragmented, dishonest sea of language and prejudice.

    It is of no matter to the “professionals” if people leave due to their own hardships and the callousness of the offices they entered.

    They could make a difference IF they truly desired to be interested in people. It takes a village. But instead they became a machine.

    There is not a single person in the world who won’t reach a point of wanting out. Perhaps when you are a hundred, or perhaps when whatever disease you have becomes too much.

    But yes, definitely the majority will think it best to be done, at some point. It is not uncommon for doctors to die at home and not in hospital.

    AI is only as smart as its creators, which of course means it was fed the wrong information from inception.

  3. A friend recently described 3 of her friends who ‘passed themselves away’. I found this interesting and useful. Long ago, I stopped using the phrases ‘suicidal ideation’ or ‘committed suicide’, and now I use the word ‘suicide’ only when absolutely necessary or typically convenient for public consumption. I used ‘self-murder’ with myself, but I don’t need to use that anymore, thankfully.

    Can you imagine someone listening to someone struggling with this kind of thing, and they put a check mark by ‘suicidal ideation’? Or after the person has passed, putting a check mark by ‘committed suicide’? Such a profound and personal thing being so quickly and easily turned into a notation or a statistic.

    I have learned that if you are asked ‘do you have thoughts of hurting yourself or others?’, the reaction and help usually won’t make much of a difference either way (although the conventional ones will think so). But then when they ask ‘and do you have any plans?’ (which is really what they are after), that’s when something will happen: not help or love or amelioration of pain. But rather legal mandate and compulsion.
