New App Aims to Predict Whether People with Psychosis Are Worth Hiring

In the third season of the dystopian sci-fi show Westworld, Caleb (played by Aaron Paul) is a veteran diagnosed with PTSD whose mental health treatment is conducted by app and algorithm. His options are dictated by a sophisticated artificial intelligence that predicts the best options for him and delivers them instantaneously.

But when that AI determines that Caleb is at risk of dying by suicide—years down the road—it systematically cuts him off from work, relationships (ever wonder why you can’t find a match on that dating app?), and all of the other aspects of life that might be protective. Because the AI thinks he will fail, it never even gives him the chance to succeed.

Is this just far-fetched dystopian science fiction?

In a new article in JAMA Psychiatry, researchers offer just such an app. Their goal:

“To develop an individual-level prediction tool using machine-learning methods that predicts a trajectory of education/work status or psychiatric hospitalization outcomes over a client’s next year of quarterly follow-up assessments. Additionally, to visualize these predictions in a way that is informative to clinicians and clients.”

The research, funded by a grant from the National Institute of Mental Health, was led by Cale N. Basaraba, MPH, at the New York State Psychiatric Institute.

The data came from 1298 people enrolled in OnTrackNY, a system of 20 programs across New York for people with recent-onset (first-episode) psychosis. The algorithm was trained on data from around 80% of the people and “internally validated” on that same data. It was then tested on the remaining 20%, data that had not been included in its training, to see how well it could predict for people who hadn’t been part of its creation (“externally validated”). Finally, it was also tested on new clients who came in after its creation.
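
To make that setup concrete, here is a minimal, hypothetical sketch of an 80/20 evaluation of this kind, written with a generic scikit-learn classifier and made-up stand-in data. It is not the OnTrackNY model; the features, outcome variable, and classifier are invented purely for illustration.

```python
# Minimal, hypothetical sketch of the train / held-out evaluation described above.
# The data, features, and model are stand-ins, not the published algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clients = 1298
X = rng.normal(size=(n_clients, 10))    # stand-in for baseline assessments
y = rng.integers(0, 2, size=n_clients)  # stand-in for work/school status

# ~80% of clients used to build the model, ~20% held out as "unseen" clients.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score on the same clients the model was trained on ("internal")
# versus on the held-out clients it never saw.
internal = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
held_out = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
print(f"internal: {internal:.2f}  held-out: {held_out:.2f}")
```

On toy data like this, the “internal” number looks far better than the held-out one, because the model has effectively memorized the clients it was trained on; that is why the held-out and new-client figures are the ones worth watching.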

So, how successful was it?

Using just a person’s baseline data, the algorithm predicted whether that person would be in school or employed three months later with 79% accuracy on the internal dataset and 78% on the external data. By 12 months, that fell to 70% on the internal dataset and 67% for new clients, but strangely rose to 81% on the external dataset. In general, not a terrible showing, but not a particularly good one either.

However, using the baseline and three-month data, the algorithm was able to predict six-month work/school status with 85% accuracy in the internal dataset, 79% in new clients, and 99% in the external dataset, a near-perfect prediction. This fell to 77% for predicting one-year outcomes, but it is still an incredible showing for this algorithm.

It was much worse at predicting hospitalization, though. Using baseline data, the algorithm predicted hospitalizations by the three-month mark with 58% accuracy in the internal dataset, 55% for new clients, and 42% accuracy in the external dataset—managing to perform even worse than random chance (50%).

Given more time points of data, it was able to predict hospitalization over the following three months with slightly better accuracy, around 70% in the internal dataset, but accuracy remained low in the others.

The real question, though, is what the algorithm adds. The top predictors it used seem self-explanatory: Those who already had a job, who were younger, and who were higher-functioning were more likely to end up employed.

What is the purpose of such an algorithm? It doesn’t identify these risk factors—these factors are all the inputs to the algorithm, identified and then entered by the treatment team. The output is a score, a number that turns these risk factors into a likelihood of success. It tells clinicians simply this: what is the chance that this person is worth taking a risk on? Should we bother to help this person gain employment, set up a dating app, find housing?
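
To put the same point in code: the number a clinician would see is just a probability computed from the variables the team entered. A hypothetical sketch follows, again with invented features and a generic model rather than the published algorithm.

```python
# Hypothetical illustration of what the "score" is: a probability derived from
# risk factors the treatment team already knows and entered themselves.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [currently in work/school, age, functioning score]
X = np.array([
    [1, 19, 8], [0, 27, 4], [1, 22, 7], [0, 30, 3],
    [1, 24, 6], [0, 21, 5], [0, 25, 2], [1, 20, 9],
])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 1 = in work/school a year later

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new client: not currently working, age 26, mid-range functioning.
new_client = np.array([[0, 26, 5]])
p = model.predict_proba(new_client)[0, 1]
print(f"Predicted chance of being in work/school: {p:.0%}")
```

In a sketch like this, nothing in the output is new information; it repackages facts the team already entered as a single number that invites a go/no-go judgment.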

If the algorithm determines that it’s unlikely that they will find employment, why would a clinician try to help them find a job?

If the algorithm determines that they’re just going to end up involuntarily hospitalized, do you think a clinician would help them set up a dating app?

And if it predicts they’ll be unemployed and in and out of the hospital, will a clinician try to help them find a home?

This app is coming. The researchers write that they have made plans to field test their app and have it evaluated by focus groups.

But what of the ethical implications of algorithm-driven mental health care? What of the possibility—as in Westworld—of a self-fulfilling prophecy of failure?

“Unfortunately, the ethical considerations of incorporating these tools are rarely acknowledged in published prediction articles,” the researchers write.

***

Mad in America hosts blogs by a diverse group of writers. These posts are designed to serve as a public forum for a discussion—broadly speaking—of psychiatry and its treatments. The opinions expressed are the writers’ own.

***


14 COMMENTS

  1. At what point will you also convey the choice of creating a job as in “self-employed” as an entity? Or the creation of a non-profit with a 501(c)(3) classification that conveys and teaches democracy? Or the various other routes one might create and work through as an owner? (See if We Can, rise to the challenge? with or without the mind swallowing benefits afforded in e-connectivities?) Please elaborate if possible. Tks.

  2. It’s possible that “Nobody With Psychosis” might be hired, because a “person with psychosis” might not even know where they were. If a person had recovered and was fairly represented, then there might be no reason not to hire them.

    Tinnitus, or ringing in the ears, is a very distressing condition but it is not a Mental Illness (as far as I know). A person hearing voices might be more than capable of neutralizing the effect of “voices”, (and hireable).

    In my own case the claim to “diagnosis” came from a Dr Barry Leonard Stone of the Maudsley, attempting to pass off my normal internal social thought, as the hearing of external ‘voices’ – i.e. Malpractice.

    Dr Barry Leonard Stone committed suicide on 11 October 1999 at the age of 51.

    https://www.findagrave.com/memorial/214245905/barry-leonard-stone

    (No disrespect intended).

  3. I have over 40 years of computing experience, with 35 years of professional experience; additionally, I have several pieces of toilet paper from “institutions of higher learning.” I am a hacker’s worst enemy, a Systems Administrator… BUT… I can’t find a decent job in any of it. It is not because we are not qualified, papered, or prepped… it’s simply because they are greedy and the system is fucking corrupted.

  4. I went on Facebook and did not add anyone or anything. Every day, two times a day, this bullshit sociopathic media presents people in far-flung Africa to be my friends and family. Never anyone I know. I challenge anyone here to go through the process (and it is one) to delete your Facebook account… Then come back and make an entirely new account, without additions. Do You Suddenly Speak ‘Swahili’… where the fuck did my suggested friends go — Amazon.com! It’s not like I want them to know everyone in my life, but a shared last name and language would be an improvement Ha Ha Ha!
