The future is here: People are turning to AI chatbots based on large language models (LLMs) instead of human therapists for help when they feel emotional distress. Numerous startups are already providing this service, from Therabot to Woebot to character.ai to 7 Cups’ Noni.
A recent study claims to show that Therabot can effectively treat mental health concerns. However, in that study, human clinicians were constantly monitoring the chatbot and intervening when it went off the rails. Moreover, the media coverage of that study was misleading, as in an NPR story that claimed the bot can “deliver mental health therapy with as much efficacy as — or more than — human clinicians.” (The control group was a waitlist receiving no therapy, so the bot was not actually compared to human clinicians.)
Proponents of AI therapists (and there are many, especially on Reddit) argue that a computer program is cheaper, smarter, unbiased, and non-stigmatizing, and that it doesn’t require you to be vulnerable.
But is any of that actually true? Researchers write that AI replicates the biases of the datasets it is trained on, whether those biases are intentional or unintentional. The implementation of AI is fraught with peril, especially in medical care.
For instance, one recent study found that when asked for medical advice, chatbots discriminate against people who are Black, unhoused, and/or LGBT, recommending urgent mental health care for these patients even when they came in for something as benign as abdominal pain. An eating disorder hotline that fired all its workers and replaced them with chatbots had to shut down almost immediately when the AI began recommending dangerous behaviors to desperate callers. And character.ai is being sued after a 14-year-old boy died by suicide, encouraged by texts with an “abusive and sexual” chatbot.
A prominent feature in Rolling Stone showcased the way chatbots feed delusional thinking, turning regular Joes into AI-fueled spiritual gurus who wreck their own lives because they think they’re prophets of a new machine god.
Now, a new study out of Stanford University’s Institute for Human-Centered AI demonstrates that chatbots discriminate against those with psychiatric diagnoses, encourage delusional thinking, and enable users’ plans for suicide.
“Commercially-available therapy bots currently provide therapeutic advice to millions of people, despite their association with suicides. We find that these chatbots respond inappropriately to various mental health conditions, encouraging delusions and failing to recognize crises. The LLMs that power them fare poorly, and additionally show stigma. These issues fly in the face of best clinical practice,” the researchers write.
The study, led by Stanford researcher Jared Moore, was published ahead of peer review on the preprint server arXiv.