Stanford Researchers: AI Therapy Chatbots Encourage Delusions, Suicide, Stigma

“LLMs make dangerous statements, going against medical ethics to ‘do no harm,’ and there have already been deaths from use of commercially-available bots,” the researchers write.

The future is here: People are turning to AI chatbots based on large language models (LLMs) instead of human therapists for help when they feel emotional distress. Numerous startups are already providing this service, from Therabot to WoeBot to character.ai to 7 Cups’ Noni.

Screenshot of 7 Cups' ad for Noni, depicting someone chatting with their "AI counsellor"

A recent study claims to show that Therabot can effectively treat mental health concerns. However, in that study, human clinicians were constantly monitoring the chatbot and intervening when it went off the rails. Moreover, the media coverage of that study was misleading, as in an NPR story that claimed the bot can “deliver mental health therapy with as much efficacy as — or more than — human clinicians.” (The control group was a waitlist receiving no therapy, so the bot was not actually compared to human clinicians.)

Proponents of AI therapists (and there are many, especially on Reddit) argue that a computer program is cheaper, smarter, unbiased, non-stigmatizing, and doesn’t require you to be vulnerable.

But is any of that actually true? Researchers have shown that AI replicates the biases, intentional or not, of the datasets it is trained on, and its implementation is fraught with peril, especially in medical care.

For instance, one recent study found that when asked for medical advice, chatbots discriminate against people who are Black, unhoused, and/or LGBT, suggesting that they need urgent mental health care even when they present with something as routine as abdominal pain. An eating disorder hotline that fired all its workers and replaced them with chatbots had to shut down almost immediately after the AI began recommending dangerous behaviors to desperate callers. And character.ai is being sued after a 14-year-old boy died by suicide, encouraged by texts with an “abusive and sexual” chatbot.

A prominent feature in Rolling Stone showcased the way chatbots feed delusional thinking, turning regular Joes into AI-fueled spiritual gurus who wreck their lives because they believe they are prophets of a new machine god.

Now, a new study out of Stanford University’s Institute for Human-Centered AI demonstrates that chatbots discriminate against those with psychiatric diagnoses, encourage delusional thinking, and enable users with plans for suicide.

“Commercially-available therapy bots currently provide therapeutic advice to millions of people, despite their association with suicides. We find that these chatbots respond inappropriately to various mental health conditions, encouraging delusions and failing to recognize crises. The LLMs that power them fare poorly, and additionally show stigma. These issues fly in the face of best clinical practice,” the researchers write.

The study was led by Stanford researcher Jared Moore and was published, ahead of peer review, on the preprint server arXiv.

Peter Simons
Peter Simons was an academic researcher in psychology. Now, as a science writer, he tries to provide the layperson with a view into the sometimes inscrutable world of psychiatric research. As an editor for blogs and personal stories at Mad in America, he prizes the accounts of those with lived experience of the psychiatric system and shares alternatives to the biomedical model.

1 COMMENT

  1. These are just standard models being used as guides. They are being refined all the time, and new models more specifically tuned to life guidance and therapy are on the way. The therapy industry is also completely oversold in terms of its actual evidence base, and there are countless human therapists causing a great deal of harm while believing they are helping.
