Can You Tell If a Mental Health Message Was Written by AI? Most People Can’t

A new study finds AI can convincingly mimic peer support, raising difficult questions about authenticity, trust, and what we lose when the language of care is generated by machines.

Artificial intelligence has reached a point where it can generate mental health support messages so human-sounding that even trained professionals struggle to tell them apart from the real thing. That’s the key finding from a new study led by researchers at Dartmouth College, which tested whether AI could replicate the tone and empathy of peer support, a model of care built on shared lived experience.

Participants in the study, including peer support specialists and artificial intelligence experts, were asked to judge whether short supportive messages were written by a person or generated by GPT-4, a large language model developed by OpenAI. Most failed to guess correctly. In fact, many performed worse than chance.

The findings may have implications for the use of AI in mental health care, particularly in the rapidly expanding domain of peer support. While some experts see potential for using AI to aid in training or increase access to support, the authors of the study urge caution. Rather than simply celebrating AI’s ability to produce emotionally resonant language, they highlight the risks of such imitation.

“AI can now mimic supportive, human-like communication in ways we haven’t seen before,” Dr. Karen Fortuna, co-lead author of the study and a researcher at Dartmouth’s Geisel School of Medicine, wrote in a public message. “While this opens the door to new digital mental health tools, it also raises serious questions about authenticity, trust, and the preservation of lived experience in peer support.”

The concern is echoed throughout the study. Peer support, the authors argue, is grounded not just in well-crafted sentences but in presence, mutuality, and shared vulnerability, qualities that synthetic text cannot replicate. As AI continues to enter mental health spaces in the name of efficiency and scalability, the findings raise a critical question: When does simulation become substitution? And what happens when the language of care is separated from the human conditions that give it meaning?

4 COMMENTS

  1. Forget peer support. Research like this cleverly evades the REAL issue most likely troubling the psych professionals: the dismantling of their cherished belief in a power imbalance and an inevitably shrinking patient load—which really makes this a non-issue. In their panic, they’re probably turning to AI like everyone else.

    Hope articles like this have them shitting in their pants.
