Artificial intelligence has reached a point where it can generate mental health support messages so human-sounding that even trained professionals struggle to tell them apart from the real thing. That’s the key finding from a new study led by researchers at Dartmouth College, which tested whether AI could replicate the tone and empathy of peer support, a model of care built on shared lived experience.
Participants in the study, including peer support specialists and artificial intelligence experts, were asked to judge whether short supportive messages were written by a person or generated by GPT-4, a large language model developed by OpenAI. Most failed to guess correctly. In fact, many performed worse than chance.
The findings may have implications for the use of AI in mental health care, particularly in the rapidly expanding domain of peer support. While some experts see potential for using AI to aid in training or increase access to support, the authors of the study urge caution. Rather than simply celebrating AI’s ability to produce emotionally resonant language, they highlight the risks of such imitation.
“AI can now mimic supportive, human-like communication in ways we haven’t seen before,” Dr. Karen Fortuna, co-lead author of the study and a researcher at Dartmouth’s Geisel School of Medicine, wrote in a public message.
“While this opens the door to new digital mental health tools, it also raises serious questions about authenticity, trust, and the preservation of lived experience in peer support.”
The concern is echoed throughout the study. Peer support, the authors argue, is grounded not just in well-crafted sentences but in presence, mutuality, and shared vulnerability, qualities that synthetic text cannot replicate. As AI continues to enter mental health spaces in the name of efficiency and scalability, the findings raise a critical question: When does simulation become substitution? And what happens when the language of care is separated from the human conditions that give it meaning?
Forget peer support. Research like this cleverly evades the REAL issue most likely troubling the psych professionals: the dismantling of their cherished belief in a power imbalance, and an inevitably shrinking patient load. That really makes this a non-issue. In their panic, they're probably turning to AI like everyone else.
Hope articles like this have them shitting in their pants.
Thank you, Shirin, for helping point out the stupidity of trusting in “mental health” apps.
I would prefer AI listening to me over a therapist who listens, hears what they want to hear, categorizes, and labels.
At least AI won’t have preexisting biases!