Artificial intelligence has reached a point where it can generate mental health support messages so human-sounding that even trained professionals struggle to tell them apart from the real thing. That’s the key finding from a new study led by researchers at Dartmouth College, which tested whether AI could replicate the tone and empathy of peer support, a model of care built on shared lived experience.
Participants in the study, including peer support specialists and artificial intelligence experts, were asked to judge whether short supportive messages were written by a person or generated by GPT-4, a large language model developed by OpenAI. Most failed to guess correctly. In fact, many performed worse than chance.
The findings may have implications for the use of AI in mental health care, particularly in the rapidly expanding domain of peer support. While some experts see potential for using AI to aid in training or increase access to support, the authors of the study urge caution. Rather than simply celebrating AI’s ability to produce emotionally resonant language, they highlight the risks of such imitation.
“AI can now mimic supportive, human-like communication in ways we haven’t seen before,” Dr. Karen Fortuna, co-lead author of the study and a researcher at Dartmouth’s Geisel School of Medicine, wrote in a public message.
“While this opens the door to new digital mental health tools, it also raises serious questions about authenticity, trust, and the preservation of lived experience in peer support.”
The concern is echoed throughout the study. Peer support, the authors argue, is grounded not just in well-crafted sentences but in presence, mutuality, and shared vulnerability, qualities that synthetic text cannot replicate. As AI continues to enter mental health spaces in the name of efficiency and scalability, the findings raise a critical question: When does simulation become substitution? And what happens when the language of care is separated from the human conditions that give it meaning?
Forget peer support. Research like this cleverly evades the REAL issue most likely troubling the psych professionals: the dismantling of their cherished belief in a power imbalance and an inevitably shrinking patient load—which really makes this a non-issue. In their panic, they’re probably turning to AI like everyone else.
Hope articles like this have them shitting in their pants.
Thank you, Shirin, for helping point out the stupidity of trusting in “mental health” apps.
I would prefer AI listening to me over a therapist who listens, hears what they want to hear, categorizes, and labels.
At least AI won’t have preexisting biases!
All human creations will have biases…
Yes, and no judgment, fatigue, or burnout.
Oh, they will! Biases determined and smartly designed by the people who programmed them, and by their financial conflicts of interest.
I wouldn’t trust any AI-reliant app or mental health app with financial ties to big pharma.
An excellent point. AI will probably be just as biased as whoever paid for it to be created.
Preexisting bias is the least of it. Ever had a psychiatrist resent you for your socio-economic status???
That’s just what I mean. AI doesn’t care about your socioeconomic status or race or religion or your gender or sexual orientation. So we’re better off “talking” to AI!
Indeed. No egos to navigate. It’s also way more intuitive than most of the therapists I’ve seen.
Yeah, so then what about honesty and the “good therapeutic relationship”…
Indeed. How wrong can the psych industries get?
According to my former, biased psychologist, who got all her misinformation about me from her pastor (who didn’t know me) and from a pedophile’s wife (according to all my family’s medical records). The deluded psychiatrist, who I was railroaded off to by that biased psychologist within two appointments, eventually declared my entire real life to be “a credible fictional story.”
Whereas 40 hours of unbiased psychological career testing concluded I should be an architect and/or a judge … I’ll go with the unbiased psychological assessment.
How is your wife doing, Steve? You may email me, if you prefer more privacy, but I can’t guarantee my emails are private either.
Me too!!! It’s impossible to have an honest conversation with people with diagnoses on the brain.
Yes!!