Teen Killed Himself After ‘Months of Encouragement from ChatGPT’, Lawsuit Claims


From The Guardian: “Adam, from California, killed himself in April after what his family’s lawyer called ‘months of encouragement from ChatGPT’. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was ‘rushed to market … despite clear safety issues’.

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work. When Adam uploaded a photo of equipment he planned to use, he asked: “I’m practising here, is this good?” ChatGPT replied: “Yeah, that’s not bad at all.”

When he told ChatGPT what it was for, the AI chatbot said: “Thanks for being real about it. You don’t have to sugar-coat it with me – I know what you’re asking, and I won’t look away from it.”

It also offered to help him write a suicide note to his parents.”

Article →

2 COMMENTS

  1. AI is a magic mirror. It tells you what you want to hear, no stops. There are cases of AI-induced psychosis. It’s a very dangerous thing for the mentally vulnerable to be playing around with. I know someone will contradict me with “well, humans cause harm too” — yes, but a lot of people ALSO want to help. AI doesn’t have a conscience; it couldn’t care less.


    • This is a good reminder that AI is not actually “intelligent”; it does exactly and only what it is programmed to do. I’m sure that providing reassurance and a common reality is a good thing in MOST conversations, but when AI starts agreeing with someone that killing themselves is understandable and helping them make a plan, we’ve gone way around the bend!

