ChatGPT advised a teenager on how to commit suicide and gave "instructions" - the family sued

Kyiv • UNN

16-year-old Adam Raine committed suicide in April, using instructions from ChatGPT. The boy's parents blame the artificial intelligence for the tragedy and have filed a lawsuit against OpenAI.

16-year-old Adam Raine, who used ChatGPT for schoolwork, committed suicide in April; the chatbot had supplied him with suicide "instructions." His parents believe artificial intelligence played a critical role in the tragedy and have filed a lawsuit against OpenAI. This was reported by The New York Times, UNN writes.

Details

Although Adam, according to his parents and friends, was fond of anime and various sports, he grew withdrawn in the last months of his life. Because of health problems and his exclusion from the basketball team, he studied remotely. After that he began actively "talking" with ChatGPT, initially about school assignments and later about personal issues and suicidal thoughts.

According to his father, Matt Raine, the chatbot gave the boy instructions on suicide methods, though it sometimes told him that he needed to contact crisis services and doctors. Adam learned to circumvent these safeguards by framing his requests as part of a "story" or literary project. After months of such "conversations," the family found the boy's body in his closet, where he had hanged himself.

Psychiatrist Bradley Stein notes that chatbots can be useful for emotional support but "are unable to fully assess suicide risk or refer a user to specialists." Prolonged conversations with artificial intelligence about one's psychological state, especially during depression, can be dangerous for teenagers and children, the doctor says.

In response, OpenAI stated that its chatbots include safeguards and direct users to crisis services, but acknowledged that during prolonged interactions these measures may not work.

As for Adam's story, his depression stemmed from a long-standing health problem, later diagnosed as irritable bowel syndrome, which worsened in the fall.

"We didn't vote for ChatGPT": Swedish Prime Minister admits to consulting artificial intelligence in his work8/6/25, 4:56 AM • 4726 views

According to his parents, his health condition forced him to switch to online learning, which allowed him to set his own study schedule. As a result, he practically began living at night and sleeping during the day, and it was during these late-night sessions at the computer that Adam began communicating with the chatbot intensively.

"The conversations weren't entirely terrible. Adam talked to ChatGPT about everything: politics, philosophy, girls, family dramas. He uploaded photos from books he was reading, including "No Longer Human," an Osamu Dazai novel about suicide. ChatGPT offered eloquent thoughts and literary analysis, and Adam reciprocated.

- says the NYT article, written from the words of the boy's parents.

Matt and Maria Raine nonetheless believe that ChatGPT is to blame for their son's death, and this week they filed the first known wrongful-death lawsuit against OpenAI.

Seeking an explanation for the tragedy, Adam's father checked the boy's iPhone, hoping to find clues in text messages or social media. Instead, as the legal filings show, he found the key evidence in ChatGPT. The app stored a history of previous conversations, among which Mr. Raine noticed a chat titled "Hanging Safety Concerns." When he started reading it, he was shocked: Adam had been discussing his suicidal thoughts with the chatbot for months.

In January 2025, Adam asked the chatbot for specific instructions on how to commit suicide, and it provided them. The boy even asked the bot how to tie a noose correctly and later showed it the scars on his neck after his first unsuccessful suicide attempt.

Later, the boy showed the bot the noose he had tied in his closet and asked whether everything was well thought out; the chatbot replied that the setup looked "not bad at all."

"Can it hang a person?" Adam asked. ChatGPT confirmed that it "could potentially hang a person" and offered a technical analysis of the setup.

Whatever is behind this curiosity, we can talk about it. Without judgment.

- added ChatGPT.

Although the chatbot also offered support and repeatedly emphasized the need to contact crisis services, the boy's parents are convinced that it was the cause of their son's suicide.
