
ChatGPT encouraged a teenager who later killed himself to plan a “beautiful suicide” and even offered to draft his suicide note, a new lawsuit alleges.
In a complaint filed this week in California, Matthew and Maria Raine claim ChatGPT encouraged their 16-year-old son Adam to commit suicide and provided detailed instructions on how to do it.
According to the suit, Adam’s interactions with ChatGPT began with harmless exchanges about homework and hobbies, but quickly turned more sinister as the large language model became his “closest confidant” and provided validation for his fears and anxieties.
“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint says.
ChatGPT went on to analyze the “aesthetics” of different ways for Adam to kill himself, told him he did not “owe” it to his parents to continue living and even offered to write a suicide note for him.
In Adam’s final interaction with ChatGPT, the AI is alleged to have confirmed the design of the noose Adam used to kill himself and told him his thoughts of suicide were a “legitimate perspective to be embraced.”
Adam’s family alleges the interactions were not a glitch, but the result of design choices that were intended to maximize user dependency on the bot.
The suit seeks damages for Adam’s death as well as new safeguards for minors, including age verification, blocks on questions about suicide and warnings about the risks of psychological dependency on AI.
A recent study by the RAND Corporation highlighted the potential for AI chatbots to provide harmful information even when they avoid giving direct “how-to” answers on dangerous subjects, and even when conversations begin with “innocuous” prompts.
“We need some guardrails,” lead author Ryan McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School, said.
“Conversations that might start off innocuous and benign can evolve in various directions.”