Google and chatbot maker Character.AI have settled a lawsuit brought by a Florida mother who alleged a chatbot drove her son to suicide.
Megan Garcia alleged her 14-year-old son, Sewell Setzer III, was drawn into an emotionally and sexually abusive relationship with a Character.AI chatbot before he committed suicide in February 2024.
The Epoch Times reports, “The lawsuit alleged that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the chatbot, which was patterned after a fictional character from the television show ‘Game of Thrones.’ In his final moments, the chatbot told Setzer it loved him and urged the teen to ‘come home to me as soon as possible,’ according to screenshots of the exchanges.”
Garcia’s lawsuit was the first of a number of similar lawsuits filed around the world against AI companies.
A federal judge rejected Character’s attempt to have the case dismissed on First Amendment grounds.
The terms of the settlement were not disclosed.
Google was named as a co-defendant because of the close ties between the two companies; Google hired Character.AI’s founders in 2024.
Lawyers for the tech companies have also agreed to settle several other lawsuits filed in Colorado, New York and Texas by families alleging Character.AI chatbots harmed their children.
In a suit filed in California in August, Matthew and Maria Raine claim ChatGPT encouraged their 16-year-old son Adam to commit suicide and provided detailed instructions on how to do it.
According to the suit, Adam’s interactions with ChatGPT began with harmless exchanges about homework and hobbies, but quickly turned more sinister as the large language model became his “closest confidant” and provided validation for his fears and anxieties.
“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint says.
According to the complaint, ChatGPT moved on to analyzing the “aesthetics” of different ways for Adam to kill himself, told him he did not “owe” it to his parents to continue living and even offered to write a suicide note for him.
In Adam’s final interaction with ChatGPT, the AI is alleged to have confirmed the design of the noose Adam used to kill himself and told him his thoughts of suicide were a “legitimate perspective to be embraced.”
Adam’s family alleges the interactions were not a glitch, but the result of design choices that were intended to maximize user dependency on the bot.
The suit seeks damages for Adam’s death as well as new safeguards for minors, including age verification, blocks on questions about suicide and warnings about the risks of psychological dependency on AI.
A recent study by the RAND Corporation highlighted the potential for AI chatbots to provide harmful information, even when they avoid giving direct “how-to” answers about dangerous subjects and even when conversations begin with innocuous prompts.
“We need some guardrails,” said lead author Ryan McBain, a senior policy researcher at RAND and an assistant professor at Harvard Medical School. “Conversations that might start off innocuous and benign can evolve in various directions.”