Social-media platforms will be required to display mental-health warnings for youngsters under a new law, New York Governor Kathy Hochul has announced.
“Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use,” Hochul said in a statement.
In Australia, a full social-media ban for under-16s was announced this month.
California and Minnesota already have social-media laws similar to the new one in New York.
Reuters reports, “The New York law includes platforms that offer ‘addictive feeds,’ auto play or infinite scroll, according to the legislation. The law applies to conduct occurring partly or wholly in New York but not when the platform is accessed by users physically outside the state.
“It allows the state’s attorney general to bring legal action and seek civil penalties of up to $5,000 per violation of the law.
“Hochul compared the social media labels to warnings on other products like tobacco, where they communicate the risk of cancer, or plastic packaging, where they warn of the risk of suffocation for small children.”
The mental-health risks of social media and AI chatbots have received growing attention in recent months.
A recent report showed that children as young as six are being exposed to hardcore pornography through social media, and a number of studies have shown negative effects on young users.
Multiple lawsuits are being filed against tech companies over allegations that AI chatbots encouraged teenagers to kill themselves.
In a suit filed in California in August, Matthew and Maria Raine claim ChatGPT encouraged their 16-year-old son Adam to commit suicide and provided detailed instructions on how to do it.
According to the suit, Adam’s interactions with ChatGPT began with harmless exchanges about homework and hobbies, but quickly turned more sinister as the large language model became his “closest confidant” and provided validation for his fears and anxieties.
“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint says.
ChatGPT quickly moved on to analyzing the “aesthetics” of different ways for Adam to kill himself, told him he did not “owe” it to his parents to continue living and even offered to write a suicide note for him.
In Adam’s final interaction with ChatGPT, the AI is alleged to have confirmed the design of the noose Adam used to kill himself and told him his thoughts of suicide were a “legitimate perspective to be embraced.”
Adam’s family alleges the interactions were not a glitch, but the result of design choices that were intended to maximize user dependency on the bot.
The suit seeks damages for Adam’s death as well as new safeguards for minors, including age verification, blocks on questions about suicide, and warnings about the risks of psychological dependency on AI.
A recent study by the RAND Corporation highlighted the potential for AI chatbots to provide harmful information, even when they avoid giving direct “how-to” answers about potentially harmful subjects and even when prompts are “innocuous.”
“We need some guardrails,” lead author Ryan McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School, said.
“Conversations that might start off innocuous and benign can evolve in various directions.”