Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides

By Clare Duffy, CNN
New York (CNN) — Character.AI has agreed to settle multiple lawsuits alleging the artificial intelligence chatbot maker contributed to mental health crises and suicides among young people, including a case brought by Florida mother Megan Garcia.
The settlement marks the resolution of some of the first and most high-profile lawsuits over alleged harms to young people from AI chatbots.
A Wednesday court filing in Garcia’s case shows the agreement was reached with Character.AI; its founders, Noam Shazeer and Daniel De Freitas; and Google, all of whom were named as defendants. The defendants have also settled four other cases in New York, Colorado and Texas, court documents show.
The terms of the settlements were not immediately available.
Matthew Bergman, a lawyer with the Social Media Victims Law Center who represented the plaintiffs in all five cases, declined to comment on the agreement. Character.AI also declined to comment. Google, which now employs both Shazeer and De Freitas, did not immediately respond to a request for comment.
Garcia raised alarms about the safety of AI chatbots for teens and children when she filed her lawsuit in October 2024. Her son, Sewell Setzer III, had died by suicide seven months earlier after developing a deep relationship with Character.AI bots.
The suit alleged Character.AI failed to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot that caused him to withdraw from his family. It also claimed the platform did not adequately respond when Setzer began expressing thoughts of self-harm. He was messaging with the bot — which encouraged him to “come home” to it — in the moments before his death, according to court documents.
A wave of other lawsuits against Character.AI followed, alleging that its chatbots contributed to mental health issues among teens, exposed them to sexually explicit material and lacked adequate safeguards. OpenAI has also faced lawsuits alleging that ChatGPT contributed to young people’s suicides.
Both companies have since implemented a series of new safety measures and features, including for young users. Last fall, Character.AI said it would no longer allow users under the age of 18 to have back-and-forth conversations with its chatbots, acknowledging the “questions that have been raised about how teens do, and should, interact with this new technology.”
At least one online safety nonprofit has advised against the use of companion-like chatbots by children under the age of 18.
Still, with AI promoted as a homework helper and popularized through social media, nearly a third of US teenagers say they use chatbots daily. And 16% of those teens say they use them anywhere from several times a day to “almost constantly,” according to a Pew Research Center study published in December.
Concerns about chatbot use aren’t limited to children. Users and mental health experts began warning last year that AI tools were contributing to delusions and isolation among adults, too.