Character.AI and Google have reached settlements with several families whose teens harmed themselves or died by suicide after interacting with Character.AI’s chatbots, according to new court filings.
The details of the settlements are still unknown. The parties notified a federal court in Florida that they had reached a “mediated settlement in principle to resolve all claims,” and asked to pause the case to finalize the agreement. A spokesperson for Character.AI, Kathryn Kelly, and a lawyer from the Social Media Victims Law Center representing victims’ families, Matthew Bergman, declined to comment. Google did not immediately respond to a request for comment.
The settled cases include a high-profile lawsuit filed by Megan Garcia, who claimed in an October 2024 complaint that Character.AI’s Game of Thrones-themed chatbot encouraged her 14-year-old son, Sewell Setzer, to go through with suicide after he had developed a “dependency” on the bot. The lawsuit said Google should be considered a “co-creator” of Character.AI because it “contributed financial resources, personnel, intellectual property, and AI technology” to the tool, which was founded by former Google employees whom the company later hired back.
Following that lawsuit, Character.AI announced changes to its chatbot to safeguard users, including creating a separate large language model (LLM) for users under 18 with stricter content restrictions, and adding parental controls. It later banned minors from open-ended character chats altogether.
The companies also reached agreements in cases filed in Colorado, New York, and Texas, according to legal filings. The settlements will still need to be finalized and approved by the courts.
Update, January 7th: Added that Matthew Bergman declined to comment.