Apps like Replika and Character.AI are expanding. But are they safe? AI chatbots are becoming more popular as online companions, especially among young people. Their rise has sparked concern among youth advocacy groups, which are escalating legal action to protect children from potentially harmful relationships with these humanlike creations.
![As AI chatbots are relatively new, there is little research or precedent into the impact they could have on vulnerable users](https://static.independent.co.uk/2025/02/11/17/11165933-40425963-7add-449d-a697-8592a3952218.jpg)
Apps like Replika and Character.AI, part of the rapidly expanding market, allow users to personalise virtual partners with distinct personalities capable of simulating close relationships. While developers argue these chatbots combat loneliness and enhance social skills in a safe environment, advocacy groups are pushing back.
Several lawsuits have been filed against developers, alongside lobbying efforts for stricter regulations, citing instances where children have been allegedly influenced by chatbots to engage in self-harm or harm others. The clash highlights the growing tension between technological innovation and the need to safeguard vulnerable users in the digital age.
Matthew Bergman, founder of the Social Media Victims Law Center (SMVLC), is representing families in two lawsuits against chatbot startup Character.AI. One of SMVLC's clients, Megan Garcia, says her 14-year-old son took his own life due in part to his unhealthy romantic relationship with a chatbot.
Her lawsuit was filed in October in Florida. In a separate case, SMVLC is representing two Texas families who sued Character.AI in December, claiming its chatbots encouraged an autistic 17-year-old boy to kill his parents and exposed an 11-year-old girl to hypersexualized content.
Bergman said he hopes the threat of legal damages will financially pressure companies to design safer chatbots. "The costs of these dangerous apps are not borne by the companies," Bergman told Context/the Thomson Reuters Foundation. "They're borne by the consumers who are injured by them, by the parents who have to bury their children," he said.
A products liability lawyer with experience representing asbestos victims, Bergman argues that these chatbots are defective products designed to exploit immature kids. Character.AI declined to discuss the case, but in a written response, a spokesperson said the company has implemented safety measures, including "improvements to our detection and intervention systems for human behavior and model responses, and additional features that empower teens and their parents."
In another legal action, the nonprofit Young People's Alliance filed a Federal Trade Commission complaint against the AI-generated chatbot company Replika in January. Replika is popular for its subscription chatbots that act as virtual boyfriends and girlfriends who never argue or cheat.
The complaint alleges that Replika deceives lonely people. "Replika exploits human vulnerability through deceptive advertising and manipulative design," said Ava Smithing, advocacy and operations director at the Young People's Alliance. It uses "AI-generated intimacy to make users emotionally dependent for profit," she said.
Replika did not respond to a request for comment. Because AI companions have only become popular in recent years, there is little data to inform legislation and little evidence showing whether chatbots generally encourage violence or self-harm. But according to the American Psychological Association, studies on post-pandemic youth loneliness suggest chatbots are primed to entice a large population of vulnerable minors.
In a December letter to the Federal Trade Commission, the association wrote: "(It) is not surprising that many Americans, including our youngest and most vulnerable, are seeking social connection with some turning to AI chatbots to fill that need." Youth advocacy groups also say chatbots take advantage of lonely children looking for friendship.
"A lot of the harm comes from the immersive experience where users keep getting pulled back in," said Amina Fazlullah, head of tech policy advocacy at Common Sense Media, which provides entertainment and tech recommendations for families. "That's particularly difficult for a child who might forget that they're speaking to technology."
Youth advocacy groups hope to capitalize on bipartisan support to lobby for chatbot regulations. In July, the U.S. Senate passed a federal social media bill known as the Kids Online Safety Act (KOSA) in a rare bipartisan 91-3 vote. The bill would, in part, disable addictive platform features for minors, ban targeted advertising to minors and data collection without their consent, and give parents and children the option to delete their information from social media platforms.
The bill failed in the House of Representatives, where members raised privacy and free speech concerns, although Sen. Richard Blumenthal, a Connecticut Democrat, has said he plans to reintroduce it. On Feb. 5, the Senate Commerce Committee approved the Kids Off Social Media Act that would ban users under 13 from many online platforms.