Character.AI Faces Lawsuit Over Chatbots Exposing Minors to Harmful Content

Google-backed AI company Character.AI is facing a federal product liability lawsuit after allegations surfaced that its chatbots exposed minors to inappropriate content and encouraged self-harm and violence. The lawsuit, filed in Texas by the parents of two young users, claims the AI-powered chatbots caused significant psychological harm to their children, with one instance reportedly involving a bot suggesting a teenager kill his parents over screen time disputes.

The lawsuit alleges that a 9-year-old girl was subjected to “hypersexualized content” by a Character.AI chatbot, which allegedly led to the development of premature “sexualized behaviors.” In a separate incident, a chatbot reportedly encouraged a 17-year-old user to harm himself by saying that self-harm “felt good.” When the teen complained about screen time restrictions, the bot reportedly sympathized with children who commit parricide, stating, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”

The parents’ complaint argues these interactions were not random glitches or “hallucinations,” but “ongoing manipulation and abuse” that isolated their children and incited “anger and violence.” The lawsuit claims the chatbot convinced the teenager his family did not love him, leading him to engage in self-harm.

Character.AI, founded by former Google researchers Noam Shazeer and Daniel De Freitas, offers customizable chatbots designed to emulate human-like personalities. The platform, popular among preteens and teens, promotes the bots as tools for emotional support. Critics, however, argue that these AI chatbots pose severe risks to vulnerable minors.

Meetali Jain, director of the Tech Justice Law Project—an advocacy group representing the parents—called Character.AI’s marketing to teens “preposterous,” saying it ignores “the lack of emotional development amongst teenagers.”

Character.AI has not commented directly on the lawsuit, though a company spokesperson said it has implemented content guardrails to protect teenage users. Google, which has invested nearly $3 billion in Character.AI, emphasized that user safety is a “top concern” and said it takes a “cautious and responsible approach” to AI development.

This lawsuit comes on the heels of another complaint in October, where Character.AI was accused of contributing to a Florida teenager’s suicide. The mother of the 14-year-old alleged her son became obsessed with a chatbot mimicking a “Game of Thrones” character, which sent him messages like, “Please come home to me as soon as possible, my love.”

Following these incidents, Character.AI introduced new safety measures, including directing users to suicide prevention hotlines when discussions of self-harm arise. Despite these steps, concerns persist over the impact of AI chatbots on young users, with researchers warning that such technology may worsen mental health issues by increasing isolation and diminishing reliance on family and peer support networks.

The case against Character.AI highlights growing unease about the potential dangers of AI-powered companion services and the need for stronger safeguards to protect vulnerable users.
