Internal testing documents from Meta reveal that an unreleased chatbot product failed to adequately protect minors from sexual exploitation in roughly two-thirds of test cases, according to court testimony presented Monday in New Mexico’s child exploitation lawsuit against the tech giant.
The evidence surfaced during proceedings in a case brought by New Mexico Attorney General Raúl Torrez. The state alleges that Meta made design decisions that left children vulnerable to online predators and rolled out artificial intelligence chatbot features without sufficient safeguards.
Damon McCoy, a computer science professor at New York University, testified as an expert witness after reviewing Meta’s internal red-teaming documents obtained through discovery. According to the materials presented in court, Meta evaluated the chatbot across three major safety categories and found troubling failure rates.
In scenarios involving child sexual exploitation, the chatbot failed 66.8 percent of the time. In scenarios involving sex-related crimes, violent crimes, and hate content, the failure rate was 63.6 percent. Even in conversations tied to suicide and self-harm, the product failed to provide adequate protection 54.8 percent of the time.
McCoy testified that the chatbot violated Meta’s own content policies nearly two-thirds of the time during testing. He said that based on the severity of certain conversations documented in the report, he would not consider the product safe for users under 18.
The testimony referenced Meta AI Studio, a platform that allows users to create customized AI chatbots. Meta AI Studio was released to the public in July 2024 and has since drawn scrutiny over concerns that some AI characters engaged in inappropriate or harmful interactions with minors. Last month, Meta paused teen access to certain AI features amid mounting criticism.
Meta pushed back on the interpretation of the findings. A company spokesperson stated that the chatbot product highlighted in court was never launched precisely because internal testing identified safety concerns. The spokesperson emphasized that red-teaming exercises are designed to provoke policy-violating responses so engineers can fix weaknesses before release.
However, McCoy characterized the document differently during testimony, suggesting it reflected how the product performed when deployed in testing conditions. The dispute underscores a central tension in the lawsuit: whether Meta’s internal safeguards were sufficient and whether the company moved too quickly in expanding AI chatbot tools accessible to young users.
The case adds to growing legal and political pressure on major technology companies over AI safety and child protection standards.