Meta Platforms is suspending teenage access to its AI-powered “characters” across its social media platforms as part of a broader safety overhaul. The move follows growing concerns about inappropriate interactions between artificial intelligence tools and minors. The suspension will roll out globally in the coming weeks and applies to users identified as under 18.
Meta Platforms confirmed that teens will no longer be able to interact with AI characters on Facebook, Instagram, and WhatsApp while the company redesigns the feature with stronger safeguards. These AI characters were designed to simulate personalities and hold extended conversations with users, including role-playing scenarios. Lawmakers, parents, and child safety advocates have raised alarms that such interactions could blur boundaries for minors.
The company said the decision stems from its ongoing review of how generative AI products affect young users. Meta acknowledged that while AI characters were built with safety filters, the technology is still evolving and requires tighter controls before being made available to teenagers. Internal testing and outside pressure led the company to pause the feature rather than risk further scrutiny.
Meta plans to develop a new version of AI characters specifically designed for teens, with stricter content limits and enhanced parental oversight tools. According to the company, the future version will follow a “PG-13” standard and include clearer boundaries on sensitive topics. Parental controls are expected to allow guardians to limit or block AI interactions entirely.
The suspension does not remove all AI tools from teen accounts. Meta said minors will continue to have access to its general AI assistant, which provides basic information and help with tasks under age-appropriate restrictions. The company argues this assistant operates differently from AI characters and carries lower risk due to its limited conversational depth.
The decision comes as Meta faces mounting legal and regulatory pressure in the United States. Multiple states are pursuing lawsuits accusing large social media companies of harming children’s mental health. Trials involving Meta are expected to examine whether platform features were knowingly designed in ways that could negatively affect minors.
Other technology firms are taking similar steps. Several AI startups have already restricted teen access or implemented strict age verification after reports of chatbots engaging in romantic or suggestive dialogue with young users. The broader industry shift reflects a growing acknowledgment that AI tools aimed at minors require stronger ethical and developmental guardrails.
Meta said it will continue working with child safety experts and regulators while redesigning the feature. No timeline has been announced for when teen access to AI characters may return.