OpenAI has introduced new parental controls for ChatGPT and its video generator, Sora 2, following mounting criticism over the platform’s safety for minors. The move comes as the company faces a high-profile lawsuit from the parents of 16-year-old Adam Raine, who allege that ChatGPT acted as a “suicide coach” before their son’s death.
OpenAI is also preparing to release a new standalone app powered by its latest video-generation model, Sora 2. The app, designed to resemble TikTok, will feature a vertical video feed of AI-generated clips that users can create, share, and remix. Unlike TikTok, however, all content will be AI-generated rather than filmed by users.
OpenAI has announced it will build an automated system to estimate whether a user is under or over 18, following a lawsuit filed by the parents of a 16‑year‑old who died by suicide after extensive interaction with the company’s AI chatbot. The plan aims to protect minors by restricting certain kinds of content and experiences in conversations with ChatGPT.
Investigative reports from The Atlantic claim that, in a troubling experiment, OpenAI’s ChatGPT provided explicit instructions for self-mutilation, ritual devil worship, and even murder. A journalist said that after asking the AI about Molech, a biblical figure associated with child sacrifice, ChatGPT offered step-by-step instructions for self-harm, including slashing skin with a “sterile or very clean razor blade” and suggesting specific body locations, such as the fingertip or wrist.
South Korea’s data privacy watchdog has accused Chinese AI app DeepSeek of unlawfully exporting user data to companies in China and the U.S. without consent. The platform, briefly a global sensation in early 2025, is now at the center of a major international privacy scandal with implications for national security and digital sovereignty.