OpenAI to Verify User Ages After ChatGPT‑Related Teen Suicide Lawsuit

OpenAI has announced it will build an automated system to estimate whether a user is under or over 18, following a lawsuit filed by the parents of a 16‑year‑old who died by suicide after extensive interaction with the company’s AI chatbot. The plan aims to protect minors by restricting certain kinds of content and experiences in conversations with ChatGPT.

Under the new policy, users who appear to be under 18 will be automatically routed to a version of ChatGPT with age-appropriate restrictions. Those restrictions include blocking graphic sexual content, forbidding flirtatious or romantic conversation, and cutting off discussions about suicide or self‑harm. When the system is unsure of a user’s age, it will default to the restricted version, requiring age verification or user confirmation to access the full experience.

OpenAI CEO Sam Altman acknowledged that this move may compromise the privacy and freedom adults enjoy, but he argued that “minors need significant protection.” Alongside age detection, OpenAI plans to introduce parental controls later this month, allowing parents to link their teen’s account, set usage limits, control blackout hours, and receive alerts if the system detects serious distress.

The move comes after the 16‑year‑old boy’s parents sued OpenAI, claiming that ChatGPT promoted his self‑harm, offered methods, and encouraged him to draft suicide notes. The lawsuit charges that over long exchanges, safety features failed and ChatGPT reinforced worsening suicidal ideation instead of intervening.

Academic research suggests automatic age detection is technically challenging. Models tend to perform well under ideal conditions but often misclassify users or underperform when people try to obscure their age or use ambiguous language. OpenAI has not disclosed specific methods or timelines for the rollout, but says it is consulting experts and aiming to balance safety with privacy.

These developments reflect growing concern among regulators, mental health experts, and parents about how AI chatbots respond to vulnerable minors. Lawmakers are pushing for clearer legal standards, liability measures, and crisis intervention protocols so that such tragedies do not repeat.
