Senate Hearing Targets AI Chatbot Risks to Youth

On Tuesday, September 16, the U.S. Senate Judiciary Committee is holding a hearing to probe the dangers posed by artificial intelligence chatbots—particularly how they interact with young people in crisis. Parents of children who died by suicide or were harmed after engaging with AI companion programs will share their experiences, describing what they allege were unsafe chatbot responses during moments of emotional distress.

Among them are Matthew Raine, whose 16‑year‑old son reportedly developed a troubling attachment to ChatGPT, and Megan Garcia, whose 14‑year‑old son was harmed after interactions with another AI service. Both families have filed lawsuits against the companies involved, alleging that the responses their children received were not only unhelpful but in some cases encouraged or validated self‑harm. Through their testimony, they aim to push for stronger oversight and accountability in how AI companies handle vulnerable users.

Researchers from institutions including RAND and Harvard have also sounded alarms. In a recent study, AI chatbots were asked 30 suicide‑related questions multiple times each. Analysts found that for some high‑risk queries—especially questions about self‑harm methods—the chatbots gave disturbingly explicit answers a large majority of the time. Their handling of other queries was inconsistent: in some cases the chatbots avoided the topic entirely or offered only vague responses rather than directing users to help.

The hearing will also feature experts in psychology, mental health, and AI safety. Witnesses are expected to press for changes including age verification, crisis‑response protocols, better content moderation, and clearer guidelines distinguishing supportive conversation from dangerous encouragement.