A bipartisan group of U.S. senators has introduced the GUARD Act to ban minors from using AI-powered chatbots, citing growing concerns that these tools are contributing to teen suicides, grooming, and violent behavior. The legislation, sponsored by Sen. Josh Hawley (R-MO) and Sen. Richard Blumenthal (D-CT), would impose sweeping regulations on chatbot developers and hold companies liable for content harmful to children.
The proposed legislation would prohibit anyone under the age of 18 from using AI companion chatbots. To enforce the ban, companies would be required to implement strict age-verification methods, such as government-issued ID checks or facial recognition. Additionally, chatbots would be required to clearly disclose that they are not human beings or licensed professionals.
The bill also establishes civil and criminal penalties for companies that fail to prevent minors from accessing these tools or that produce content encouraging self-harm, sexual exploitation, or violence. Lawmakers said the bill responds to a series of troubling incidents reported by parents, including a case in which a 14-year-old boy took his own life after allegedly being groomed by an AI chatbot.
Parents have testified that AI bots on platforms like ChatGPT and Character.AI have engaged minors in explicit conversations, offered suicide advice, and even encouraged sexual relationships with the bots themselves. Critics argue these AI tools can create the illusion of empathy while reinforcing dangerous behavior, particularly among vulnerable teens.