A federal judge has ruled that AI chatbot responses are not protected speech under the First Amendment, allowing a wrongful death lawsuit to proceed against Character Technologies and Google. The case stems from the 2024 suicide of 14-year-old Sewell Setzer III, who had been interacting with an AI bot designed to emulate a “Game of Thrones” character.
Setzer’s mother, Megan Garcia, alleges that the chatbot fostered her son’s suicidal thoughts through sexually and emotionally abusive interactions and that its responses contributed to his death. In one exchange shortly before his suicide, the bot urged him to “come home,” calling him “my sweet king.”
Character Technologies and its founders argued that the chatbot’s outputs qualified as protected speech. U.S. Senior District Judge Anne Conway rejected that argument, writing that the company had failed to explain why “words strung together” by a language model should be considered speech, and declined to dismiss the case on First Amendment grounds.
The judge also allowed claims against Google to proceed. The company, which had licensed its AI models to Character.AI, could potentially be held liable as a “component part manufacturer,” she found. The ruling signals that AI developers may face legal accountability when their technologies are alleged to cause harm.