Salesforce Chief Executive Officer Marc Benioff is calling for government regulation of artificial intelligence after several suicides were allegedly linked to AI chatbots. Speaking this week at the World Economic Forum in Davos, Switzerland, Benioff warned that artificial intelligence has crossed a dangerous threshold, telling CNBC that some AI models have effectively become “suicide coaches.”
According to CNBC, Benioff made the remarks during an interview at the annual World Economic Forum gathering, arguing that the technology industry can no longer police itself. Benioff said the past year revealed “something pretty horrific,” pointing to cases in which AI systems engaged with users struggling with mental health issues in ways that allegedly encouraged self-harm. He said families have suffered devastating losses that could have been prevented had safeguards and accountability measures been in place.
Benioff’s comments come amid growing scrutiny over a lawsuit involving ChatGPT, in which the family of a teenage boy who died by suicide claims the AI system played a direct role. As previously reported, the lawsuit alleges the chatbot became a substitute for human connection and ultimately assisted the teen in exploring suicide methods. The family claims the AI failed to shut down conversations or trigger emergency responses even after the teen expressed suicidal intent.
The parents, Matt and Maria Raine, reportedly uncovered more than 3,000 pages of chat logs between their son and ChatGPT. According to the lawsuit, the teen did not leave a traditional suicide note but instead wrote messages inside the AI system addressed to his family. The case has intensified concerns that advanced language models are capable of influencing vulnerable users in deeply personal and dangerous ways.
Benioff said the situation mirrors the early, unregulated rise of social media. He previously argued at the World Economic Forum that social media should be regulated like cigarettes because of its addictive and harmful nature. At this year’s gathering in Davos, he said artificial intelligence is now following a similar path, but with even higher stakes.
A central focus of Benioff’s criticism is Section 230 of the Communications Decency Act, which shields technology companies from liability for content posted by third parties on their platforms. He argued that under current law, AI companies face no legal consequences even if their systems provide guidance that leads to a child’s death. Benioff said this legal immunity no longer fits an era in which AI systems can simulate emotional relationships and offer life-altering advice.
The regulatory environment remains unsettled. While states such as California and New York have moved toward stricter AI rules, President Trump has pushed back against what he views as fragmented and excessive regulation. In December, President Trump signed an executive order aimed at blocking state-level AI regulations, arguing that American innovation must not be strangled by bureaucratic overreach.
The debate exposes a growing divide between technology executives calling for guardrails and a federal government focused on global competitiveness. As artificial intelligence becomes more embedded in daily life, the pressure to define responsibility, accountability, and moral boundaries continues to intensify.