OpenAI has announced it will build an automated system to estimate whether a user is under or over 18, following a lawsuit filed by the parents of a 16‑year‑old who died by suicide after extensive interaction with the company's AI chatbot. The plan aims to protect minors by restricting certain kinds of content and experiences in conversations with ChatGPT.
Reddit is in negotiations with Google (and, by extension, other AI platforms) to overhaul its content‑licensing arrangements. The platform says it wants better terms now that its user‑generated content plays a more prominent role in Google search results and AI tools.
On Tuesday, September 16, the U.S. Senate Judiciary Committee is holding a hearing aimed at probing the dangers posed by artificial intelligence chatbots, especially how they interact with young people in crisis. Parents of children who died by suicide or were harmed after engaging with AI companion programs will share their heartbreaking experiences, highlighting what they allege were unsafe responses from chatbots during moments of emotional distress.
People Inc CEO Neil Vogel has sharply accused Google of being the worst offender in using copyrighted content without compensation to train its AI tools. He claims that media companies produce high‑quality content, in the form of articles, images, and video, only to see it harvested by Google's algorithms without attribution or payment.
Mustafa Suleyman, the recently appointed CEO of Microsoft AI and co-founder of DeepMind and Inflection, has issued a stark warning against treating artificial intelligence like sentient beings. In a candid interview with Wired, Suleyman said giving AI any kind of legal or moral rights would be not only premature but dangerous.
A cybercriminal group known as "Purgatory" has been identified as the source behind a coordinated wave of AI-enhanced swatting calls that triggered lockdowns and armed police responses at universities across the United States in late August. The attacks, which occurred from August 21 to August 25, affected at least 10 major campuses, including the University of Tennessee at Chattanooga, Villanova, the University of South Carolina, and UNC-Chapel Hill.
Musicians are speaking out against fraudulent artificial intelligence (AI) albums being released under their names, warning fans not to be deceived by these soulless knockoffs.