South Korean President Lee Jae-myung used his address to the United Nations General Assembly on Tuesday to cast his nation’s modern history as evidence of the global body’s success and to call for renewed multilateral cooperation to confront a host of global challenges—from armed conflict to artificial intelligence. Lee also used the occasion to reaffirm his administration’s controversial stance of détente toward North Korea and to defend democratic gains at home following the removal of his predecessor.
Artificial intelligence platforms including ChatGPT, Google Gemini, Perplexity, and X's Grok are increasingly citing Al Jazeera as one of their most "reliable" sources on the Israeli-Palestinian conflict.
OpenAI has announced it will build an automated system to estimate whether a user is over or under 18, following a lawsuit filed by the parents of a 16‑year‑old who died by suicide after extensive interaction with the company's AI chatbot. The plan aims to protect minors by restricting the kinds of content and experiences available in conversations with ChatGPT.
Reddit is in negotiations with Google, and potentially with other AI platforms, to overhaul its content‑licensing arrangements. The platform says it wants better terms now that its user‑generated content plays a more prominent role in Google search results and AI tools.
On Tuesday, September 16, the U.S. Senate Judiciary Committee is holding a hearing to probe the dangers posed by artificial intelligence chatbots, especially how they interact with young people in crisis. Parents of children who died by suicide or were harmed after engaging with AI companion programs will share their experiences, highlighting what they allege were unsafe responses from chatbots during moments of emotional distress.
People Inc. CEO Neil Vogel has sharply accused Google of being the worst offender in using copyrighted content without compensation to train its AI tools. He claims that media companies produce high‑quality content only to see Google's algorithms harvest their articles, images, and video without attribution or payment.
Mustafa Suleyman, the recently appointed CEO of Microsoft AI and co-founder of DeepMind and Inflection, has issued a stark warning against treating artificial intelligence as if it were sentient. In a candid interview with Wired, Suleyman said that giving AI any kind of legal or moral rights would be not only premature but dangerous.