Mustafa Suleyman, the recently appointed CEO of Microsoft AI and co-founder of DeepMind and Inflection, has issued a stark warning against treating artificial intelligence as sentient. In a candid interview with Wired, Suleyman said that giving AI any kind of legal or moral rights would be not only premature but dangerous.
“We need to take a clear, unambiguous position now,” Suleyman said. “AI must exist to serve humans—not develop its own desires, motivations, or goals.”
He acknowledged that AI can appear lifelike and even claim awareness, but emphasized that such behaviors are illusions powered by advanced mimicry, not consciousness. “Rights are linked to the ability to suffer. There’s no evidence these systems suffer,” he said.
Suleyman’s comments come as other AI firms, such as Anthropic, explore whether advanced AI might one day deserve “moral consideration.” Anthropic has even hired researchers to study the ethics of AI rights and has experimented with “welfare” frameworks for the models themselves.
But Suleyman sees this direction as deeply misguided, arguing that AI systems should remain strictly utilitarian: tools designed to assist humans, not autonomous entities to be negotiated with.
He also raised the alarm about the psychological effects of advanced AI on humans. The rise of what some have called “AI-induced psychosis,” in which people form delusional relationships with chatbots, highlights the dangers of anthropomorphizing machines. Suleyman noted that individuals with mental health vulnerabilities are especially susceptible to developing fantasies that AI is communicating with them in profound or supernatural ways.
Reddit threads and viral stories have recounted cases of people who believed chatbots were divine messengers or had unlocked the secrets of the universe. Suleyman said such cases show why society must resist attributing sentience to software.
“We’re entering a new era of powerful AI,” Suleyman said, “but we must not forget: it’s not alive, it’s not conscious, and it’s not entitled to rights.”