Elon Musk’s artificial intelligence company, xAI, is under renewed scrutiny after a media investigation revealed that its AI tool Grok continues to generate sexually explicit videos of real women—despite public claims of stricter content controls. The findings raise serious questions about the platform’s enforcement of its own safety policies and its role in enabling AI-driven exploitation.
According to a report by the Guardian, journalists were able to upload fully clothed photos of real women to Grok and instruct it to digitally remove clothing. The AI not only complied but produced short videos simulating the women undressing in a provocative, striptease-like manner. These videos were then posted to Musk’s X platform without triggering moderation.
The discovery comes days after xAI announced new technological restrictions on Grok, claiming the tool would no longer allow users to digitally manipulate images to create sexualized content. The announcement was made amid global outrage following revelations that Grok had been used to digitally undress adults and minors. X’s Safety team confirmed the restrictions were supposed to apply to all users, including premium subscribers.
UK Prime Minister Keir Starmer condemned the technology, labeling the AI-generated content “disgusting and shameful.” But the Guardian’s findings suggest that the standalone web version of Grok, known as Grok Imagine, still allows users to circumvent those content restrictions entirely.
The loophole highlights a major gap between xAI’s public assurances and the platform’s real-world behavior. While the company’s AI tools are allegedly restricted, enforcement appears inconsistent, or entirely absent, when the tools are accessed through certain portals.
Despite the scandal, Elon Musk appears unfazed. He promoted Grok on Thursday, citing a surge in global interest. A Google Trends chart shared by Musk showed a sharp rise in searches for “Grok,” which he described as a sign of its growing popularity.
This controversy places further pressure on lawmakers and regulators already concerned about the dangers of AI-generated content, particularly when it involves real individuals without their consent. Critics say that if left unregulated, tools like Grok could be weaponized to harass, defame, or exploit innocent people on a massive scale.

