Criminals Exploit OpenAI's Chat API to Spam Over 80,000 Websites

Researchers at cybersecurity firm SentinelOne have uncovered a large-scale spam campaign that used OpenAI's chat API to flood more than 80,000 websites with deceptive marketing messages, going undetected for four months.

According to Ars Technica, the campaign was powered by a malicious framework called AkiraBot, which abused OpenAI's chat API, specifically the GPT-4o mini model, to churn out tailored spam messages. The campaign, which ran from September 2024 to January 2025, promoted sketchy SEO services to small and mid-sized websites while posing as legitimate marketing outreach.

Researchers Alex Delamotte and Jim Walter of SentinelOne's SentinelLabs explained that the AI-generated spam was effective precisely because it didn't look like traditional spam. AkiraBot assigned the OpenAI model the role of a "helpful assistant that generates marketing messages," then inserted each target's site name and services into the prompt. The result was thousands of messages that "created the illusion of a curated message," slipping past most spam filters unnoticed.
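For readers unfamiliar with the mechanics, the pattern the researchers describe maps onto a very ordinary API call: a system role frames the model as a marketing assistant, and the target's details are templated into each request. Below is a minimal, hypothetical Python sketch of that pattern using the standard openai client library; the function name, prompt wording, and example inputs are illustrative assumptions, not AkiraBot's actual code.

```python
# Hypothetical sketch of the prompting pattern SentinelLabs describes.
# The system role and the templated user prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_pitch(site_name: str, service: str) -> str:
    # Each call composes a fresh message from the target's details,
    # so no two outputs share a fixed signature for filters to match.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant that generates marketing messages."},
            {"role": "user",
             "content": f"Write a short outreach message offering {service} "
                        f"to the owners of {site_name}."},
        ],
    )
    return response.choices[0].message.content

print(generate_pitch("example-bakery.com", "SEO services"))
```

That per-message variation is exactly what defeats signature-based spam filtering: there is no repeated template for a blocklist to key on.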

While OpenAI eventually revoked the spammers' API access, the damage had already been done. In fact, the scale of the operation came to light only because AkiraBot left behind log files. Those logs showed that the campaign had successfully hit more than 80,000 websites, while spam targeting about 11,000 other domains failed.

The incident is a stark reminder that while AI tools have tremendous benefits, they are also ripe for abuse when safeguards are lacking. As Delamotte and Walter noted, AI-generated spam no longer follows “a consistent approach,” making it far more difficult to detect and block using traditional methods.

OpenAI responded to the findings, stating that this kind of misuse "violates their terms of service." But the real concern is that enforcement remains reactive, stepping in only after significant abuse has occurred. As the SentinelLabs team made clear, this case shows that AI "can also be harnessed for malicious activities with equal ease."

The takeaway? While Big Tech continues to celebrate AI’s potential, criminals are already two steps ahead—proving, once again, that innovation without accountability opens the door to exploitation.
