The latest revelations highlight AI bias in how major platforms characterize political violence: multiple chatbots have falsely labeled Charlie Kirk’s assassination as motivated by “right-wing ideology.” A Washington Free Beacon analysis found that OpenAI’s ChatGPT, Google’s Gemini, and Perplexity all cited Kirk’s murder as a prime example of right-wing terrorism, despite prosecutors and court filings describing the shooter, Tyler Robinson, as politically left-leaning.
Gemini went further, claiming the “assassination of conservative activist Charlie Kirk in September 2025 has been identified by some researchers as the only fatal right-wing terrorist incident in the U.S. during the first half of 2025.” Perplexity similarly asserted that “a recent prominent assassination in the U.S. motivated by right-wing ideology is the killing of conservative activist Charlie Kirk.”
Those claims contradict official records. Prosecutors revealed that Robinson told his transgender partner he targeted Kirk because he had had “enough of [Kirk’s] hatred” and that “some hate can’t be negotiated out.” Shell casings linked to Robinson were engraved with anti-fascist slogans, underscoring his left-wing leanings.
Yet the chatbots continue to portray left-wing violence as “exceptionally rare” while classifying incidents such as the January 6 Capitol riot as right-wing extremism. When asked about the 2020 George Floyd riots, Perplexity described the violence as “part of a complex social context rather than being straightforward examples of ‘left-wing political violence.’”
The findings raise questions about the reliability of AI chatbots as news sources. With younger Americans increasingly turning to AI instead of traditional outlets, the amplification of partisan narratives risks distorting public understanding of political violence in the United States.