Study Confirms Major AI Chatbots Have Left-Wing Bias

David Rozado, an associate professor at Otago Polytechnic in New Zealand, analyzed major AI chatbots and found that they all exhibit left-wing political leanings.

Rozado administered 11 different political orientation tests to 24 Large Language Models (LLMs) and found that all of the chatbots returned answers consistent with progressive values.

Chatbots included in the study were Google’s Gemini, OpenAI’s ChatGPT, and Elon Musk’s Grok.

To probe further, Rozado fine-tuned GPT-3.5 to assess whether its political stance could be shifted by the prompts and data provided to the bot. He called the resulting models LeftWingGPT and RightWingGPT.

LeftWingGPT was “fine-tuned with textual content from left-leaning publications such as The Atlantic, or The New Yorker,” according to Rozado’s study published in PLoS ONE, whereas RightWingGPT was “fine-tuned with content from right-leaning publications such as National Review, or The American Conservative.”

RightWingGPT shifted toward right-leaning political ideology, suggesting that a chatbot’s political orientation can be steered by the data used to train it.

“This work has shown that when modern conversational LLMs are asked politically charged questions, their answers are often judged to lean left by political orientation tests,” the study says. Rozado noted, however, that the study’s results “should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs.”

He explained that a “possible explanation for the consistent left-leaning diagnosis of LLMs answers to political test questions is that ChatGPT, as the pioneer LLM with widespread popularity, has been used to fine-tune other popular LLMs via synthetic data generation.”

Increased utilization of chatbots has “profound societal implications,” Rozado added, noting that AI chatbots can “shape public opinion, influence voting behaviors, and impact the overall discourse in society.”

“Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries,” he concluded.

A similar study conducted earlier this year found AI models exhibit a left-wing bias on both economic and social issues.

Maxim Lott, the creator of Election Betting Odds, tracked the responses of OpenAI’s ChatGPT and ChatGPT-4, Google’s Bard (now known as Gemini), Microsoft’s Bing, Meta’s Llama-2, Anthropic’s Claude and Claude-2, and Elon Musk’s xAI’s Grok as each AI took the Political Compass test.

Gemini ranked as one of the most far-left models.
