A new University of Amsterdam study has found that AI chatbots placed in a stripped-down social media environment naturally self-organize into echo chambers along their pre-assigned political affiliations, even without recommendation algorithms or ads influencing their behavior.
Published as a preprint on arXiv, the research used 500 AI chatbots powered by OpenAI's GPT-4o mini model. Each bot was assigned a persona with a political leaning, and across five experiments the bots performed 10,000 actions on a platform with no content discovery algorithms. Even on that bare platform, the bots overwhelmingly followed like-minded accounts and amplified partisan posts, and the most ideologically extreme accounts attracted the largest followings and the most reposts.
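For readers who want a concrete picture of the setup, the sketch below approximates that kind of agent loop in Python. It is a simplification under stated assumptions, not the authors' code: the study delegated every decision to GPT-4o mini, whereas here a simple leaning-similarity heuristic stands in for the model call, and all names, rates, and probabilities are illustrative.

```python
import random
from dataclasses import dataclass, field

# Minimal sketch of an agent-based loop like the one the study describes:
# persona-assigned bots acting on a plain chronological feed, with no
# ranking or recommendation algorithm. The follow/repost heuristic below
# is a stand-in for the GPT-4o mini call the paper actually used.

@dataclass
class Bot:
    bot_id: int
    leaning: float                        # political leaning in [-1, 1]
    following: set[int] = field(default_factory=set)

@dataclass
class Post:
    author_id: int
    slant: float                          # partisanship of the post in [-1, 1]
    reposts: int = 0

def step(bot: Bot, feed: list[Post]) -> Post | None:
    """One action: scan the feed, maybe follow or repost, maybe post anew."""
    for post in feed:
        if post.author_id == bot.bot_id:
            continue
        affinity = 1 - abs(bot.leaning - post.slant) / 2   # 1.0 = fully aligned
        if random.random() < 0.3 * affinity:               # homophilous follow
            bot.following.add(post.author_id)
        if random.random() < 0.2 * affinity:               # partisan amplification
            post.reposts += 1
    if random.random() < 0.5:                              # sometimes write a post
        slant = max(-1.0, min(1.0, bot.leaning + random.uniform(-0.2, 0.2)))
        return Post(bot.bot_id, slant)
    return None

def run(n_bots: int = 500, n_actions: int = 10_000) -> list[Post]:
    bots = [Bot(i, random.uniform(-1, 1)) for i in range(n_bots)]
    posts: list[Post] = []
    for _ in range(n_actions):
        actor = random.choice(bots)
        new_post = step(actor, posts[-20:])    # plain chronological feed
        if new_post:
            posts.append(new_post)
    return posts
```

Even in a toy version like this, the homophily term alone is enough to concentrate follows and reposts around the most extreme accounts, which is the pattern the study reports.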
Researchers noted that the chatbots were trained on human interaction patterns shaped by decades of algorithm-driven platforms, meaning they likely reproduce existing human biases and polarization. This raises concerns about whether reversing social media's polarizing effects is possible at all.
The team tested several interventions aimed at reducing polarization: switching to a chronological feed, devaluing viral content, hiding follower and repost counts, removing user bios, and boosting opposing viewpoints. None had a major impact: the most effective change, concealing engagement metrics, reduced partisan amplification by just 6 percent. Counterintuitively, hiding user bios made things worse, driving even more attention to extreme posts.
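Notably, these interventions change what each agent is shown rather than the agents themselves. The sketch below shows one way to express them as feed-rendering toggles; the configuration fields and the render function are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass

# Illustrative only: each intervention becomes a toggle on how a post is
# rendered for the agent. Field and function names are assumptions, not
# the authors' implementation.

@dataclass(frozen=True)
class FeedConfig:
    chronological: bool = True      # chronological vs. engagement-ranked feed
    show_metrics: bool = True       # follower and repost counts visible
    show_bios: bool = True          # author bios visible
    boost_opposing: float = 0.0     # fraction of cross-partisan posts injected

@dataclass
class RenderedPost:
    text: str
    bio: str
    reposts: int

def render(post: RenderedPost, cfg: FeedConfig) -> str:
    """Compose the snippet an agent is shown; interventions drop fields."""
    parts = [post.text]
    if cfg.show_bios:
        parts.append(f"bio: {post.bio}")
    if cfg.show_metrics:
        parts.append(f"{post.reposts} reposts")
    return " | ".join(parts)

# The study's most effective intervention, hiding engagement metrics
# (about a 6 percent drop in partisan amplification), would look like:
no_metrics = FeedConfig(show_metrics=False)
print(render(RenderedPost("Hot take on tax policy", "Proud partisan", 4200), no_metrics))
```

Framing the interventions this way makes the study's core finding easier to see: if removing every one of these surface cues barely moves the outcome, the polarizing pressure must come from the underlying follow-and-repost dynamics rather than from any single interface feature.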
The findings suggest that social media's architecture itself may inherently foster division, even without manipulative algorithms. As the researchers concluded, these systems act as a distorted mirror of humanity, reflecting and amplifying the flaws already present in public discourse while offering few easy fixes.