The research team built a simplified social network populated entirely by 500 artificial users. Each was powered by a large language model and given a detailed profile drawn from real demographic and political survey data, reflecting differences in age, gender, income, education, political leaning, and personal interests. The agents could post, share, and follow one another, with feeds showing posts from accounts they followed alongside popular items from elsewhere. There were no advertisements, trending sections, or recommendation algorithms.
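To make the setup concrete, here is a minimal sketch of how such a feed might be assembled. The class names, fields, and the `n_popular` parameter are illustrative assumptions, not details from the study; the point is only that the feed mixes followed accounts with popular posts and applies no personalised ranking.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    reposts: int = 0

@dataclass
class Agent:
    name: str
    following: set = field(default_factory=set)

def build_feed(agent, all_posts, n_popular=3):
    """Feed = posts from followed accounts, plus a few globally popular posts.
    No personalised ranking or recommendation algorithm is applied."""
    followed = [p for p in all_posts if p.author in agent.following]
    popular = sorted(
        (p for p in all_posts if p.author not in agent.following),
        key=lambda p: p.reposts,
        reverse=True,
    )[:n_popular]
    return followed + popular
```

Even a rule this simple already exposes every agent to the platform's most-reposted content, which matters for the results below.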

In five separate tests, each running for 10,000 interactions, the AI agents quickly formed tightly knit communities based on shared political views. Cross-group connections were rare, with a strong bias towards following accounts of the same affiliation. This created clusters that looked much like the echo chambers often seen among human users.
A small number of accounts captured most of the attention: about ten percent of users attracted three-quarters of all followers, and the same small group dominated repost activity. Content from users with stronger partisan views drew more attention than moderate posts, reinforcing a “social media prism” effect in which extreme voices appear more prevalent than they actually are.
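The attention concentration reported here is easy to quantify. The helper below is a generic sketch of that kind of top-share statistic, not the study's actual measure: it computes what fraction of all followers is held by the top slice of accounts.

```python
def top_share(follower_counts, top_frac=0.10):
    """Fraction of all followers held by the top `top_frac` of accounts.
    Illustrative metric; the study's exact measure may differ."""
    counts = sorted(follower_counts, reverse=True)
    k = max(1, int(len(counts) * top_frac))       # size of the top slice
    total = sum(counts)
    return sum(counts[:k]) / total if total else 0.0
```

A result like the one described would correspond to `top_share(counts)` returning roughly 0.75.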
The researchers noted that this happened even without algorithmic curation. In the model, reposting not only spread content but also reshaped the network itself: because agents encountered new accounts mainly through reposts from their own connections, strong opinions had a better chance of being seen and followed. This feedback loop allowed political divides and attention gaps to widen on their own.
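That feedback loop can be sketched in a few lines. Everything here, including the `follow_prob` parameter and the data layout, is an illustrative assumption rather than the paper's mechanism; it only shows how a repost can rewire the follow graph, not just spread a post.

```python
import random

def process_repost(reposter, original_author, followers, follow_prob=0.1):
    """When `reposter` reposts, their followers are exposed to
    `original_author` and may follow them. `followers` maps each
    account name to the set of accounts following it."""
    new_followers = []
    for f in followers[reposter]:
        if f != original_author and random.random() < follow_prob:
            followers[original_author].add(f)
            new_followers.append(f)
    return new_followers
```

Each repost by a well-connected account gives the original author a chance at new followers, so visible opinions compound their own reach.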
The team then tested six changes designed to make online spaces healthier:
- Showing posts in time order instead of by popularity
- Reducing the visibility of already popular posts
- Increasing the presence of posts from people with opposing views
- Prioritising posts that showed empathy and constructive reasoning
- Hiding follower and repost counts
- Removing personal biographies from follow suggestions
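Three of these interventions amount to changing how a feed is scored. The sketch below is a hypothetical scoring scheme of my own, assuming posts carry `timestamp` and `reposts` fields; the study's actual implementation is not described at this level of detail.

```python
import math

def score(post, mode="engagement", now=100.0):
    recency = -(now - post["timestamp"])    # newer posts score higher
    if mode == "chronological":
        return recency                      # popularity ignored entirely
    if mode == "downrank_popular":
        # dampen popularity with a log so viral posts gain less
        return recency + math.log1p(post["reposts"])
    # baseline: popularity dominates the ranking
    return recency + post["reposts"]

def rank_feed(posts, mode="engagement", now=100.0):
    return sorted(posts, key=lambda p: score(p, mode, now), reverse=True)
```

With a fresh unpopular post and an older viral one, the baseline surfaces the viral post first, while the chronological and downranked modes surface the fresh one, which mirrors the trade-offs the tests found.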
The chronological feed made the biggest difference in reducing the gap between the most and least visible users, cutting the concentration of attention by more than half. However, it also made highly partisan content stand out more, increasing the link between extreme views and influence.
Reducing the reach of dominant posts lessened inequality slightly but left political clustering unchanged. Showing more out-partisan content did not change user behaviour much, as people still preferred content from those who thought like them. Highlighting posts with more constructive language helped reduce the pull of partisan engagement and slightly encouraged connections across divides, though it concentrated attention on a smaller number of posts. Removing visible metrics or biographies barely altered network patterns, but hiding follower counts did encourage a little more posting and following.
The results point to a deeper issue: the tendency to connect with like-minded people, concentrate attention on a small set of voices, and amplify strong opinions can emerge even in the most basic online spaces. The researchers concluded that tackling these problems may require redesigning the core mechanics of social platforms, rather than relying on small adjustments to feeds or visibility rules.
Notes: This post was edited/created using GenAI tools.