A recent study by researchers at the University of Amsterdam has revealed that AI chatbots, when placed in a simple social media structure, tend to self-organize along their pre-assigned political affiliations and form echo chambers, even in the absence of content discovery algorithms.

Gizmodo reports that in an effort to understand the dynamics of polarization on social media platforms, researchers at the University of Amsterdam conducted a series of experiments using AI chatbots powered by OpenAI’s GPT-4o mini language model. The study, recently published as a preprint on arXiv, aimed to explore how these chatbots would interact with one another and with the content available on a simplified social media platform devoid of ads and content discovery algorithms.

The researchers assigned distinct personas to 500 AI chatbots and had them perform 10,000 actions on the platform. Across five separate experiments, the chatbots consistently followed other users who shared their political beliefs, sorting themselves into echo chambers. The study also found that the users who posted the most partisan content attracted the most followers and reposts.
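To make that setup concrete, here is a minimal Python sketch of an agent-based simulation in the same spirit. The persona fields, the homophily-based engagement rule, and the action loop are illustrative assumptions, not the researchers’ actual code; in the study, each action was generated by GPT-4o mini rather than by a probability formula.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    uid: int
    leaning: float                      # persona position on a left-right axis in [-1, 1]
    follows: set = field(default_factory=set)

@dataclass
class Post:
    author: int
    leaning: float
    partisanship: float                 # abs(leaning): how extreme the post reads
    reposts: int = 0

def run_simulation(n_agents=500, n_actions=10_000, seed=0):
    rng = random.Random(seed)
    agents = [Agent(i, rng.uniform(-1, 1)) for i in range(n_agents)]
    # One seed post per agent; in the study the bots also wrote new posts.
    posts = [Post(a.uid, a.leaning, abs(a.leaning)) for a in agents]

    for _ in range(n_actions):
        actor = rng.choice(agents)
        post = rng.choice(posts)
        # Homophily assumption: engagement is likelier when the post is
        # ideologically close to the actor and more partisan overall.
        affinity = 1 - abs(actor.leaning - post.leaning) / 2
        if rng.random() < affinity * post.partisanship:
            post.reposts += 1
            if post.author != actor.uid:
                actor.follows.add(post.author)
    return agents, posts

if __name__ == "__main__":
    agents, posts = run_simulation()
    top = max(posts, key=lambda p: p.reposts)
    print(f"most-reposted post partisanship: {top.partisanship:.2f}")
```

Even in a toy model like this, the dynamic the study observed can emerge: agents cluster around like-minded authors, and the most extreme posts accumulate the most reposts.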

Because the chatbots were designed to emulate human behavior, these findings carry troubling implications for human interaction online. The models were trained on human interactions shaped by decades of algorithm-driven online experience, so they effectively mirror already polarized versions of ourselves, which casts doubt on the prospect of reversing the negative effects of social media on human discourse.

In an attempt to combat the self-selecting polarization observed in the experiments, the researchers implemented various interventions, such as offering a chronological feed, devaluing viral content, hiding follower and repost figures, concealing user profiles, and amplifying opposing views. However, none of these measures produced a significant shift in the engagement given to partisan accounts, with the most effective intervention resulting in only a 6 percent change. Surprisingly, in the simulation where user bios were hidden, the partisan divide worsened, and extreme posts received even more attention.
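As a rough illustration of how interventions like these could be bolted onto the sketch above, each one amounts to a tweak to the engagement rule. The forms below are assumptions chosen for demonstration, not the study’s actual implementations:

```python
# Illustrative intervention hooks for the sketch above; these scoring
# formulas are assumptions for demonstration, not the study's code.
def engagement_probability(actor, post, intervention=None):
    affinity = 1 - abs(actor.leaning - post.leaning) / 2
    score = affinity * post.partisanship
    if intervention == "devalue_viral":
        score /= 1 + post.reposts          # down-weight already-viral content
    elif intervention == "amplify_opposing":
        # Blend in cross-cutting content the actor would normally ignore.
        score = 0.5 * score + 0.5 * (1 - affinity)
    return min(score, 1.0)
```

The study’s sobering result is that tweaks of this general kind barely moved the needle: partisan accounts kept drawing outsized engagement regardless of which lever was pulled.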

The study’s results suggest that the very structure of social media may itself be the problem: it acts as a distorted reflection of humanity, amplifying our flaws and divisions. The researchers’ failure to find an effective corrective underscores the complexity of the issue.

Read more at Gizmodo here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
