After the Southport stabbings of 29 July 2024, in which three girls were killed and several others injured, false claims about the attacker spread rapidly across X, formerly Twitter. Posts wrongly described the suspect as a Muslim asylum-seeker, fueling anti-Muslim sentiment and triggering real-world violence in several UK towns.

Accounts on X began pushing misinformation within hours of the incident. Some claimed the attacker was a foreign national, while others framed the stabbings as symptomatic of a broader immigration problem. One prominent anti-immigrant account posted claims that were viewed more than four million times; combined, similar posts reached roughly 27 million views in the first 24 hours.

Amnesty International investigated how X’s recommender algorithm shaped this response. Its findings revealed that the system prioritized replies and conversation depth over accuracy or safety. Posts that triggered back-and-forth exchanges gained greater reach, regardless of their truthfulness or potential harm.
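To make that dynamic concrete, here is a minimal, purely illustrative sketch of an engagement-weighted ranker. Everything in it is an assumption: the field names, the weights, and the verified-account multiplier are invented for illustration and are not X's actual code or values. The structural point matches the report's finding: when replies and thread depth dominate the score and no term accounts for accuracy, posts that provoke arguments rank highest.

```python
# Toy engagement-weighted ranker. All weights and fields are hypothetical,
# chosen only to illustrate the dynamic Amnesty describes.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    reposts: int
    replies: int
    reply_depth: int       # length of back-and-forth threads under the post
    author_verified: bool  # paid/verified accounts assumed to get a boost

def rank_score(p: Post) -> float:
    # Hypothetical weights: replies and conversation depth dominate,
    # so contentious posts that spark arguments score highest.
    score = 0.5 * p.likes + 1.0 * p.reposts + 13.0 * p.replies + 20.0 * p.reply_depth
    if p.author_verified:
        score *= 2.0  # flat visibility multiplier (assumed)
    # Note what is absent: no term penalizes falsehood or incitement,
    # mirroring the report's finding that accuracy plays no role.
    return score

posts = [
    Post(likes=900, reposts=120, replies=15, reply_depth=2, author_verified=False),
    Post(likes=150, reposts=60, replies=400, reply_depth=30, author_verified=True),
]
# The argument-heavy verified post outranks the widely liked one.
for p in sorted(posts, key=rank_score, reverse=True):
    print(round(rank_score(p)), p)
```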

The platform made no effort to slow or limit such content before it gained traction; restrictions took effect only after user complaints. As a result, inflammatory posts had already reached large audiences by the time any moderation occurred.

The report also pointed to changes in X’s operations since Elon Musk’s acquisition in 2022. These included the removal of moderation teams, the return of previously banned accounts, and the rollout of X Premium, which gave paid users more visibility. Amnesty found no sign that any risk assessments were conducted when these shifts took place.

During the Southport unrest, many reinstated accounts shared hostile content targeting Muslims and migrants. Their posts reached beyond existing followers and gained further exposure through interactions with high-profile users. X’s structure amplified these messages by design, especially when replies came from verified or widely followed accounts.
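A toy model shows why reply-driven exposure compounds. The function below is an assumption-laden sketch, not X's actual mechanics: it simply supposes that a reply from a high-follower account surfaces the original post to some fraction of that account's audience.

```python
# Toy reach model (assumption, not X's real distribution logic): a reply
# from a high-follower account exposes the original post to a slice of
# that account's audience, so reach compounds beyond the author's followers.

def effective_reach(author_followers: int,
                    replier_followers: list[int],
                    exposure_fraction: float = 0.05) -> int:
    """Estimate views: the author's own audience plus a fraction of each
    replier's audience who see the thread via the reply."""
    spillover = sum(int(f * exposure_fraction) for f in replier_followers)
    return author_followers + spillover

# A post from a 10k-follower account, replied to by two accounts with
# millions of followers, can out-reach its own audience many times over.
print(effective_reach(10_000, [1_000_000, 2_500_000]))  # 185000
```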

As tensions escalated offline, attacks were reported on mosques, migrant-owned businesses, and shelters. Authorities arrested several people for inciting violence through social media. Some received prison sentences. A parliamentary committee later concluded that social media platforms had played a role in intensifying the crisis.

Amnesty also raised concerns about transparency. Although X open-sourced parts of its recommendation algorithm in 2023, it has not disclosed how the ranking system works today or what changes, if any, have been made since. Data access for researchers has also been cut back, making external oversight difficult.

The report called for reforms to reduce algorithm-driven harm. These include stronger preemptive filters for harmful content, more detailed public reporting on moderation actions, and restored access for researchers. Amnesty also urged UK regulators to enforce the Online Safety Act and request full records of how X managed content during the Southport events.

In its conclusion, Amnesty found that X’s algorithm and policy decisions helped accelerate the spread of harmful narratives at a critical moment. The lack of safeguards allowed false claims to outpace facts. Without structural changes, the risk of similar outcomes remains.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

By admin