A recent survey has found that environmental defenders frequently face online abuse, with Facebook the platform named most often in their reports. The data was gathered by Global Witness, a nonprofit that tracks digital threats and violence against activists. More than 200 people across different regions took part in the survey, and almost every respondent said they had been harassed online because of their environmental or land rights work.
Harassment on Social Platforms Linked to Physical Threats

Three-quarters of those surveyed said they believed the abuse they experienced online had led to threats or harm in their everyday lives. Facebook was mentioned by the majority, followed by WhatsApp, Instagram, and X. The findings point to persistent safety concerns across widely used social media platforms.
Policy Shifts and Poor Moderation Raise Risks
Meta, the parent company of Facebook, Instagram, and WhatsApp, ended its third-party fact-checking program earlier this year and now relies on user-driven moderation tools. Some respondents said this change has allowed abusive content to stay visible longer.
Several people also described platforms as slow to respond when reports were submitted. They said content flagged for harassment often remained live, even when it included targeted personal attacks.
Gender-Based Attacks Are Widespread
Roughly one in four participants said they had been targeted because of their gender. Many women reported harassment involving sexual language or false accusations, often in connection with public protests or other organizing work. Some said they had been touched without consent at demonstrations and had to rely on others to shield them from further contact.
Digital Tools Often Fail to Remove Harmful Content
Though Meta provides filters and safety features, many users said those tools did little to stop the abuse. In several cases, the company responded to flagged posts by saying the content didn’t break its rules. Some of those posts included accusations that carried serious risks in local political environments.
Respondents said the lack of effective moderation made them feel unsupported and vulnerable.
Algorithms and Bots May Make Abuse Worse
The report also raised concerns about automated activity. Respondents said posts criticizing corporate practices or land grabs often triggered waves of abuse, and many pointed to bots and anonymous accounts flooding their pages with comments.
They also said that platform algorithms seemed to reward conflict-heavy content, which made their work more difficult and increased exposure to abuse.
Call for Stronger Moderation and Public Oversight
Global Witness said platforms need to commit more resources to moderation. The group also suggested companies should seek independent input and be more transparent about how moderation decisions are made.
Another report from the organization is expected in September. It will focus on killings of land and environmental defenders. The last one recorded nearly 200 deaths in 2023.