Two current and two former Meta employees disclosed documents to Congress alleging that the company may have suppressed research on children’s safety, according to a report[1] from The Washington Post.

According to their claims, Meta changed its policies around researching sensitive topics — like politics, children, gender, race, and harassment — six weeks after whistleblower Frances Haugen[2] leaked internal documents that showed how Meta’s own research found that Instagram can damage teen girls’ mental health. These revelations, which were made public in 2021, kicked off years of hearings in Congress over child safety on the internet, an issue that remains a hot topic[3] in global governments today.

As part of these policy changes, the report says, Meta proposed two ways that researchers could limit the risk of conducting sensitive research. One suggestion was to loop lawyers into their research, protecting their communications from “adverse parties” due to attorney-client privilege. Researchers could also write about their findings more vaguely, avoiding terms like “not compliant” or “illegal.”

Jason Sattizahn, a former Meta researcher specializing in virtual reality, told The Washington Post that his boss made him delete recordings of an interview in which a teen claimed that his ten-year-old brother had been sexually propositioned on Meta’s VR platform, Horizon Worlds.

“Global privacy regulations make clear that if information from minors under 13 years of age is collected without verifiable parental or guardian consent, it has to be deleted,” a Meta spokesperson told TechCrunch.

But the whistleblowers claim that the documents they submitted to Congress show a pattern of employees being discouraged from discussing and researching their concerns around how children under 13 were using Meta’s social virtual reality apps.

“These few examples are being stitched together to fit a predetermined and false narrative; in reality, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being,” Meta told TechCrunch.


In a lawsuit filed in February, Kelly Stonelake — a former Meta employee of fifteen years — raised concerns similar to those of the four whistleblowers. She told TechCrunch[4] earlier this year that she led “go-to-market” strategies to bring Horizon Worlds to teenagers, international markets, and mobile users, but felt the app did not have adequate safeguards to keep out users under 13; she also flagged persistent problems with racism on the platform.

“The leadership team was aware that in one test, it took an average of 34 seconds of entering the platform before users with Black avatars were called racial slurs, including the ‘N-word’ and ‘monkey,’” the suit alleges.

Stonelake has separately sued Meta for alleged sexual harassment and gender discrimination.

While these whistleblowers’ allegations center on Meta’s VR products, the company is also facing criticism for how other products, like AI chatbots, affect minors. Reuters reported[5] last month that Meta’s AI rules previously allowed chatbots to have “romantic or sensual” conversations with children.

References

  1. ^ report (www.washingtonpost.com)
  2. ^ Frances Haugen (techcrunch.com)
  3. ^ remains a hot topic (techcrunch.com)
  4. ^ told TechCrunch (techcrunch.com)
  5. ^ reported (www.reuters.com)
