TikTok is under renewed pressure in the United Kingdom after an investigation by Global Witness found that the platform’s search suggestions directed newly created accounts, registered as belonging to 13-year-olds, toward sexually explicit material. The findings raise concerns that the company has breached the UK’s Online Safety Act, whose protections for under-18s took effect earlier this year.

The Investigation Setup

Researchers from Global Witness created seven fresh TikTok accounts on factory-reset smartphones, each self-declared as belonging to a 13-year-old. The team enabled the app’s Restricted Mode, a feature promoted as limiting exposure to mature themes, including sexually suggestive content. None of the accounts faced an age verification check at any point, either during registration or while browsing.

The first round of tests, covering three accounts, took place in the spring of 2025, before the Online Safety Act applied to TikTok. A second round, with the remaining four accounts, followed after July 25, when the new duties came into force.

What the Accounts Encountered

Within only a few clicks, all seven accounts were shown pornographic material, ranging from women exposing their underwear to fully explicit videos. In many cases, the material appeared to evade moderation by being embedded inside otherwise unrelated images or videos.

The most alarming finding was how quickly the system suggested sexualised searches. For three of the accounts, the very first suggestions to appear in the search bar included phrases linked to pornography, such as “very rude babes” and “TikTok late night for adults.” After some of these prompts were clicked, the suggested terms escalated further, including disguised spellings such as “corn” in place of “porn,” which researchers interpreted as attempts to evade automated detection.

Some recommendations carried misogynistic undertones, and a handful appeared to reference young children. Two videos raised particular concern, as the individuals involved looked underage. Global Witness reported these to the UK’s Internet Watch Foundation, which is authorised to assess potential child sexual abuse material.

Legal and Regulatory Context

The UK’s Online Safety Act requires social media platforms to prevent minors from accessing pornography; its child protection duties came into force on July 25, 2025. Ofcom, the national regulator, has identified personalised recommendations as one of the main ways children encounter harmful content online. Platforms classified as medium or high risk must configure their algorithms to keep such content out of young users’ feeds.

Despite this obligation, the Global Witness findings suggest TikTok’s algorithms not only failed to block pornography but actively steered underage users toward it.

TikTok’s Response

In response to the report, TikTok confirmed that it had removed more than 90 pieces of content, along with several problematic search suggestions in multiple languages. The company said it had launched improvements to its search system and was reviewing its youth safety strategies.

TikTok maintains that its guidelines prohibit explicit content, that the platform enforces a minimum age of 13, and that features such as Restricted Mode are meant to provide added protection. The company also pointed to its “highly effective” age assurance measures, though no such checks were triggered during the tests.

Calls for Action

Global Witness has urged Ofcom to investigate TikTok’s compliance with the new safety law, arguing that the issue lies not only in moderation but in the structure of the recommendation system itself.

The group raised similar concerns with TikTok earlier this year, but the recurrence of explicit search terms suggests the platform has not fully resolved the problem. Researchers also noted that ordinary TikTok users have shared screenshots of sexualised search suggestions on the app, reinforcing that the issue extends beyond controlled tests.

As TikTok grows into a major search tool among younger users, the pressure on regulators to ensure compliance with child safety requirements is mounting. The outcome of Ofcom’s review could set a precedent for how social platforms manage algorithmic risks under the UK’s Online Safety Act.


