TikTok is testing a range of new features that touch nearly every part of its platform. These include pre-publish warnings for creators whose videos might otherwise be flagged after posting, a crowd-powered fact-checking system, and updates to parental controls that give guardians more visibility into their teen’s activity.
Content Check Offers Previews Before Posting
The company is trying out a system that reviews videos before they appear in the feed. The tool, available through TikTok Studio on desktop, checks whether a video could be excluded from the platform’s recommendation system. If it spots a problem, it flags it before publication, giving the creator a chance to make edits first.

The initial version is called Content Check Lite. It’s limited in scope for now, but TikTok is also working on a broader version. That expanded version would compare the post with its full set of Community Guidelines, and flag issues that could block the video from being recommended at all.
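TikTok has not described the mechanics of the check, but conceptually it resembles a rule-based pre-publish linter: run the draft through a set of guideline rules and surface anything that would hurt its eligibility. The sketch below is purely illustrative; the `Draft` fields, rule names, and thresholds are hypothetical, not TikTok's.

```python
# A minimal, hypothetical sketch of a pre-publish check. The specific rules
# and field names are invented for illustration; TikTok has not published
# how Content Check actually evaluates a draft.
from dataclasses import dataclass, field

@dataclass
class Draft:
    caption: str
    duration_seconds: int
    uses_licensed_audio: bool

@dataclass
class CheckResult:
    issues: list[str] = field(default_factory=list)

    @property
    def recommendable(self) -> bool:
        return not self.issues

def content_check_lite(draft: Draft) -> CheckResult:
    """Flag problems before posting so the creator can edit first."""
    result = CheckResult()
    if draft.duration_seconds < 3:
        result.issues.append("Video may be too short to qualify for the feed.")
    if not draft.uses_licensed_audio:
        result.issues.append("Unlicensed audio can exclude a post from recommendations.")
    if "follow for follow" in draft.caption.lower():
        result.issues.append("Engagement-bait captions can limit reach.")
    return result

draft = Draft(caption="Follow for follow!", duration_seconds=2, uses_licensed_audio=True)
for issue in content_check_lite(draft).issues:
    print("Warning:", issue)
```

The value of this kind of tool is the timing: the same rules presumably apply after posting anyway, but surfacing them before publication turns a silent ranking penalty into an actionable warning.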
A smaller version of this approach was already used with TikTok Shop sellers. Based on early data, it reduced the number of poor-quality uploads by more than a quarter. For regular creators, the company sees this tool as a way to reduce confusion around how videos get ranked and what causes them to be ignored by the algorithm.
New Messaging and Moderation Tools for Creators
Several changes were also made to help creators manage how they interact with viewers. TikTok has added a new kind of inbox, designed to make it easier to find and respond to important messages. This inbox has separate sections for unread and saved messages, and creators can also build quick responses for frequently asked questions.
Another feature called Creator Chat Room opens up group messaging between creators and followers. These chats are limited to 300 people. To qualify, creators need to be at least 18, have 10,000 or more followers, and belong to either the Subscriptions or Live Fan Club program.
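For illustration, the stated eligibility rules reduce to a simple predicate. The numbers (18+, 10,000 followers, the 300-member cap, and membership in Subscriptions or Live Fan Club) come from the announcement; the `Creator` type and function names are invented for this sketch.

```python
# Eligibility rules as stated in the announcement; the types and function
# names here are illustrative, not TikTok's API.
from dataclasses import dataclass

MAX_CHAT_MEMBERS = 300  # stated cap per Creator Chat Room

@dataclass
class Creator:
    age: int
    followers: int
    programs: set[str]  # e.g. {"Subscriptions", "Live Fan Club"}

def can_open_chat_room(creator: Creator) -> bool:
    """Apply the three stated eligibility rules."""
    return (
        creator.age >= 18
        and creator.followers >= 10_000
        and bool(creator.programs & {"Subscriptions", "Live Fan Club"})
    )

def can_add_member(current_size: int) -> bool:
    """Enforce the 300-person chat limit."""
    return current_size < MAX_CHAT_MEMBERS

print(can_open_chat_room(Creator(age=21, followers=12_000, programs={"Subscriptions"})))  # True
```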
In terms of moderation, creators now have better control over comments. A new setting lets them filter out specific words, even during live streams, and anyone who uses a blocked phrase is muted for a duration the creator chooses. A separate tool, called Creator Care Mode, learns from how a creator moderates: if certain replies are regularly removed or reported, the system tries to keep similar content from appearing again.
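The blocked-word mute described above maps naturally onto a small amount of state: check whether the commenter is muted, scan for blocked phrases, and start a timer on a match. The toy version below makes several assumptions (in-memory storage, substring matching, a creator-chosen duration in seconds) and is not TikTok's implementation.

```python
# Toy model of a creator-configured word filter with a timed mute.
# Storage, matching, and timing details are assumptions for illustration.
import time

class CommentFilter:
    def __init__(self, blocked_words: set[str], mute_seconds: int = 300):
        self.blocked_words = {w.lower() for w in blocked_words}
        self.mute_seconds = mute_seconds          # duration chosen by the creator
        self.muted_until: dict[str, float] = {}   # user -> unix time the mute ends

    def allow(self, user: str, comment: str) -> bool:
        now = time.time()
        if self.muted_until.get(user, 0.0) > now:
            return False  # still muted from an earlier violation
        if any(word in comment.lower() for word in self.blocked_words):
            self.muted_until[user] = now + self.mute_seconds
            return False  # drop the comment and start the mute timer
        return True

live_filter = CommentFilter({"spoiler"}, mute_seconds=60)
print(live_filter.allow("viewer1", "major spoiler ahead"))  # False: muted for 60s
print(live_filter.allow("viewer1", "sorry!"))               # False: mute still active
```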
Fact-Checking Opens to the Public
TikTok has launched a public version of its fact-checking pilot in the United States. The system is called Footnotes, and it allows people to add notes under videos that may need more context. Viewers can also rate those notes, and the system looks at whether people from different viewpoints agree on a note’s usefulness.

If a note gets positive feedback from both sides of a topic, the system is more likely to show it. The approach aims to reduce one-sided voting and to give more weight to information that different groups find valuable.
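TikTok has not published the Footnotes scoring algorithm; X's Community Notes implements this kind of "bridging" ranking with matrix factorization over rater behavior. The sketch below captures the core idea in a much simpler form: a note surfaces only once raters from at least two viewpoint groups each find it helpful. The group labels and the 0.6 threshold are illustrative assumptions.

```python
# Simplified illustration of bridging-style note ranking. This is not
# TikTok's algorithm; the threshold and grouping are assumptions.
from collections import defaultdict

def show_note(ratings: list[tuple[str, bool]], threshold: float = 0.6) -> bool:
    """ratings: (viewpoint_group, found_helpful) pairs from individual raters."""
    by_group: defaultdict[str, list[bool]] = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:
        return False  # no cross-viewpoint signal yet
    # Every viewpoint group must independently find the note helpful.
    return all(sum(votes) / len(votes) >= threshold for votes in by_group.values())

ratings = [("group_a", True), ("group_a", True), ("group_b", True), ("group_b", False)]
print(show_note(ratings))  # False: group_b approval is only 50%
```

The key design choice is that cross-group agreement, not raw vote count, gates visibility, so a note cannot be promoted by heavy voting from one side alone.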
Footnotes has been in testing since April. During that time, TikTok opened applications to users who wanted to contribute notes. Applicants had to be at least 18, have no recent community violations, and have used the platform for more than six months. Since then, roughly 80,000 users have joined the program.
The idea of community-driven fact-checking isn’t new. Twitter introduced its own version, originally called Birdwatch, back in 2020; it was later renamed Community Notes. Meta and YouTube have added their own approaches more recently. TikTok says Footnotes won’t replace its partnerships with third-party fact-checkers, but will expand on the work the company already does with dozens of outside organizations worldwide.
Parents Get More Control Over Teen Accounts
TikTok also rolled out updates to its Family Pairing controls, which allow parents to link their accounts with their teen’s. One of the new options lets parents block certain accounts from showing up on their teen’s feed or engaging with their posts. If an account is blocked this way, the teen won’t see that user or any of their content.
Parents will now get alerts when their child posts something publicly, whether it’s a video, story, or photo. These alerts are intended to give visibility into what’s happening without interfering with the teen’s posting ability.
There’s also a new view of a teen’s privacy choices. Parents can check whether a teen has allowed downloads of their videos or made their following list public. These updates mainly apply to 16- and 17-year-olds, who fall into TikTok’s transitional privacy range.
Screen Time and Digital Well-being
Beyond content and control updates, TikTok is developing a wellness feature aimed at helping users balance their screen time. One part of it, called Well-being Missions, offers short quizzes and flashcards focused on digital habits; users collect badges as they complete them.
Another part of this update involves a dedicated space for tools that support mental breaks. These include breathing exercises, calming sounds, and insights about time spent on the app. Together, the features form a new wellness section inside TikTok that users can turn to when they want to step back.