Meta has announced new parental control features designed to give parents more oversight of how teens interact with AI chatbots across its platforms, including Instagram. The update aims to promote safer online experiences amid growing global concerns about AI and teen safety.

As social media companies face increased scrutiny over teen mental health and online exposure, Meta’s latest move reflects a wider industry trend toward more responsible AI integration. Companies such as OpenAI and YouTube have recently introduced similar safety tools to protect younger users.

Starting early next year, parents will be able to block or limit teens’ interactions with AI characters and monitor the topics their teens discuss. Parents will also be able to turn off AI chats entirely, with the exception of Meta AI, which will remain available but engage only in age-appropriate discussions.

The new tools will first launch in English across the U.S., U.K., Canada, and Australia, with availability expanding to other regions later.

“We recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens,” said Adam Mosseri, Head of Instagram, and Alexandr Wang, Head of Meta AI. “We’re committed to providing them with tools that make things simpler as they think about new technology like AI.”

Meta has confirmed that AI experiences for teens will adhere to PG-13 content standards, excluding themes such as violence, nudity, and drug use. The company is also testing time limits and AI-based age verification to help enforce these restrictions.

These upcoming controls are part of Meta’s broader push to balance innovation with accountability, giving parents more confidence as AI becomes a growing part of online communication.
