  • Meta plans to roll out new parental controls around teens’ access to AI character chats on Instagram.
  • Parents will also get limited insights into what topics their teens discuss with chatbots.
  • The changes follow public outcry over leaked documents showing bots made romantic and inappropriate comments to children.

Meta announced that parents will be able to limit and block their teenagers from chatting with its AI characters on Instagram starting next year. The tech giant promised new supervision tools that offer guardians more visibility and control over the kinds of chatbot[1] interactions their kids can access.

So, while teens will still be able to use Meta’s general-purpose AI assistant, private chats with individual AI personalities, including those designed by other users, can be partially or entirely disabled by their parents.

Meta’s announcement follows complaints and regulatory probes, partly sparked by a leak of internal documents suggesting the company’s AI systems had engaged in “overly intimate” conversations with children, reportedly offered incorrect medical advice, and failed to filter out hate speech. These upcoming parental controls are likely part of Meta’s attempt to stem the tide of complaints and signal that it’s taking the problem seriously.

With the new controls, parents will not only be able to block access to specific AI characters, but will also get a summary of the topics their teens are discussing with chatbots. Full conversation logs won’t be available, but the idea is to give parents enough context to spot potentially concerning trends or topics. That’s assuming, of course, that the tools work as intended and that teens don’t find clever ways to work around them.

The general Meta AI assistant will remain available, presumably for homework help, factual questions, and basic support tasks. Meta appears to be betting that this middle ground, which restricts roleplay-style character chats while maintaining access to a more utility-focused assistant, will satisfy both anxious parents and product managers who want the feature to stick around.

Safe chats

Chatbots are no longer simply answering questions; they’re personalized conversational partners that, for better or worse, people get emotionally attached to. Meta wants to drag the risks of engaging with such AI chatbots into the open, or at least give parents a flashlight to see what’s happening.

The ability to monitor conversation topics without reading every message is an attempt to balance teen privacy with parental oversight. It’s a fine line, but one that reflects how rapidly AI has changed the nature of online conversation, especially for younger users.

For the average family, the changes may offer a bit of relief, but they also serve as a reminder. Your kid’s phone isn’t just a window to content anymore. It’s a portal to interactive “characters” that they may treat as more real than they should.

But keeping such interactions safe will take vigilance from parents and developers alike, and Meta and its peers will face plenty of blowback if they fail to do so.




References

  1. ^ chatbot (www.techradar.com)
  2. ^ Follow TechRadar on Google News (news.google.com)
  3. ^ add us as a preferred source (www.google.com)
  4. ^ follow TechRadar on TikTok (www.tiktok.com)
  5. ^ WhatsApp (whatsapp.com)
