Meta has started changing[1] the way its artificial intelligence chatbots interact with teenagers, after weeks of mounting criticism from lawmakers and child-safety groups. The company says the systems will no longer engage with young users on subjects tied to self-harm, suicide, eating disorders, or conversations that could be seen as romantic in nature. When those topics appear, the bots will now direct teens toward outside support services instead of generating replies themselves.

Alongside that shift, Meta is also cutting back which AI characters young people can access across Facebook and Instagram. Rather than letting teens try the full spread of user-made chatbots, which has included adult-themed personalities, the firm will restrict them to characters designed around schoolwork, hobbies, or creative activities. For now, the company describes the measures as temporary while it works on a more permanent set of rules.

Why the Policy Is Changing

The move follows a Reuters report that raised alarms over an internal Meta document suggesting the chatbots could, under earlier guidelines, engage in romantic dialogue with minors[2]. The examples, which circulated widely, included language that appeared to blur the boundary between playful interaction and inappropriate intimacy. Meta later said those instructions were out of line with its standards and have been removed, but the fallout has continued.

The report quickly drew attention from Washington. Senator Josh Hawley announced a formal investigation, while a coalition of more than forty state attorneys general wrote to AI firms, stressing that child safety had to be treated as a baseline obligation rather than an afterthought. Advocacy groups echoed those calls. Common Sense Media, for example, urged that no child under eighteen use Meta’s chatbot tools until broader protections are in place, describing the risks as too serious to be overlooked.

What Comes Next for Meta

Meta has not said how long the interim measures will stay in place. The rollout has begun in English-speaking countries and will continue in the coming weeks. Company officials acknowledged that earlier policies had permitted conversations which, though once considered manageable, carried risks once deployed more widely. Meta now says additional safeguards will be added as part of a longer-term safety overhaul.

Risks Beyond Teen Chatbots

Concerns have not been limited to teenage use. A separate Reuters investigation found that some user-made chatbots modeled on well-known celebrities were able to produce sexualized content, including generated images in compromising scenarios. Meta said such outputs breach its rules, which ban impersonations of public figures in intimate or explicit contexts, but admitted that enforcement remains an ongoing challenge.

With regulators pressing harder and public attention fixed on how AI interacts with young people, Meta faces growing pressure to demonstrate that its systems can be kept safe. The latest restrictions are a step in that direction, though many critics argue that partial fixes will not be enough, and that the company may need to rebuild its safeguards from the ground up.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next:

• Families Lose Billions in Remittance Fees Every Year, Stablecoins Could Change That[3]

• AI Search Tools Rarely Agree on Brands, Study Finds[4]

By admin