Alongside that shift, Meta is also cutting back which AI characters young people can access across Facebook and Instagram. Rather than letting teens browse the full spread of user-made chatbots, a lineup that has included adult-themed personalities, the company will limit them to characters built around schoolwork, hobbies, and creative activities. For now, Meta describes the measures as temporary while it works on a more permanent set of rules.
Why the Policy Is Changing
The report quickly drew attention from Washington. Senator Josh Hawley announced a formal investigation, while a coalition of more than forty state attorneys general wrote to AI firms, stressing that child safety had to be treated as a baseline obligation rather than an afterthought. Advocacy groups echoed those calls. Common Sense Media, for example, urged that no child under eighteen use Meta’s chatbot tools until broader protections are in place, describing the risks as too serious to be overlooked.
Risks Beyond Teen Chatbots
What Comes Next for Meta
With regulators pressing harder and public attention fixed on how AI interacts with young people, Meta faces growing pressure to demonstrate that its systems can be kept safe. The latest restrictions are a step in that direction, though many critics argue that partial fixes will not be enough, and that the company may need to rebuild its safeguards from the ground up.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
Read next:
• Families Lose Billions in Remittance Fees Every Year, Stablecoins Could Change That [3]
References
- ^ changing (techcrunch.com)
- ^ Meta document suggesting the chatbots could, under earlier guidelines, engage in romantic dialogue with minors (www.digitalinformationworld.com)
- ^ Families Lose Billions in Remittance Fees Every Year, Stablecoins Could Change That (www.digitalinformationworld.com)
- ^ AI Search Tools Rarely Agree on Brands, Study Finds (www.digitalinformationworld.com)