
Meta Platforms said on Friday it is introducing new safeguards for teenagers using its artificial intelligence products, including limits that keep its chatbots from engaging minors in flirty conversations or discussions of self-harm and suicide.
Company spokesperson Andy Stone said the temporary steps include restricting teen access to certain AI characters while Meta works on longer-term protections to ensure “safe, age-appropriate AI experiences.”
Stone added that the safeguards are already being rolled out and “will be adjusted over time” as the company refines its systems.
The move comes after a Reuters investigation revealed Meta allowed dozens of celebrity-inspired chatbots, including those resembling minors, to engage in sexually suggestive conversations. The findings triggered sharp criticism from lawmakers and safety advocates.
Earlier this month, U.S. Senator Josh Hawley launched a probe into Meta’s AI policies, demanding documents about rules that permitted chatbots to flirt and role-play romantically with children. Lawmakers from both parties also raised concerns after an internal Meta document — later confirmed as authentic — outlined such permissive guidelines.
Meta has since said those examples were “erroneous and inconsistent” with company policies, and confirmed the document was revised following Reuters’ inquiries.