An internal Meta document that outlines how its AI chatbots handle sensitive topics has surfaced online. The file, titled “GenAI: Content Risk Standards,” contains more than 200 pages of policy notes and examples. It covers the company’s Meta AI assistant and character-based bots on Facebook, Instagram and WhatsApp.

Meta confirmed the document is genuine. The company said it began changing parts of the guidance after questions were raised in early August. It also said some passages, including those on romantic chats with minors, were written in error and have been removed. Meta has not released an updated version and admitted that enforcement has been uneven.

Chat rules for minors

The leaked standards allowed AI chatbots to respond to romantic or flirtatious messages from teenagers, as long as the content was not sexually explicit. They also permitted descriptions of a child’s attractiveness in general terms, while barring language that described anyone under 13 as sexually desirable.

Meta now says it bars any romantic or flirtatious interactions with children. The company says its chatbots can be used by people aged 13 and older.

Content on race and false information

The file shows that the system could produce statements demeaning people on the basis of a protected characteristic, so long as the language stopped short of dehumanizing them. The examples included claims linking intelligence to race.

The rules also allowed fabricated content, provided it was explicitly labelled as untrue. This applied even to medical claims about public figures. The standards instructed bots not to promote illegal activity and not to give definitive legal, medical or financial advice.

Celebrities and sexualised images

For sexual requests involving public figures, the guidance prohibited fully nude images or unrealistic depictions. In some cases, a request for partial nudity could be met with a safe substitute image, such as a body covered by objects or clothing.

Violence in generated images

The rules permitted depictions of violence that stopped short of gore or death, including fights involving adults or children and threats made with weapons. They blocked imagery showing realistic wounds, dismemberment or other graphic scenes.

Approval and policy review

The standards were approved by senior staff across Meta’s legal, policy and engineering teams, including its chief ethicist. The company says it is reviewing and updating them. It has also brought in outside advisers to address concerns about political or ideological bias.

Related incident and political reaction

In a separate case, a man reportedly struck up a flirtatious exchange with a Meta chatbot, believed it was a real person and travelled to meet it. He was injured in an accident during the trip and later died.

US lawmakers have since called for more oversight of the company’s AI policies. Child safety groups are also pushing for Meta to publish the current rules in full.

Ongoing concerns

Meta has faced criticism for its handling of teen users in the past. Campaigners point to platform features that encourage social comparison, as well as earlier research into emotional targeting for advertising.

The company lobbied against the Kids Online Safety Act, which failed to pass Congress at the end of 2024. Lawmakers reintroduced the bill in May this year.

Industry researchers say many teens use AI companion apps and may form emotional attachments to them. Critics warn that this can affect social development, especially when bots are designed to build ongoing personal conversations.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

