Google’s Gemini AI has been flagged as “high risk” for children and teenagers by Common Sense Media, a nonprofit dedicated to online safety. The group’s latest review warns that the chatbot, despite some safeguards, still puts young users at risk of encountering harmful or inappropriate content.

The assessment, published Friday, noted that Gemini does tell kids it is a computer, which helps reduce emotional dependence. But according to the nonprofit, the chatbot can still generate responses about sex, drugs, alcohol, and mental health, raising alarms about whether the product is safe for younger audiences.

A Thin Safety Net

Common Sense Media found little difference between Gemini’s kid-focused modes and the adult version. Its “Under 13” and “Teen Experience” options, the report said, looked nearly identical to the standard product, with only light filtering in place. That design, critics argue, fails to meet the needs of children at different stages of development.

“An AI platform for kids should meet them where they are, not just modify adult systems,” said Robbie Torney, senior director of AI programs at Common Sense Media.

The report arrives as concern grows over AI’s impact on teens. OpenAI is facing a wrongful death lawsuit after ChatGPT allegedly gave harmful advice to a 16-year-old boy before his death. Character.AI has also been sued over a similar case, underscoring the risks of unsupervised AI interactions.

Google Pushes Back

Google responded by stressing that protections are already in place for users under 18. The company said Gemini undergoes “red-teaming” and external reviews, though it admitted that “some responses weren’t working as intended” and that more safeguards are being rolled out.

It also suggested that parts of the criticism may have been based on features unavailable to minors and noted that Common Sense Media did not disclose the exact prompts used in its evaluation.

The timing could be critical. Reports suggest Apple is considering Gemini to power a revamped Siri next year, a move that could bring the system to millions of teenagers if tougher guardrails aren’t introduced.

This isn’t the first time Common Sense has assessed AI platforms. Meta AI and Character.AI were previously rated “unacceptable.” Perplexity was flagged as “high risk,” ChatGPT landed in the “moderate risk” category, and Claude, intended for adults, was labeled minimal risk.
