A new assessment from the nonprofit Common Sense Media[1] has flagged Google’s Gemini AI system as high risk for children and teenagers. The report, published on Friday, examined how the chatbot functions across different age tiers and found that its protections were limited.

The study noted that Gemini’s versions for under-13s and teens were essentially its main adult product with added filters. Common Sense said a safer approach would be to build systems for younger audiences from the ground up rather than modifying adult models.

Concerns focused on the chatbot’s ability to generate material that children may not be ready for, including references to sex, drugs, and alcohol, as well as mental health advice that could be unsafe for young users. Mental health was singled out as a particular area of risk, given recent cases linking chatbots to teen suicides. In the past year, legal action has been taken against OpenAI and Character.AI after reports of teenagers dying by suicide while interacting with their services.

The timing of the report is significant. Leaks have suggested Apple may adopt Gemini to power its next version of Siri, expected next year. If confirmed, that move could bring the technology to millions of new users, including many teenagers, unless additional protections are put in place.

The evaluation also said Gemini does not account for differences in how younger and older children process information. Both the child and teen versions of the tool were given the same high-risk rating.

Google responded by pointing to its existing safeguards for users under 18, which include policies, testing with external experts, and updates designed to stop harmful replies. The company accepted that some answers had fallen short of expectations and said extra protections had since been added. It also questioned parts of the Common Sense review, suggesting the tests may have involved features that are not available to younger users.

Common Sense has carried out similar assessments on other major AI services. Meta AI and Character.AI were classed as unacceptable risks, Perplexity and Gemini were placed in the high-risk category, ChatGPT was rated moderate, and Anthropic’s Claude, which is built for adults, was rated as minimal risk.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: Anthropic Settles Author Lawsuit With $1.5 Billion Deal[4]

References

  1. ^ Common Sense Media (www.commonsensemedia.org)
  4. ^ Anthropic Settles Author Lawsuit With $1.5 Billion Deal (www.digitalinformationworld.com)

By admin