
The U.S. Federal Trade Commission (FTC) has launched a sweeping investigation into AI chatbots, raising alarms about the risks these companion tools may pose to children and teens. The FTC issued legal orders to seven major technology companies: Alphabet (Google), Meta Platforms, Instagram, OpenAI, Snap, Character Technologies, and xAI, demanding full transparency on how they protect young users.
The probe follows disturbing reports and lawsuits alleging that some AI chatbots contributed to emotional distress and, in extreme cases, teen suicides. Companion chatbots, designed to simulate human conversation, can foster deep emotional attachments that leave adolescents more vulnerable when dealing with loneliness, anxiety, or mental health challenges.
Scope of the FTC AI Chatbot Investigation
Under its authority in Section 6(b) of the FTC Act, the commission is requiring these companies to provide extensive details about their chatbot systems. The FTC wants to know how the companies test and monitor for negative impacts, particularly on minors. It also seeks a breakdown of the safeguards in place to prevent harm; the disclosures given to parents and users about risks, monetization, and data practices; and whether these tools comply with privacy laws such as the Children's Online Privacy Protection Act (COPPA).
Company Responses to Growing Scrutiny
Several companies have already started adjusting their chatbot offerings under the pressure of public and regulatory attention. OpenAI announced plans to roll out parental controls, enabling parents to link to teen accounts and receive notifications when signs of emotional distress are detected. Meta has begun blocking its chatbots from discussing sensitive topics such as self-harm, suicide, or inappropriate romantic content, instead directing young users to professional resources.
Moreover, Character.ai has introduced safety filters, disclaimers, and a dedicated under-18 mode to reduce exposure to harmful content. Meanwhile, Snap faced a referral to the Department of Justice earlier in 2025 for its My AI chatbot, which regulators flagged as posing potential risks to younger audiences.
The Need for AI Chatbot Regulation
This inquiry could result in stricter industry-wide guidelines on data handling, risk assessment, and safe deployment of AI chatbots. Lawmakers, including Reps. Brett Guthrie and Frank Pallone, have already voiced support for the FTC’s action, calling for bipartisan legislation to create a safer digital environment for children.
The inquiry matters even for countries like Pakistan, where AI chatbot use is on the rise, particularly among young people seeking both information and emotional support. A recent Kaspersky report[1] found that children's interest in AI tools has doubled globally over the past year: Character.AI made it into the top 20 most-used apps, and over 7.5% of all search queries among young users now relate to chatbot services such as ChatGPT, Gemini, and Character.AI, up from just 3.19% the previous year.
References
1. Kaspersky report (www.kaspersky.com)