The US Federal Trade Commission has opened[1] an inquiry into how major technology companies manage the risks of AI chatbots, focusing on their impact on younger users.

Alphabet, Meta, OpenAI, Snap, xAI, Instagram, and Character Technologies have been ordered to provide detailed information about their chatbot operations. The request covers product design, safety testing, data handling, monetization practices, and the way chatbot characters are developed. The companies must respond by late September.

Why the Investigation Was Launched

The move follows reports of troubling exchanges between AI systems and children. Meta’s chatbots were accused of allowing inappropriate conversations with minors.[2] Snapchat’s “My AI” drew criticism for its interactions with younger users. xAI’s recently launched chatbot companions also raised concerns about how easily people may form personal attachments to these digital agents.

In one case, the parents of a teenager filed a lawsuit claiming that ChatGPT provided harmful guidance before the child’s death.[3] Situations like this have intensified pressure on regulators to act before the technology spreads further into daily life.

What Regulators Are Looking For

The FTC is seeking clarity on how companies measure and limit risks, particularly when chatbots act as companions. It wants to know whether safeguards are built into these products, whether companies restrict use by minors, and how users and parents are informed about potential dangers. The order also asks for details on how inputs and outputs are processed and how safety evaluations are conducted.

Although the inquiry is not tied to a specific enforcement action, the Commission has signaled that the information will guide future decisions on consumer protection and child safety.

Balancing Regulation and Innovation

Officials have said that protecting children online is a priority, while also stressing the need for the United States to maintain leadership in AI development. The outcome of the study may shape how those two goals are balanced.

The investigation recalls earlier debates over social media oversight, when warnings about youth safety emerged long before strong rules were introduced. The findings of this review could influence how chatbot providers are required to manage their systems in the years ahead.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.


By admin