The US Federal Trade Commission announced on Thursday that it has launched an investigation into AI chatbots that act as digital companions, focusing on potential risks to children and teenagers.
The consumer protection agency issued orders to seven companies, including tech giants Alphabet, Meta, OpenAI and Snap, seeking information on how they monitor and address negative impacts from chatbots designed to simulate human interactions.
Protecting children online is “a top priority” for the FTC, said Chairman Andrew Ferguson, emphasizing the need to balance child safety with maintaining US leadership in artificial intelligence innovation.

The investigation is aimed at chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or trusted companions.
Regulators expressed concern that children and adolescents may be particularly vulnerable to forming relationships with these AI systems.
The FTC is using its broad investigative powers to examine how companies monetize user engagement, develop chat personas, and measure potential harm.
The agency also wants to know what steps companies are taking to limit children’s access and comply with existing privacy laws that protect minors online.
Companies receiving orders also include Character.AI and Elon Musk’s xAI Corp, among others working on consumer-facing AI chatbots.

The investigation will examine how these platforms process personal information from users’ conversations and implement age restrictions.
The Commission voted unanimously to launch the study, which does not have a specific law enforcement purpose but could inform future legislation.
The probe comes as AI chatbots have become increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, especially young people.

Last month, the parents of Adam Raine, a teenager who died by suicide in April at the age of 16, filed a lawsuit against OpenAI, accusing ChatGPT of giving their son detailed instructions on how to end his life.
Shortly after the lawsuit was filed, OpenAI announced that it was working on corrective measures for its world-leading chatbot.
The San Francisco-based company said it had noticed that in prolonged exchanges, ChatGPT does not always consistently suggest contacting a mental health service when a user mentions having suicidal thoughts.
(Those in distress or having suicidal thoughts are encouraged to seek help and counseling from a suicide-prevention helpline.)
Published – September 12, 2025 at 10:24 AM