US FTC Launches Probe into AI Chatbots Over Child Safety Concerns
By Ainvest
Friday, Sep 12, 2025, 12:58 am ET
The US Federal Trade Commission has launched an inquiry into AI chatbots that simulate human relationships, focusing on potential risks to children and teenagers. The inquiry targets seven companies, including Alphabet, Meta, OpenAI, and Snap, to examine how they monitor and address negative impacts from chatbots. The FTC is concerned that children and teens may be vulnerable to forming relationships with these AI systems. The investigation will examine how these platforms handle personal information from user conversations and enforce age restrictions.
The US Federal Trade Commission (FTC) has initiated a comprehensive inquiry into the safety and impact of AI chatbots, specifically focusing on how these technologies affect children and teenagers. The investigation, announced on September 10, 2025, targets seven major companies: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI. This move comes amid growing concerns about the potential psychological harm that AI chatbots can inflict on younger users.
The FTC has requested detailed information from these companies on various aspects of their chatbot operations, including how they evaluate the safety of their chatbots, limit use by children and teens and mitigate potential negative effects on them, and apprise users and parents of associated risks. The inquiry also seeks insights into how companies process user inputs, develop and approve characters, measure and mitigate negative impacts, and disclose information to users and parents [1].
The FTC's concern is that AI chatbots can mimic human characteristics, potentially leading children and teens to form relationships with these systems. This has been highlighted by recent cases, such as a lawsuit filed by the parents of a 16-year-old boy who allegedly developed a harmful psychological dependence on OpenAI's ChatGPT-4o [2]. The inquiry aims to understand the steps companies are taking to protect children and to ensure that their AI chatbots are safe and responsible.
In response to the FTC's inquiry, some companies have already implemented measures to enhance safety. For instance, OpenAI and Meta have announced changes to how their chatbots handle sensitive topics like suicide and mental distress, and said they will give parents more control over their teens' interactions with the technology [3].
The FTC's investigation is part of a broader trend of regulatory scrutiny of AI chatbots. States are also taking action, with the California State Assembly passing SB 243 to require chatbot operators to implement safeguards and provide families with legal recourse [2].
The FTC has not announced a timeline for completing its inquiry. Nevertheless, the investigation is a significant step toward ensuring that AI chatbots are developed and used responsibly, particularly when it comes to protecting children and teenagers.
References:
[1]: https://www.techpolicy.press/ftc-opens-inquiry-ai-chatbots-and-their-impact-on-children/
[2]: https://abcnews.go.com/Business/wireStory/ftc-launces-inquiry-ai-chatbots-acting-companions-effects-125487145
[3]: https://www.forbes.com/sites/tylerroush/2025/09/11/ftc-launches-investigation-into-big-tech-over-ai-chatbot-safety-for-children-meta-openai-musks-xai-among-targets/
