US FTC Launches Probe into AI Chatbots Over Child Safety Concerns
By Ainvest
Friday, September 12, 2025, 12:58 am ET · 1 min read
The US Federal Trade Commission has launched an inquiry into AI chatbots that simulate human relationships, focusing on potential risks to children and teenagers. The inquiry targets seven companies, including Alphabet, Meta, OpenAI, and Snap, to examine how they monitor and address negative impacts from chatbots. The FTC is concerned that children and teens may be vulnerable to forming relationships with these AI systems. The investigation will examine how these platforms handle personal information from user conversations and enforce age restrictions.
The US Federal Trade Commission (FTC) has initiated a comprehensive inquiry into the safety and impact of AI chatbots, specifically focusing on how these technologies affect children and teenagers. The investigation, announced on September 10, 2025, targets seven major companies: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI. This move comes amidst growing concerns about the potential psychological harm that AI chatbots can inflict on younger users.
The FTC has requested detailed information from these companies on various aspects of their chatbot operations, including how they evaluate the safety of their chatbots, limit their use by and potential negative effects on children and teens, and apprise users and parents of associated risks. The inquiry also seeks insights into how companies process user inputs, develop and approve characters, measure and mitigate negative impacts, and disclose information to users and parents [1].
The FTC's concern is that AI chatbots can mimic human characteristics, potentially leading children and teens to form relationships with these systems. This has been highlighted by recent cases, such as a lawsuit filed by the parents of a 16-year-old boy who allegedly developed a harmful psychological dependence on OpenAI's ChatGPT-4o [2]. The inquiry aims to understand the steps companies are taking to protect children and ensure that their AI chatbots are safe and responsible.
In response to the FTC's inquiry, some companies have already implemented measures to enhance safety. For instance, OpenAI and Meta have announced changes to how their chatbots handle sensitive topics like suicide and mental distress, and provide parents with more control over their teens' interactions with the technology [3].
The FTC's investigation is part of a broader trend of regulatory scrutiny of AI chatbots. States are also taking action, with the California State Assembly passing SB 243 to require chatbot operators to implement safeguards and provide families with legal recourse [2].
The FTC has not announced a timeline for when its inquiry will be completed. However, the investigation is a significant step in ensuring that AI chatbots are developed and used responsibly, particularly when it comes to protecting children and teenagers.
References:
[1]: https://www.techpolicy.press/ftc-opens-inquiry-ai-chatbots-and-their-impact-on-children/
[2]: https://abcnews.go.com/Business/wireStory/ftc-launces-inquiry-ai-chatbots-acting-companions-effects-125487145
[3]: https://www.forbes.com/sites/tylerroush/2025/09/11/ftc-launches-investigation-into-big-tech-over-ai-chatbot-safety-for-children-meta-openai-musks-xai-among-targets/

Editorial disclosure and AI transparency: Ainvest News uses advanced Large Language Model (LLM) technology to synthesize and analyze real-time market data. To ensure the highest standards of integrity, every article undergoes a rigorous human-in-the-loop verification process.
While AI assists with data processing and initial drafting, a professional Ainvest editorial staff member independently reviews, verifies, and approves all content to ensure its accuracy and compliance with the editorial standards of Ainvest Fintech Inc. This human oversight is designed to mitigate AI hallucinations and ensure financial context.
Investment disclaimer: This content is provided for informational purposes only and does not constitute professional investment, legal, or financial advice. Markets carry inherent risks. Users are advised to conduct independent research or consult a certified financial advisor before making any decisions. Ainvest Fintech Inc. disclaims all liability for actions taken based on this information.
