Texas AG Investigates Meta, Character.AI Over AI Mental Health Claims

Generated by AI | AgentTicker Buzz
Monday, Aug 18, 2025, 3:02 pm ET · 1 min read

Summary

- Texas AG investigates Meta and Character.AI for falsely marketing AI chatbots as mental health tools, citing inappropriate interactions with children.

- Meta says its AI carries disclaimers, but the AG raises concerns that data collection for ads and algorithm training may violate privacy protections.

- Both companies face criticism for inadequate age verification, with Character.AI's CEO admitting that his 6-year-old daughter uses the platform.

- Investigation highlights risks of unregulated AI exploiting vulnerable users, potentially shaping future child protection and tech governance policies.

The Texas Attorney General has launched an investigation into Meta AI Studio and Character.AI, alleging that the companies may be engaging in deceptive trade practices by misleadingly promoting their platforms as mental health tools. The investigation follows reports of inappropriate interactions between Meta's AI chatbots and children, including flirtatious behavior. The Attorney General's office claims that the AI characters created by Meta and Character.AI are falsely presented as professional therapeutic tools despite lacking proper medical qualifications or regulation.

Meta has responded that its AI is clearly labeled and includes disclaimers to help users understand its limitations. However, the Attorney General's office has raised concerns about the privacy implications of these interactions, noting that user data is recorded, tracked, and used for targeted advertising and algorithm development. Both Meta and Character.AI have policies that allow user data to be collected and shared with third parties, including for advertising purposes.

The investigation comes at a time when there is growing concern about the impact of AI on children's mental health. The Attorney General's office has highlighted the potential for AI platforms to exploit vulnerable users, particularly children, by presenting themselves as sources of emotional support. The office has also expressed concerns about the lack of regulation and oversight in the AI industry, which could lead to further exploitation and harm.

Meta and Character.AI have both stated that their services are not intended for children under the age of 13. However, both companies have faced criticism for failing to adequately monitor and regulate underage users. Character.AI's CEO has even acknowledged that his 6-year-old daughter uses the platform's chatbots. This has raised questions about the effectiveness of age verification measures and the potential for AI platforms to be used by younger children without parental supervision.

The investigation is part of a broader effort to protect children from the potential harms of AI and other digital technologies. The Attorney General's office has issued civil investigative demands to both companies, requiring them to provide documents, data, and testimony to determine whether they have violated Texas consumer protection laws. The outcome of the investigation could have significant implications for the regulation of AI and other digital technologies, as well as for the protection of children's mental health and privacy.
