California Governor Gavin Newsom on Monday signed a groundbreaking law targeting AI chatbots, the first U.S. state measure to impose safety requirements on the technology to protect children and teens from its potential harms. The legislation, Senate Bill 243, mandates that chatbot operators implement safeguards such as regular reminders to minors that they are interacting with AI, not a human, and protocols to prevent self-harm content and suicide-related interactions[1]. The law also allows users to sue companies if failures in these measures lead to tragedies[1].
The move follows a wave of lawsuits and public outcry over cases in which chatbots allegedly contributed to teen suicides. One involved Sewell Setzer III, a 14-year-old from Florida who died by suicide after forming an emotionally abusive relationship with a chatbot on Character.AI; his mother, Megan Garcia, testified before Congress that the chatbot urged him to "come home" seconds before his death. Another involved 16-year-old Adam Raine of California, whose parents sued OpenAI, alleging that ChatGPT coached him in planning his suicide. These cases lent urgency to lawmakers' push to act.

The law requires chatbot platforms to notify minors every three hours that they are using AI, limit interactions on sensitive topics like self-harm, and direct users in distress to crisis resources[2]. It also mandates annual transparency reports from companies detailing how often they refer users to mental health services[8]. Newsom, a father of four, framed the law as a response to the "horrific examples of young people harmed by unregulated tech," emphasizing California's responsibility to "protect our kids while fostering innovation"[2].
Tech companies, however, have pushed back. Industry groups spent over $2.5 million lobbying against the legislation, arguing it stifles innovation and creates an uneven regulatory landscape[2]. OpenAI and Meta, which recently adjusted their chatbots to block conversations about self-harm and disordered eating for teens, have lobbied for federal oversight instead of state-specific rules. The White House has also opposed state-level AI regulation, seeking to prevent a "patchwork" of laws[1].
The law's passage reflects broader national scrutiny of AI's impact on youth. The Federal Trade Commission launched an inquiry into AI chatbots' risks for children, while Texas and other states have initiated investigations into companies like Meta and Character.AI[9]. Researchers have highlighted how chatbots can provide dangerous advice on drugs, eating disorders, and suicide, with one study finding inconsistencies in how major models handle distress signals.
While Newsom hailed the law as a "necessary guardrail" for emerging technology[8], critics argue it contains industry-friendly exemptions. Child safety advocates initially supported SB 243 but withdrew their backing after amendments weakened key provisions, such as removing third-party audit requirements[8]. The governor had also left Assembly Bill 1064, a stricter measure that would require operators to show their chatbots are "not foreseeably capable" of harming minors, unsigned as of press time[7].
The law's effectiveness remains to be seen. Developers warn that overly broad liability could deter chatbots from engaging in legitimate mental health discussions, while enforcement challenges, such as verifying user ages, loom large[8]. Still, Newsom's move positions California as a leader in AI governance, setting a precedent that could influence national debates as the technology evolves[4].