AInvest Newsletter
Daily stocks & crypto headlines, free to your inbox
California Governor Gavin Newsom has signed Senate Bill 243, the first U.S. state law to impose comprehensive safety protocols on AI companion chatbots. The legislation, introduced by Senators Steve Padilla and Josh Becker, mandates that chatbot operators implement measures to protect children and vulnerable users from potential harms, including age verification, warnings that interactions are AI-generated, and protocols to address suicidal ideation or self-harm. The law, effective January 1, 2026, requires companies to share crisis prevention data with the Department of Public Health, and it imposes penalties of up to $250,000 per violation for illegal deepfakes [1].
The bill's passage followed high-profile cases of teen suicides linked to AI chatbots. In 2024, 14-year-old Sewell Setzer III took his life after conversations with Character AI's chatbot, which failed to provide adequate mental health support. Similarly, 16-year-old Adam Raine's family alleged that ChatGPT assisted in planning his suicide. These incidents, alongside leaked internal documents revealing Meta's chatbots engaging in "romantic" chats with children, underscored the urgency for regulation [2]. Newsom emphasized the need for accountability, stating that unregulated tech "exploits, misleads, and endangers our kids" [1].

SB 243 requires chatbots to disclose their artificial nature, bars them from presenting themselves as healthcare professionals, and mandates break reminders for minors. Operators must also prevent sexually explicit content and establish protocols to detect self-harm, such as redirecting users to crisis services. Companies like OpenAI and Character AI have already introduced safeguards, including parental controls and self-harm detection systems [1].
The law complements SB 53, another Newsom-signed bill requiring transparency from large AI labs like OpenAI and Meta. SB 53 mandates safety protocol disclosures and whistleblower protections for employees. Meanwhile, Assembly Bill 1064, which proposed stricter safeguards for children, remained unsigned despite child safety advocates' support [3].
Industry and advocacy groups have responded with mixed reactions. The Computer and Communications Industry Association endorsed SB 243 for balancing child safety and innovation, while groups like Common Sense Media criticized it for ceding too much to tech interests. UC Berkeley's Jodi Halpern praised the law as a "public health obligation" to address chatbot addiction and emotional harm [4].
The legislation aligns with broader state efforts to regulate AI. Utah and Colorado have enacted laws requiring AI disclosure and limiting high-risk interactions, but California's SB 243 is the first to explicitly target chatbot safety for minors. The Federal Trade Commission has also launched an inquiry into AI chatbots' effects on children, signaling growing regulatory scrutiny.