As of January 1, 2026, California has become the first state in the nation to mandate specific safety guardrails for AI companion chatbots. Senate Bill 243, signed by Governor Gavin Newsom in October 2025, establishes a new regulatory baseline for the entire AI companion industry—with real teeth for enforcement.
The law targets AI systems that provide "adaptive, human-like responses" and are "capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions."
What the Law Requires
SB 243 implements what legislators call "common-sense guardrails" for companion chatbots. The requirements are comprehensive, spanning AI disclosure, crisis prevention protocols, protections for minors, and enforcement provisions, each covered in the sections below.
Why This Law Exists
The legislation is a direct response to mounting public health concerns and several high-profile incidents involving teen self-harm and suicide allegedly linked to interactions with conversational AI.
"These common-sense guardrails will protect minors while allowing the AI companion industry to continue innovating responsibly."
— Senator Steve Padilla, SB 243 author

The most notable case involved 14-year-old Sewell Setzer of Florida, who died by suicide after forming an emotional relationship with a Character.AI chatbot. The AI was reportedly incapable of recognizing distress or connecting him to help—exactly the kind of failure SB 243 aims to prevent.
Legislative Timeline
Authored by Senator Steve Padilla, SB 243 moved through the Legislature in 2025, was signed by Governor Newsom in October 2025, and took effect on January 1, 2026.
Private Right of Action
Unlike many tech regulations that rely solely on government enforcement, SB 243 includes a private right of action—meaning individuals can sue directly:
A person who suffers injury as a result of a violation may bring a civil action to recover damages equal to the greater of actual damages or $1,000 per violation. This creates significant financial exposure for non-compliant companies.
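To put that exposure in rough terms, here is a back-of-the-envelope sketch in Python. The user count, the assumption that each affected user represents one violation, and the function name are all hypothetical illustrations, not figures from any real case.

```python
# Hypothetical illustration of statutory exposure under SB 243's private right
# of action: recovery is the greater of actual damages or $1,000 per violation.

STATUTORY_MINIMUM_PER_VIOLATION = 1_000  # dollars

def statutory_exposure(actual_damages: float, violation_count: int) -> float:
    """Return the greater of actual damages or $1,000 per violation."""
    return max(actual_damages, STATUTORY_MINIMUM_PER_VIOLATION * violation_count)

# Example: 10,000 affected users, each treated as a single violation, with no
# provable actual damages still yields a $10,000,000 floor.
print(statutory_exposure(actual_damages=0, violation_count=10_000))  # 10000000
```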
Who's Affected
The law defines "companion chatbot" broadly as any AI system that:
- Has a natural language interface
- Provides adaptive, human-like responses to user inputs
- Is capable of meeting a user's social needs
- Exhibits anthropomorphic features
- Can sustain a relationship across multiple interactions
This definition covers major players like Character.AI, Replika, Nomi, and Solm8. Notably, SB 243 exempts bots used purely for customer service or business operations, in-game characters limited to game-related conversation, and standalone voice assistants, focusing squarely on AI that simulates human intimacy.
Crisis Prevention Requirements
Perhaps the most significant requirement is the crisis prevention protocol. Operators must maintain systems that:
- Detect when users express suicidal ideation
- Prevent the chatbot from producing suicide or self-harm content
- Provide notifications that refer at-risk users to crisis service providers
- Connect users to suicide hotlines or crisis text lines
This directly addresses the failures alleged in the Character.AI lawsuits, where chatbots allegedly failed to recognize distress signals and even engaged in concerning roleplay scenarios.
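As a deliberately oversimplified illustration of what such a protocol involves, here is a Python sketch. The pattern list, resource wording, data structure, and function names are assumptions for the example; a real operator would rely on trained classifiers and clinically reviewed protocols rather than keyword matching.

```python
# Minimal, illustrative crisis-prevention check: flag possible suicidal
# ideation and attach a crisis-resource referral. Not a production system.

import re
from dataclasses import dataclass
from typing import Optional

# Tiny hypothetical sample of distress phrasing; production systems use far
# broader, vetted detection rather than a hard-coded list.
DISTRESS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid\w*",
    r"\bself[- ]harm\b",
]

CRISIS_REFERRAL = (
    "It sounds like you're going through something really painful. "
    "You can call or text 988 to reach the 988 Suicide & Crisis Lifeline, "
    "or text HOME to 741741 to reach the Crisis Text Line."
)

@dataclass
class SafetyCheck:
    flagged: bool            # True if the message matched a distress pattern
    referral: Optional[str]  # Crisis-resource notification to surface, if any

def check_message(user_message: str) -> SafetyCheck:
    """Flag possible suicidal ideation and attach a crisis referral."""
    text = user_message.lower()
    if any(re.search(pattern, text) for pattern in DISTRESS_PATTERNS):
        return SafetyCheck(flagged=True, referral=CRISIS_REFERRAL)
    return SafetyCheck(flagged=False, referral=None)

if __name__ == "__main__":
    result = check_message("I don't see the point anymore. I want to end my life.")
    if result.flagged:
        # Per SB 243, the user sees a crisis referral rather than ordinary
        # chatbot output, and self-harm content generation is blocked.
        print(result.referral)
```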
What This Means for Solm8
Solm8 was designed with safety as a core principle from day one. Our platform already includes:
- Age verification: 18+ only, with verification at signup
- AI disclosure: Clear messaging that users are talking to AI
- Crisis detection: Automatic detection of distress language with crisis resource referrals
- Content guardrails: Sophisticated filtering that prevents harmful content while allowing adult conversations for verified users
SB 243 validates the approach we've taken. Responsible AI companionship is possible—and now it's the law in California.
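To make that list more concrete, here is a purely hypothetical sketch of how guardrails like these could be chained in a single message-handling pipeline. None of the names, thresholds, or steps describe Solm8's actual code; they are illustrative assumptions, and the crisis check is a stand-in for the detection sketch shown earlier.

```python
# Hypothetical pipeline chaining an age gate, crisis check, content filter,
# and periodic AI disclosure. Illustrative only; every name is made up.

from dataclasses import dataclass
from typing import Optional

AI_DISCLOSURE = "Reminder: you're chatting with an AI companion, not a human."
CRISIS_REFERRAL = (
    "If you're struggling, you can call or text 988 to reach the "
    "988 Suicide & Crisis Lifeline."
)

@dataclass
class User:
    age_verified: bool              # 18+ verification completed at signup
    messages_since_disclosure: int  # counter for periodic AI disclosure

def crisis_referral_for(message: str) -> Optional[str]:
    """Stand-in for a real distress detector (see the earlier sketch)."""
    return CRISIS_REFERRAL if "end my life" in message.lower() else None

def generate_reply(message: str) -> str:
    """Placeholder for the companion model; a real system would call an LLM."""
    return f"(model reply to: {message!r})"

def filter_harmful_content(reply: str) -> str:
    """Placeholder content guardrail; a real system would apply moderation."""
    return reply

def handle_message(user: User, message: str) -> str:
    # 1. Age gate: unverified users never reach the model.
    if not user.age_verified:
        return "Please complete age verification (18+) to continue."

    # 2. Crisis check: surface crisis resources instead of a normal reply.
    referral = crisis_referral_for(message)
    if referral:
        return referral

    # 3. Generate a reply, then filter it for disallowed content.
    reply = filter_harmful_content(generate_reply(message))

    # 4. Periodic AI disclosure (the every-10-messages cadence is made up).
    if user.messages_since_disclosure >= 10:
        reply = f"{AI_DISCLOSURE}\n\n{reply}"

    return reply

if __name__ == "__main__":
    user = User(age_verified=True, messages_since_disclosure=12)
    print(handle_message(user, "Tell me about your day."))
```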
The Bigger Picture
California often leads the nation on tech regulation, and SB 243 may be the first domino in a broader push for AI companion safety standards. Colorado has passed its own AI act (though it has delayed implementation), and the EU AI Act's high-risk system requirements take effect in August 2026.
For users, this is ultimately good news: the AI companion industry is being held to higher standards, which means safer, more responsible products for everyone.