[Image: California State Capitol building with AI regulation theme]

California becomes the first state to mandate specific safety guardrails for AI companion chatbots.

California SB 243: First AI Companion Safety Law Takes Effect January 2026

As of January 1, 2026, California has become the first state in the nation to mandate specific safety guardrails for AI companion chatbots. Senate Bill 243, signed by Governor Gavin Newsom in October 2025, establishes a new regulatory baseline for the entire AI companion industry—with real teeth for enforcement.

Now in effect: California SB 243 - AI Companion Chatbot Safety

The law targets AI systems that provide "adaptive, human-like responses" and are "capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions."

What the Law Requires

SB 243 implements what legislators call "common-sense guardrails" for companion chatbots. The law imposes four core requirements:

  • 🔞 Minor protection: Operators must prevent chatbots from exposing minors to sexual content, with age verification and content filtering.
  • 🤖 AI disclosure: Minors must receive notifications and recurring reminders that they are talking to an AI, not a real person.
  • ⚠️ Suitability warning: Operators must disclose that companion chatbots may not be suitable for minor users.
  • 🆘 Crisis protocol: Operators must maintain a protocol that prevents the chatbot from producing suicidal-ideation or self-harm content and refers at-risk users to crisis services.

Why This Law Exists

The legislation is a direct response to mounting public health concerns and several high-profile incidents involving teen self-harm and suicide allegedly linked to interactions with conversational AI.

"These common-sense guardrails will protect minors while allowing the AI companion industry to continue innovating responsibly."

— Senator Steve Padilla, SB 243 Author

The most notable case involved 14-year-old Sewell Setzer of Florida, who died by suicide after forming an emotional relationship with a Character.AI chatbot. The AI was reportedly incapable of recognizing distress or connecting him to help—exactly the kind of failure SB 243 aims to prevent.

Legislative Timeline

September 10, 2025
Assembly Vote: 59-1
SB 243 passes the California Assembly with overwhelming bipartisan support.
September 11, 2025
Senate Vote: 33-3
The bill passes the Senate with bipartisan support, moving to the Governor's desk.
October 13, 2025
Governor Signs
Governor Gavin Newsom signs SB 243 into law, making California the first state with AI companion regulations.
January 1, 2026
Law Takes Effect
All requirements become enforceable. Companies must be in compliance.
July 1, 2027
First Annual Report Due
Companies must submit their first annual report to the Office of Suicide Prevention.

Private Right of Action

Unlike many tech regulations that rely solely on government enforcement, SB 243 includes a private right of action—meaning individuals can sue directly:

Civil Liability

A person who suffers injury as a result of a violation may bring a civil action to recover damages equal to the greater of actual damages or $1,000 per violation. This creates significant financial exposure for non-compliant companies.
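As a back-of-the-envelope illustration, here is one reading of that formula in code. Treating the $1,000 floor as aggregating per violation is our assumption; how courts will actually count "violations" is an open question, and none of this is legal advice.

```python
def statutory_exposure(actual_damages: float, violations: int) -> float:
    """Greater of actual damages or $1,000 per violation, per SB 243's
    private right of action. Illustrative sketch only; the per-violation
    aggregation shown here is one reading of the statute."""
    return max(actual_damages, 1_000 * violations)

# Hypothetical: $20,000 in actual damages across 50 alleged violations.
print(statutory_exposure(20_000, 50))  # 50000 -- the per-violation floor dominates
```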

Who's Affected

The law defines "companion chatbot" broadly as any AI system that:

  • Has a natural language interface
  • Provides adaptive, human-like responses to user inputs
  • Is capable of meeting a user's social needs
  • Exhibits anthropomorphic features
  • Can sustain a relationship across multiple interactions

This definition covers major players like Character.AI, Replika, Nomi, and Solm8. Notably, SB 243 exempts bots built for narrow commercial or technical tasks, such as customer-service agents, focusing squarely on AI that simulates human intimacy.
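For teams doing a first-pass scoping exercise, the criteria above can be read as a conjunctive checklist. The sketch below encodes that reading; the field names are our own shorthand rather than statutory language, and a real scoping call belongs with counsel.

```python
from dataclasses import dataclass, fields

@dataclass
class ProductProfile:
    # One flag per statutory criterion; names are our shorthand,
    # not language quoted from the bill.
    natural_language_interface: bool
    adaptive_humanlike_responses: bool
    meets_social_needs: bool
    anthropomorphic_features: bool
    sustains_relationship_across_sessions: bool

def likely_in_scope(product: ProductProfile) -> bool:
    """First-pass screen treating SB 243's definition as conjunctive:
    a product is flagged only if every criterion applies."""
    return all(getattr(product, f.name) for f in fields(product))

# A stateless customer-service bot screens out under this reading:
support_bot = ProductProfile(True, True, False, False, False)
print(likely_in_scope(support_bot))  # False
```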

At a glance: Senate vote 33-3 · Assembly vote 59-1 · Minimum $1,000 per violation

Crisis Prevention Requirements

Perhaps the most significant requirement is the crisis prevention protocol. Operators must maintain systems that:

  1. Detect when users express suicidal ideation
  2. Prevent the chatbot from producing suicide or self-harm content
  3. Provide notifications that refer at-risk users to crisis service providers
  4. Connect users to suicide hotlines or crisis text lines

This directly addresses the failures alleged in the Character.AI lawsuits, where chatbots reportedly failed to recognize distress signals and even engaged in troubling roleplay scenarios.
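To make those four steps concrete, here is a deliberately minimal sketch of the shape such a pipeline can take. The keyword patterns and referral copy are illustrative assumptions, not language from the bill; a production system would use trained classifiers and clinically reviewed escalation flows rather than a regex. (988 is the real US Suicide & Crisis Lifeline.)

```python
import re

# Toy keyword screen -- illustrative only. Real systems use trained
# classifiers, conversational context, and locale-appropriate resources.
CRISIS_PATTERNS = re.compile(
    r"\b(kill myself|end my life|suicide|self[- ]harm|want to die)\b",
    re.IGNORECASE,
)

REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can call or text the 988 Suicide & Crisis Lifeline at 988, 24/7."
)

def moderate(user_message: str, draft_reply: str) -> str:
    """Screen both sides of the exchange: interrupt with a referral when the
    user shows distress (steps 1, 3, 4) and never ship a model draft that
    contains self-harm content (step 2)."""
    if CRISIS_PATTERNS.search(user_message) or CRISIS_PATTERNS.search(draft_reply):
        return REFERRAL
    return draft_reply

print(moderate("lately I want to end my life", "..."))  # -> referral message
```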

What This Means for Solm8

Solm8 was designed with safety as a core principle from day one. Our platform already includes:

  • Age verification: 18+ only, with verification at signup
  • AI disclosure: Clear messaging that users are talking to AI
  • Crisis detection: Automatic detection of distress language with crisis resource referrals
  • Content guardrails: Sophisticated filtering that prevents harmful content while allowing adult conversations for verified users

SB 243 validates the approach we've taken. Responsible AI companionship is possible—and now it's the law in California.

The Bigger Picture

California often leads the nation on tech regulation, and SB 243 may be the first domino in a broader push for AI companion safety standards. Other jurisdictions are moving on their own timelines: Colorado has delayed implementation of its comprehensive AI act, and the EU AI Act's high-risk system requirements take effect in August 2026.

For users, this is ultimately good news: the AI companion industry is being held to higher standards, which means safer, more responsible products for everyone.