California has taken a big step towards regulating AI. SB 243 — a bill that would regulate AI companion chatbots to protect minors and vulnerable users — passed both the State Assembly and Senate with bipartisan support and is now headed to Governor Gavin Newsom’s desk.
Newsom has until Oct. 12 to either veto the bill or sign it into law. If he signs it, it would take effect on Jan. 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally liable if their chatbots fail to meet those standards.
The bill aims to prevent companion chatbots, which it defines as AI systems that deliver adaptive, human-like responses and are capable of meeting a user’s social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. It would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are talking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika; those requirements would take effect on July 1, 2027.
The California bill would also allow individuals who believe they have been harmed by violations to sue AI companies for injunctive relief, damages of up to $1,000 per violation, and attorney’s fees.
The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged conversations with OpenAI’s ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were permitted to engage in “romantic” and “sensual” chats with children.
In recent weeks, U.S. lawmakers and regulators have responded by stepping up scrutiny of AI platforms’ safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have each launched separate probes into Meta.
“I think the potential for harm is high, which means we need to move quickly,” state Sen. Steve Padilla, who introduced the bill alongside state Sen. Josh Becker, told TechCrunch. “We can put in place sensible safeguards to make sure that minors in particular know they’re not talking to a real person, that these platforms connect people to appropriate resources when people say things like they’re thinking about hurting themselves or are in distress, [and] to make sure there’s no inappropriate exposure to inappropriate material.”
Padilla also emphasized the importance of AI companies sharing data on how many times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than just being aware of it when someone is harmed or worse.”
SB 243 previously contained stronger requirements, but many were watered down through amendments. For example, the bill initially would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by companion AI companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the chance to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current version also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
“I think it strikes the right balance of getting at the harms without enforcing something that’s impossible for companies to comply with, either because it’s technically infeasible or just a lot of paperwork for nothing,” Becker told TechCrunch.
SB 243 heads toward law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies such as Meta, Google, and Amazon have also opposed SB 53. So far, only Anthropic has said it supports the bill.
“I reject the assumption that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me we can’t walk and chew gum. We can support innovation and development that we believe is healthy and has benefits (and there are clearly benefits to this technology) and at the same time provide reasonable protections for the most vulnerable.”
“We are closely monitoring the legislative and regulatory landscape, and we look forward to working with regulators and legislators as they begin to consider legislation for this new space,” a Character.AI spokesperson said.
A Meta spokesperson declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.