California’s state Senate has passed an AI safety bill that would require AI companies working on “frontier models” to disclose their safety protocols and establish whistleblower protections for employees. The legislation, SB 53, now awaits Governor Gavin Newsom’s signature after he vetoed a similar bill last year, underscoring the ongoing tension over AI regulation in the nation’s tech capital.
What you should know: The bill targets companies developing general-purpose AI models like those behind ChatGPT or Google Gemini, with different requirements based on company size.
• Companies generating more than $500 million in annual revenue face stricter oversight than smaller firms, though all frontier-model developers would face some level of scrutiny.
• The legislation includes provisions for creating CalCompute, a public cloud platform to expand compute access, potentially housed at the University of California.
• Governor Newsom previously vetoed a comparable bill from state Sen. Scott Wiener (D-San Francisco); Wiener revised the current measure following consultations with a California tech policy group.
Industry reactions split: The bill has divided Silicon Valley, with some major AI companies supporting the measure while others express strong opposition.
• Anthropic, maker of the Claude chatbot, endorsed the legislation through co-founder Jack Clark, who said it “creates a solid blueprint for AI governance that cannot be ignored” in the absence of federal standards.
• Andreessen Horowitz, one of the largest tech investors, criticized the bill through its head of AI policy and chief legal officer, arguing it requires “complex, costly administrative processes that don’t meaningfully improve safety, and may even make AI products worse.”
• The venture capital firm specifically warned that “little tech” startups would have “the fewest options to avoid these burdens.”
Broader opposition concerns: Industry groups have raised questions about the bill’s scope and effectiveness in addressing AI safety risks.
• The California Chamber of Commerce and TechNet, industry lobbying groups, criticized “the bill’s focus on ‘large developers’ to the exclusion of other developers of models with advanced capabilities that pose risks of catastrophic harm,” according to a letter seen by Politico.
• Critics argue the legislation may disproportionately burden smaller companies even as it misses significant safety risks from capable developers that fall below the large-developer threshold.
Why this matters: California’s decision could establish a template for AI regulation nationwide, particularly as federal lawmakers continue to grapple with comprehensive AI oversight frameworks. The state’s influence over tech policy, combined with its concentration of major AI companies, makes this legislation a potential catalyst for broader industry compliance standards regardless of federal action.