Anthropic's CEO has challenged a proposed Republican moratorium on state-level AI regulation, arguing that the rapid pace of AI development demands more nuanced policy. The intervention from one of the major AI companies underscores the growing tension between federal preemption and state-level regulation of artificial intelligence, and the broader debate over how to balance innovation with safeguards.
The big picture: Anthropic CEO Dario Amodei has publicly opposed a Republican proposal that would block states from regulating artificial intelligence for a decade, calling it “too blunt an instrument” given AI’s rapid advancement.
Key details: The proposal, included in President Trump's tax cut bill, would preempt AI laws recently passed by numerous states.
The alternative proposal: Instead of a blanket moratorium, Amodei advocates for federal transparency standards requiring AI developers to disclose their safety testing practices.
What they’re saying: “A 10-year moratorium is far too blunt an instrument. AI is advancing too head-spinningly fast,” Amodei wrote in his opinion piece.
Between the lines: Major AI companies including Anthropic, OpenAI, and Google DeepMind already voluntarily release safety information, but Amodei suggests legislative incentives may become necessary.