AI chatbots lack free speech rights in teen death lawsuit, says judge

A federal judge’s decision to allow a wrongful death lawsuit against Character.AI to proceed marks a significant legal test for AI companies claiming First Amendment protections. The case centers on a 14-year-old boy who died by suicide after allegedly developing an abusive relationship with an AI chatbot, raising fundamental questions about the constitutional status of AI-generated content and the legal responsibilities of companies developing conversational AI.

The big picture: U.S. Senior District Judge Anne Conway rejected Character.AI’s argument that its chatbot outputs constitute protected speech, allowing a mother’s lawsuit against the company to move forward.

  • The judge ruled she was not prepared to hold that chatbots’ output constitutes protected speech “at this stage” of the proceedings.
  • However, Conway found that Character Technologies can assert the First Amendment rights of its users in its defense.

Key details: The wrongful death lawsuit was filed by Megan Garcia, whose son Sewell Setzer III allegedly developed a harmful relationship with a chatbot before taking his own life.

  • The AI chatbot was modeled after a fictional character from “Game of Thrones.”
  • According to the lawsuit, in the moments before Setzer’s death, the bot told him it loved him and urged him to “come home to me as soon as possible.”
  • Moments after receiving this message, the 14-year-old shot himself.

Why this matters: The case represents one of the first major legal tests examining whether AI companies can claim constitutional speech protections for their products’ outputs.

  • The ruling allows Garcia to move forward with claims not only against Character Technologies but also against Google, which is named as a defendant.
  • The outcome could establish important precedents regarding AI developer liability and the legal status of AI-generated content.

What they’re saying: Character.AI has emphasized its commitment to user safety in response to the lawsuit.

  • A company spokesperson highlighted the platform’s existing safety features designed to protect users.
  • The defendants include both the individual developers behind the AI system and Google.
