Sora 2 deepfakes spark legal battles over copyright violations

OpenAI’s Sora 2 AI video generator has sparked legal battles and creative controversies within weeks of its release, with major studios and actors demanding protection from unauthorized deepfakes of their intellectual property. The tool’s ability to create realistic videos of anyone saying anything has exposed fundamental gaps in copyright law and raised urgent questions about creativity, authenticity, and liability in the age of generative AI.

What you should know: Sora 2 can generate convincing deepfakes with minimal effort, creating immediate legal and ethical concerns.

  • The author demonstrated this by making OpenAI CEO Sam Altman appear to endorse ZDNET with blue hair and a green T-shirt using a simple text prompt.
  • Within five days of release, the app surpassed a million downloads and topped Apple's App Store charts.
  • Users immediately began creating videos featuring copyrighted characters like SpongeBob and Ronald McDonald in inappropriate scenarios.

Legal pushback intensifies: Hollywood studios and talent agencies are demanding stronger protections after widespread intellectual property violations.

  • The Motion Picture Association, which represents major film studios, issued a firm statement on October 6, with CEO Charles Rivkin saying “videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service.”
  • Actor Bryan Cranston and SAG-AFTRA, the actors’ union, complained directly to OpenAI about unauthorized use of his likeness.
  • OpenAI initially contacted Hollywood rights holders in September offering an opt-out system, but this approach failed to satisfy industry concerns.

Copyright law faces an AI reckoning: Legal experts warn that existing frameworks are inadequate for generative video technology.

  • Sean O’Brien from Yale Privacy Lab outlined a “four-part doctrine” emerging in US law: only human-created works are copyrightable, AI outputs are “Public Domain by default,” humans using AI systems bear responsibility for infringement, and training on copyrighted data without permission is legally actionable.
  • Attorney Richard Santalesa noted that “copyright grants the owner various exclusive rights,” making fair use defenses limited to parody or news coverage.
  • The legal consensus places liability on users rather than AI companies, though this may change as litigation evolves.

OpenAI’s safety measures: The company has implemented guardrails but questions remain about their effectiveness.

  • New restrictions now block prompts involving “third-party likeness,” preventing the creation of videos featuring celebrities or copyrighted characters.
  • OpenAI’s safety framework includes consent-based likeness control, intellectual property safeguards, provenance watermarks, usage policies, and enforcement mechanisms.
  • Videos now include moving watermarks and C2PA metadata to help verify their artificial origin.
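The C2PA metadata mentioned above is embedded in a video or image file as a signed manifest. As a rough illustration of what the first step of provenance checking looks like, here is a minimal sketch that scans a file for the ASCII "c2pa" marker that appears in C2PA JUMBF boxes. The function name `may_contain_c2pa` and the chunked-scan approach are illustrative choices, and this heuristic only hints that a manifest may be present; real verification requires parsing the manifest and validating its cryptographic signatures with a proper C2PA toolchain.

```python
# Naive heuristic: C2PA provenance data is embedded in JUMBF boxes whose
# payload includes the ASCII label "c2pa". Finding that byte string suggests
# a manifest is present, but it is NOT a substitute for real verification,
# which requires signature validation with an actual C2PA implementation.
C2PA_MARKER = b"c2pa"

def may_contain_c2pa(path: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the file appears to carry a C2PA manifest marker."""
    overlap = len(C2PA_MARKER) - 1
    tail = b""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            # Search tail + chunk so a marker split across chunk
            # boundaries is still detected.
            if C2PA_MARKER in tail + chunk:
                return True
            tail = chunk[-overlap:]
    return False
```

Note that a downloader can strip this metadata trivially, which is why critics question whether watermarks and manifests alone can reliably flag AI-generated video.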

The creativity debate: AI video tools are democratizing content creation while threatening traditional creative industries.

  • Veteran illustrator Bert Monroy expressed concern about AI eliminating the need for human creative professionals: “Now, with AI, the client has to think of what they want and write a prompt and the computer will produce a variety of versions in minutes with NO cost.”
  • The technology allows users with minimal skills to create content rivaling work by trained professionals, raising questions about the value of artistic expertise.
  • Maly Ly, CEO of AI startup Wondr, suggests a more optimistic view: “AI video is forcing us to confront an old question with new stakes: Who owns the output when the inputs are everything we’ve ever made?”

Deepfakes and reality distortion: The technology’s ability to manipulate truth echoes historical examples of media manipulation.

  • The article draws parallels to Orson Welles' 1938 "War of the Worlds" broadcast, which led some listeners to believe a Martian invasion was underway.
  • Photo manipulation predates digital technology, with examples dating back to an 1864 fabricated photo of General Ulysses S. Grant and Stalin airbrushing enemies from official photos.
  • Robin Williams’ daughter Zelda has spoken out against AI recreations of deceased celebrities, calling them “horrible, TikTok slop puppeteering.”

What they’re saying: Industry experts emphasize the need for updated legal frameworks and responsible development.

  • “The genie is out of the bottle and won’t be stuffed back in. The issue is how to manage and control the genie,” said attorney Richard Santalesa.
  • OpenAI’s PR representative maintains that their “video generation tools are designed to support human creativity, not replace it, helping anyone explore ideas and express themselves in new ways.”
  • Maly Ly advocates for a new approach: “The next copyright system will look less like paperwork and more like living code — dynamic, fair, and built for collaboration.”