Bryan Cranston thanks OpenAI for cracking down on Sora 2 deepfakes

Bryan Cranston has publicly thanked OpenAI for strengthening its safeguards against unauthorized deepfakes after users of the company’s Sora 2 video platform generated his voice and likeness without consent. The incident highlights growing tensions between Hollywood and AI companies over digital replication rights, prompting a collaborative agreement between major talent agencies, unions, and OpenAI to establish clearer opt-in protocols for celebrity likenesses.
What happened: Cranston discovered his likeness was being generated on Sora 2 during the platform’s launch phase, prompting him to raise concerns through his union, SAG-AFTRA (the Screen Actors Guild–American Federation of Television and Radio Artists).
- In one Sora 2 video described by the LA Times, “a synthetic Michael Jackson takes a selfie video with an image of Breaking Bad star Bryan Cranston.”
- OpenAI claims it always intended to require opt-in consent for public figures, but multiple publications reported the company initially told talent agencies and studios they would need to opt out rather than opt in.
- The company has since strengthened its guardrails and acknowledged the unauthorized generations were “unintentional.”
Industry response: Major Hollywood agencies and unions have reached a collaborative agreement with OpenAI to protect actors’ digital rights.
- Creative Artists Agency (CAA) and United Talent Agency (UTA) co-signed a statement with OpenAI, SAG-AFTRA, and the Association of Talent Agents committing to protect actors’ “right to determine how and whether they can be simulated.”
- Sean Astin, SAG-AFTRA’s new president, warned that Cranston “is one of countless performers whose voice and likeness are in danger of massive misappropriation by replication technology.”
What they’re saying: Cranston emphasized the broader implications for all performers in his statement through SAG-AFTRA.
- “I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way,” Cranston said.
- “I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness.”
The bigger picture: The incident reflects ongoing challenges around AI platforms and celebrity rights, particularly regarding deceased public figures.
- Sora 2 allows users to generate “historical figures” defined as anyone both famous and dead, though OpenAI recently agreed to let representatives of “recently deceased” figures request blocking.
- The company paused depictions of Martin Luther King Jr. at his estate’s request while “strengthening guardrails for historical figures.”
- Zelda Williams has pleaded with people to “please stop” sending her AI videos of her late father Robin Williams, while George Carlin’s daughter Kelly called such videos “overwhelming, and depressing.”
Legislative context: The controversy occurs as Congress considers the NO FAKES Act, which would ban AI-generated replicas of individuals without consent.
- OpenAI publicly supports the legislation, with CEO Sam Altman stating the company is “deeply committed to protecting performers from the misappropriation of their voice and likeness.”
- Legal experts speculate that generative AI platforms are using dead historical figures to test legal boundaries around digital replication rights.