
OpenAI CEO Sam Altman is advocating for an “AI privilege” that would give ChatGPT conversations the same legal protection as attorney-client or doctor-patient communications, after The New York Times asked a court to order the company to retain all user chat data indefinitely as part of its ongoing copyright lawsuit. This legal battle could fundamentally reshape user privacy expectations for AI interactions, potentially requiring OpenAI to store indefinitely conversations that users believe are permanently deleted within 30 days.

What you should know: The New York Times lawsuit against OpenAI and Microsoft centers on allegations that ChatGPT was trained using millions of copyrighted articles without permission.

  • The Times and other plaintiffs are demanding that OpenAI retain all consumer ChatGPT and API customer data indefinitely to preserve potential evidence.
  • Currently, deleted ChatGPT conversations are removed immediately from user accounts and permanently deleted from OpenAI’s systems within 30 days.
  • The retention order would affect users of free, Plus, Pro, and Team accounts, but not Enterprise or Education account holders.

What they’re saying: Altman took to X (formerly Twitter) to express his frustration with the court’s data retention demands.

  • “We have been thinking recently about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation. IMO talking to an AI should be like talking to a lawyer or a doctor,” Altman tweeted.
  • “Recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision.”
  • Brad Lightcap, OpenAI’s chief operating officer, argued the demand “fundamentally conflicts with the privacy commitments we have made to our users” and “abandons long-standing privacy norms.”

Why this matters: As AI chatbots increasingly serve therapeutic and counseling roles, the privacy of these conversations becomes critical for user trust and mental health support.

  • Many users share intimate personal details with AI assistants, treating them as confidential sounding boards similar to traditional privileged relationships.
  • The precedent set by this case could determine whether AI conversations receive legal protection or remain subject to discovery in litigation.

The legal tension: OpenAI describes the lawsuit as “baseless” while The Times argues it needs access to user data to prove its copyright infringement claims.

  • OpenAI has appealed the retention order to the district court judge and promises to “fight any demand that compromises our users’ privacy.”
  • The company maintains that the order “sets a bad precedent” that could undermine user confidence in AI privacy protections.
  • API customers using Zero Data Retention endpoints under OpenAI’s ZDR amendment are not impacted by the current order.
Source: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
