Court Rules AI Conversations Are Not Privileged: What United States v. Heppner Means for You


7 minute read | March 25, 2026

On February 13, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York issued an opinion addressing whether non-attorney communications with a generative AI platform are protected by the attorney-client privilege or work product doctrine. Judge Rakoff ruled that they are not.

This ruling has two immediate implications for anyone who uses AI tools such as ChatGPT, Claude, Gemini, or similar platforms in relation to legal or regulatory issues.

  • First, Judge Rakoff held that since AI tools do not hold law licenses, communications with them are by definition not lawyer-client communications, and thus are not subject to attorney-client privilege or attorney work product protections.
  • Second, Judge Rakoff held that even otherwise privileged communications will lose their privileged status if shared with public AI tools.

Judge Rakoff’s analysis is straightforward and grounded in well-established doctrine, making it likely that other courts will treat AI-related privilege disputes similarly going forward. The opinion serves as a critical warning about the risks of using AI tools to analyze legal issues, particularly for non-lawyers.

The practical reality is stark: (1) anything typed into a consumer AI platform should be treated as if it were posted publicly, meaning any confidential communication entered into a public AI chatbot can waive privilege; and (2) communications by non-lawyers asking GenAI tools for what would otherwise appear to be legal or regulatory advice are not protected communications and are likely discoverable – even if the tool in question is a private LLM sitting behind a corporate firewall.

The Case

In United States v. Heppner, the defendant—a senior executive indicted for securities fraud—used Claude, Anthropic’s publicly available AI assistant, to analyze his legal situation, outline defense strategies and develop legal arguments. He did this on his own initiative, without direction from his attorneys.

During a search of Heppner’s home, the FBI seized approximately 31 documents memorializing these AI conversations. Heppner moved to exclude the documents, arguing they were protected by the attorney-client privilege and work product doctrine. Judge Rakoff rejected both arguments.

Key Holdings

No Attorney-Client Privilege

The court found that communications with an AI chatbot are not protected by the attorney-client privilege for several independent reasons:

  • AI is not an attorney. Claude cannot form an attorney-client relationship with a user. Communications between two non-attorneys about legal issues are simply not privileged, regardless of how sophisticated or accurate the exchange may be.
  • No reasonable expectation of confidentiality. Anthropic’s privacy policy—to which every user consents—provides that the company collects user inputs and AI outputs, uses that data for training purposes and reserves the right to disclose it to third parties, including governmental authorities.
  • Inputting privileged information waives the privilege. Feeding advice received from counsel into a public AI tool is the same as disclosure to a third party—waiving the privilege over the underlying communication itself.
  • Privilege cannot be created after the fact. Assuming that the AI suggestions were intended to be shared with counsel, non-privileged communications do not become privileged communications upon being shared with counsel.

No Work Product Protection

The court also rejected the defendant’s work product argument:

  • No counsel direction. Work product protection applies only to materials prepared by or at the direction of counsel. Materials a party prepares on their own initiative—even if clearly made in anticipation of litigation—do not qualify. Heppner generated the AI documents on his own initiative, rather than at the direction of counsel. Accordingly, the materials at issue were not work product.
  • AI is not an attorney. Claude is not an attorney, so the AI documents did not reflect the strategy and mental impressions of counsel.
  • Affecting strategy is not the same as reflecting strategy. That the documents may have affected defense counsel’s strategy was insufficient; the documents had to reflect legal counsel’s strategy at the time they were created.

Key Takeaways

No one should input work product or privileged information into public AI tools. Communications with an attorney, litigation strategy, case facts and sensitive documents should never be entered into public AI platforms like ChatGPT, Claude, Gemini, or similar tools. Assume anything you type could be discovered and used against you. This applies to attorneys and non-attorneys alike. Attorneys can use private AI tools that protect confidentiality without breaking privilege, just as they can use any other confidential tool while maintaining privilege; publication of otherwise privileged information to the public domain, however, will destroy attorney-client privilege and work product protection.

Unless working at the express direction of counsel, non-lawyers should not use even private AI tools for seeking legal advice or undertaking legal or regulatory analyses. Neither C-Suite level nor junior employees should query AI for regulatory or legal advice because neither their queries nor the responses are likely to be protected by the attorney-client privilege or work product protection – instead, they are likely to be discoverable. It is not hard to imagine how damaging queries directed to an AI tool from employees about potential legal and regulatory issues could be if those queries became discoverable. Yet, as Judge Rakoff stated plainly, “Because Claude is not an attorney, that alone disposes of Heppner’s claim of privilege.” Indeed, nearly all AI tools expressly disclaim that they offer legal advice.

Understand the difference between consumer and enterprise AI. Part of the court’s analysis turned on Anthropic’s consumer privacy policy. Enterprise AI platforms with negotiated confidentiality terms may present a different picture, though this has not been tested in court. Companies should consult counsel before assuming privilege or work product protection applies to enterprise AI-generated documents.

Treat AI-generated analysis as discoverable. Documents memorializing your AI conversations—whether saved chats, exported files, or notes—may be seized or subpoenaed and used against you in legal proceedings.

Additional Considerations for Companies and In-House Counsel

Implement or update AI usage policies. This opinion underscores the urgency of having clear policies governing employee use of AI tools. Policies should specifically address the risks identified in Heppner—particularly the risk that employees will input confidential, privileged or litigation-sensitive information into public platforms, or use internal platforms to seek legal or regulatory advice without the benefit of privilege and work product protections.

Protect internal investigations. Employees who have been interviewed as part of an internal investigation, received Upjohn warnings, or been exposed to investigation findings may compromise the privilege if they process that information through a public AI tool. Consider adding AI-specific instructions to Upjohn warnings and investigation protocols.

Document directives from counsel. When employees assist with litigation preparation, there should be a clear, documented directive from counsel so that the resulting materials can qualify for work product protection.

Advise senior leadership. Heppner was a senior executive who used AI to prepare for his own legal defense. Executives may be tempted to use AI to analyze regulatory exposure, prepare for board discussions, or develop strategic responses to government inquiries. They should be told clearly that doing so will not be protected, regardless of whether the platform is public or private.

Evaluate enterprise AI tools carefully. When evaluating enterprise AI deployments, negotiate contractual provisions that affirmatively guarantee confidentiality and prohibit use of inputs for training or disclosure to third parties.

The Bottom Line

United States v. Heppner is a first-impression opinion, but its reasoning is sound and likely to be followed. Generative AI tools are powerful, but public versions are not confidential channels and even private versions do not create an attorney-client relationship or offer work product protection to non-lawyers. Anyone involved in legal matters—whether as an individual, a company, or in-house counsel—should treat these platforms accordingly and take steps now to protect privileged and confidential information.