March 25, 2026
On February 13, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York issued an opinion addressing whether non-attorney communications with a generative AI platform are protected by the attorney-client privilege or work product doctrine. Judge Rakoff ruled that they are not.
This ruling has two immediate implications for anyone who uses AI tools such as ChatGPT, Claude, Gemini, or similar platforms in relation to legal or regulatory issues.
Judge Rakoff’s analysis is straightforward and grounded in well-established doctrine, making it likely that other courts will treat AI-related privilege disputes similarly going forward. The opinion also serves as a critical warning about the risks of using AI tools to analyze legal issues, particularly for non-lawyers.
The practical reality is stark: (1) anything typed into a consumer AI platform should be treated as if it were posted publicly, meaning any confidential communication entered into a public AI chatbot can waive privilege; and (2) communications by non-lawyers asking GenAI tools for what would otherwise appear to be legal or regulatory advice are not protected and are likely discoverable, even if the tool in question is a private LLM sitting behind a corporate firewall.
In United States v. Heppner, the defendant—a senior executive indicted for securities fraud—used Claude, Anthropic’s publicly available AI assistant, to analyze his legal situation, outline defense strategies, and develop legal arguments. He did this on his own initiative, without direction from his attorneys.
During a search of Heppner’s home, the FBI seized approximately 31 documents memorializing these AI conversations. Heppner moved to exclude the documents, arguing they were protected by the attorney-client privilege and work product doctrine. Judge Rakoff rejected both arguments.
The court found that communications with an AI chatbot are not protected by the attorney-client privilege for several independent reasons, chief among them that Claude is not an attorney. The court also rejected the defendant’s work product argument.
No one should input work product or privileged information into public AI tools. Communications with an attorney, litigation strategy, case facts, and sensitive documents should never be entered into public AI platforms like ChatGPT, Claude, Gemini, or similar tools. Assume anything you type could be discovered and used against you. This applies to attorneys and non-attorneys alike. While attorneys can use private AI tools that protect confidentiality without breaking privilege, just as they can use any other type of tool while maintaining privilege, publishing otherwise privileged information to the public domain will destroy both attorney-client privilege and work product protection.
Unless working at the express direction of counsel, non-lawyers should not use even private AI tools to seek legal advice or undertake legal or regulatory analyses. Neither C-suite executives nor junior employees should query AI for regulatory or legal advice, because neither their queries nor the responses are likely to be protected by the attorney-client privilege or the work product doctrine; instead, they are likely to be discoverable. It is not hard to imagine how damaging employees’ queries to an AI tool about potential legal and regulatory issues could be if those queries became discoverable. As Judge Rakoff put it, “Because Claude is not an attorney, that alone disposes of Heppner’s claim of privilege.” Indeed, nearly all AI tools expressly disclaim that they offer legal advice.
Understand the difference between consumer and enterprise AI. Part of the court’s analysis turned on Anthropic’s consumer privacy policy. Enterprise AI platforms with negotiated confidentiality terms may present a different picture, though this has not been tested in court. Companies should consult counsel before assuming privilege or work product protection applies to enterprise AI-generated documents.
Treat AI-generated analysis as discoverable. Documents memorializing your AI conversations—whether saved chats, exported files, or notes—may be seized or subpoenaed and used against you in legal proceedings.
Implement or update AI usage policies. This opinion underscores the urgency of having clear policies governing employee use of AI tools. Policies should specifically address the risks identified in Heppner—particularly the risk that employees will input confidential, privileged or litigation-sensitive information into public platforms, or use internal platforms to seek legal or regulatory advice without the benefit of privilege and work product protections.
Protect internal investigations. Employees who have been interviewed as part of an internal investigation, received Upjohn warnings, or been exposed to investigation findings may compromise the privilege if they process that information through a public AI tool. Consider adding AI-specific instructions to Upjohn warnings and investigation protocols.
Document directives from counsel. When employees assist with litigation preparation, there should be a clear, documented directive from counsel.
Advise senior leadership. Heppner was a senior executive who used AI to prepare for his own legal defense. Executives may be tempted to use AI to analyze regulatory exposure, prepare for board discussions, or develop strategic responses to government inquiries. They should be told clearly that doing so will not be protected, regardless of whether the platform is public or private.
Evaluate enterprise AI tools carefully. When evaluating enterprise AI deployments, negotiate contractual provisions that affirmatively guarantee confidentiality and prohibit use of inputs for training or disclosure to third parties.
United States v. Heppner is a first-impression opinion, but its reasoning is sound and likely to be followed. Generative AI tools are powerful, but public versions are not confidential channels and even private versions do not create an attorney-client relationship or offer work product protection to non-lawyers. Anyone involved in legal matters—whether as an individual, a company, or in-house counsel—should treat these platforms accordingly and take steps now to protect privileged and confidential information.