Earlier this month, Judge Rakoff of the Southern District of New York issued a first-of-its-kind ruling in United States v. Heppner. The case involved a criminal defendant, Heppner, who used a public generative AI platform (Claude) to “prepare reports that outlined his defense strategy (what he might argue with respect to the facts and the law that [his attorneys] anticipated that the government might be charging).” Although the defendant prepared the documents on his own, he later shared them with his attorneys. Heppner argued that these AI-generated documents should be protected by attorney-client privilege and the work product doctrine.
The court disagreed, and its reasoning has significant implications for anyone in the life sciences sector, especially as AI tools become more widely used in regulatory and compliance work.
Key Takeaways from the Heppner Ruling
- Privilege is Narrow: The judge reaffirmed that privilege only protects direct, confidential communications between a client and their attorney, or work product prepared by or at the direction of counsel. Using a public AI tool—especially on your own initiative, without direction from counsel—breaks that chain.
- No Reasonable Expectation of Confidentiality: Using a third-party AI platform means you have no reasonable expectation of confidentiality. The platform’s privacy policy may allow it to store, use, or even disclose your inputs—including confidential business information (CBI) or trade secrets—to third parties or government authorities.
- No Retroactive Privilege: Even if you later share these AI outputs with your attorney, you can’t “retroactively” make them privileged. Once CBI or sensitive regulatory information is put into a public AI tool, that information may be considered disclosed.
Why This Matters for Life Sciences
Life sciences companies routinely handle sensitive data such as regulatory submissions, audit responses, clinical trial results, manufacturing records, and more. With the increasing integration of AI tools into daily workflows, there’s a temptation to use them for summarizing, analyzing, or drafting documents containing CBI or privileged information.
But as the Heppner case makes clear, this can be risky:
- Regulatory and Litigation Risks: Disclosure of CBI or privileged information via a public AI tool may result in loss of protection, not only in litigation but also during regulatory audits.
- Trade Secret Protection: Public disclosure can destroy trade secret status.
- Internal Risk: Employees in operations, safety, manufacturing, and research may not be aware of these risks, making training and policy updates essential.
Practical Steps to Protect Your Company’s Trade Secrets
- Avoid Public AI Tools for Sensitive Content: Don’t use public or commercial AI tools to process, summarize, or explain information containing CBI or trade secrets.
- Train All Teams: Ensure that not just legal and regulatory teams, but also those in operations, safety, manufacturing, and research, understand the risks of AI tool use and the importance of proper CBI tagging and handling.
- Update Internal Policies: Prohibit the use of unapproved AI tools for sensitive information, and ensure any AI use is within secure, company-controlled environments.
- Incident Response: Update your incident response plan to address scenarios where CBI or privileged information may be inadvertently entered into a public AI tool. This should include steps for internal reporting, containment, and notification.
- Document Your AI Governance: Be able to demonstrate to regulators or auditors how your organization protects CBI in an AI-enabled workflow.
- Use Only Approved Enterprise AI Tools for Sensitive Information: Avoid public or consumer-grade AI platforms for any content involving CBI, trade secrets, or privileged communications. Instead, deploy enterprise-grade AI solutions that are vetted and controlled by your organization’s IT and compliance teams. Ensure these tools are configured to prevent external data sharing, and that their use is governed by robust internal policies. This helps keep your sensitive data protected and within your control. Consider providing employees a standing warning such as:
“Reminder: Do not enter confidential business information, trade secrets, or privileged communications into any public AI tool (such as ChatGPT, Claude, or Gemini). Use only company-approved enterprise AI platforms for work-related tasks involving sensitive data.”
- Vendor Due Diligence: If your team is using a third-party enterprise AI vendor, conduct due diligence on that vendor: review its privacy policies, data handling practices, and contractual commitments. Ensure your data will not be used to train external models and that you retain control over outputs and data deletion.
Checklist: Updating Your Internal AI Policy
- Which AI tools are approved for use, and who decides what is added to the list?
- What types of data are employees allowed (or not allowed) to input into AI tools?
- Who is responsible for reviewing and approving AI-generated outputs?
- What kind of training do staff need for using AI and protecting data?
- How is compliance monitored and enforced?
- What is the escalation process for AI-related incidents or errors?
Tip: Conduct a data inventory to identify where confidential business information is stored and which teams or workflows might be using (or tempted to use) AI tools. This will help target training and policy enforcement where it’s most needed.
Looking Ahead
The FDA and other agencies are increasingly using AI in their review processes.[1] As these tools become more deeply embedded in regulatory workflows, the risks and the need for robust internal AI policies will only grow.
Bottom line: AI tools are transforming the way we work, but they don’t change the fundamentals of privilege and confidentiality. If you put CBI or privileged information into a public AI tool, you may lose protection—not just in litigation, but also in regulatory audits and agency interactions.
[1] See FDA Press Release, FDA Launches Agency-Wide AI Tool to Optimize Performance for the American People (June 2, 2025), https://www.fda.gov/news-events/press-announcements/fda-launches-agency-wide-ai-tool-optimize-performance-american-people, and FDA News Release, FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (December 1, 2025), https://www.fda.gov/news-events/press-announcements/fda-expands-artificial-intelligence-capabilities-agentic-ai-deployment. See also Kimberly Chew, Esq., & Michael Yang, Esq., “FDA’s Elsa AI Switches From Claude To Gemini: What Sponsors Need To Know,” Clinical Leader (Mar. 12, 2026), https://www.clinicalleader.com/doc/fda-s-elsa-ai-switches-from-claude-to-gemini-what-sponsors-need-to-know-0001.