When AI Prompts Become Evidence: The Hidden Legal Risk in Your Workflow

Most employers know not to paste sensitive data into public AI tools. But a growing legal issue is harder to spot: doing so may not only expose the data but also permanently waive attorney-client privilege, turning protected communications into discoverable evidence.
This is the focus of a new analysis from law firm Fieldfisher, the third installment in its series on AI in employment disputes.
The Privilege Problem
Legal privilege protects communications between lawyers and clients from being used in court. But privilege is lost when confidential information is shared with a third party, a category that may well include a public AI tool.
When employees or legal teams input privileged content into tools like ChatGPT, courts in both the US and UK are increasingly treating that as disclosure to a third party:
- In US v. Heppner (February 2026), a US judge ruled that documents generated by publicly available AI tools are not privileged and cannot be protected from disclosure.
- The UK’s Upper Tribunal (November 2025) found that uploading confidential documents to general-purpose AI tools “amounts to placing information in the public domain,” breaching client confidentiality.
The Strategic Angle
The risk cuts both ways. Employers who safeguard their own prompts can gain a strategic advantage by identifying when opposing parties have not, creating openings for targeted disclosure requests and for surfacing potential GDPR violations in AI-generated content.
Fieldfisher advises organizations to audit whether AI tools keep prompts and outputs within their corporate environment, and to review AI use policies before the next round of employment litigation.
Read the full article on Fieldfisher's website.
