Signal
Original article date: Jan 16, 2025

Business AI Security Alert: 8.5% of Employee Prompts Leak Sensitive Data

January 16, 2025
5 min read

A concerning new security study reveals that employee use of generative AI tools in the workplace is creating significant data privacy risks. According to research by Harmonic Security, 8.5% of business AI prompts inadvertently expose sensitive company and customer information.

The Scope of the Problem

The study, conducted in Q4 2024, analyzed employee interactions with major AI platforms including Microsoft Copilot, ChatGPT, Google Gemini, Claude, and Perplexity. While most usage involves routine tasks such as text summarization and content editing, a concerning minority of prompts exposes critical business data.

The breakdown of sensitive data exposure is alarming:

  • 45.8% of risky prompts contained customer data, including billing information and authentication credentials
  • 26.8% exposed employee information such as payroll data and personally identifiable information
  • 14.9% included legal and financial data, covering sales pipelines and M&A activities
  • 6.9% revealed security information like network configurations and penetration test results
  • 5.6% disclosed proprietary source code and access keys
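
To make these categories concrete, here is a minimal Python sketch of the kind of pattern matching a prompt-monitoring tool might use to flag such data. The regexes and the classify_prompt helper are illustrative assumptions for this article, not Harmonic Security's actual detection method:

    import re

    # Hypothetical regex patterns, one per risk category from the study.
    # Illustrative only; these are not Harmonic Security's rules.
    CATEGORY_PATTERNS = {
        "customer_data": re.compile(r"\b(?:\d[ -]?){13,16}\b|password\s*[:=]", re.I),
        "employee_info": re.compile(r"\b\d{3}-\d{2}-\d{4}\b|payroll|salary", re.I),
        "legal_financial": re.compile(r"m&a|term sheet|sales pipeline", re.I),
        "security_info": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b|pentest|firewall", re.I),
        "code_and_keys": re.compile(r"AKIA[0-9A-Z]{16}|BEGIN (?:RSA )?PRIVATE KEY"),
    }

    def classify_prompt(prompt: str) -> list[str]:
        """Return the risk categories a prompt appears to touch (empty if none)."""
        return [name for name, rx in CATEGORY_PATTERNS.items() if rx.search(prompt)]

    # Example: a prompt pasting a card number and a credential gets flagged.
    print(classify_prompt("Card 4111 1111 1111 1111 declined, password: hunter2"))
    # -> ['customer_data']

Production data-loss-prevention systems rely on much richer signals (ML classifiers, checksum validation of card numbers, surrounding context), so a handful of regexes like these would both miss real leaks and misfire on benign text.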

The Free Tier Risk Multiplier

A particularly troubling finding involves widespread use of free AI service tiers, whose providers typically train models on customer data. Usage rates for free versions were substantial across platforms: 63.8% for ChatGPT, 58.6% for Gemini, 75% for Claude, and 50.5% for Perplexity.

This creates a double exposure risk—not only might sensitive data be logged, but it could also be incorporated into model training, potentially surfacing in other users' interactions.

Strategic Response Recommendations

"Most generative AI use is mundane, but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk," explained Alastair Paterson, CEO of Harmonic Security. Organizations that successfully manage this risk implement real-time monitoring systems and restrict free-tier usage.

Essential protective measures include:

  • Real-time monitoring systems for AI tool data input across all SaaS platforms (a minimal sketch follows this list)
  • Paid subscription requirements that include data protection guarantees
  • Prompt-level visibility to understand exactly what information employees share
  • Employee education on AI security best practices
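
As referenced in the first item above, here is a minimal sketch of what a real-time, prompt-level gate could look like. It reuses the hypothetical classify_prompt helper from the earlier sketch (paste both blocks into one file to run), and send_to_ai_service is a placeholder rather than any real vendor SDK:

    import datetime

    def send_to_ai_service(prompt: str) -> str:
        # Placeholder for whatever SDK or HTTP call a tool actually makes.
        return f"(model response to {len(prompt)} characters)"

    def guarded_completion(prompt: str, user: str, audit_log: list) -> str | None:
        """Screen a prompt before it leaves the network and log the decision."""
        hits = classify_prompt(prompt)  # helper from the earlier sketch
        audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "categories": hits,
            "action": "blocked" if hits else "allowed",
        })
        if hits:
            return None  # show the user a policy message instead of sending
        return send_to_ai_service(prompt)

    # Example: the payroll prompt is blocked and logged; the other is allowed.
    log: list = []
    guarded_completion("Summarize our Q3 payroll spreadsheet: ...", "alice", log)
    guarded_completion("Summarize this press release: ...", "bob", log)
    print([entry["action"] for entry in log])  # -> ['blocked', 'allowed']

Logging every allow/block decision is what provides the prompt-level visibility the study recommends, while the block path keeps flagged data from ever leaving the network.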

The study underscores that as AI integration accelerates, organizations need robust governance frameworks to harness AI benefits while protecting sensitive information assets.

🔗 Read the full study details on SiliconANGLE