January 16, 2025

Nearly 1 in 10 Business AI Prompts Expose Sensitive Company Data

A concerning new study reveals that business users are unknowingly putting sensitive information at risk when using popular AI tools. The research by data protection startup Harmonic Security found that 8.5% of AI prompts in workplace settings potentially disclose confidential data.

The study analyzed business user behavior across major AI platforms including Microsoft Copilot, ChatGPT, Google Gemini, Claude, and Perplexity during Q4 2024. While most employees use AI for routine tasks like summarizing text and editing content, a significant portion inadvertently share sensitive information.

What Data Is Being Exposed?

The breakdown of concerning prompts reveals serious security gaps:

  • 45.8% disclosed customer data (billing info, authentication credentials)
  • 26.8% contained employee information (payroll data, PII, employment records)
  • 14.9% included legal and financial data (sales pipelines, investment portfolios, M&A activity)
  • 6.9% exposed security information (penetration test results, network configurations)
  • 5.6% contained sensitive code (access keys, proprietary source code)

The Free Tier Problem

The study highlighted a major concern: widespread use of free AI service tiers that lack enterprise security features. These platforms often explicitly state they train on user data, meaning sensitive information could be used to improve their models.

Usage of free tiers varied significantly:

  • 75% of Claude users
  • 63.8% of ChatGPT users
  • 58.6% of Gemini users
  • 50.5% of Perplexity users

Expert Recommendations

"Most generative AI use is mundane, but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk," explained Alastair Paterson, CEO of Harmonic Security. The firm recommends that organizations deploy real-time monitoring systems and ensure employees use paid plans that don't train on input data.

Companies should implement prompt-level visibility to understand exactly what information is being shared and establish clear guidelines for AI tool usage in business settings.
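As a rough illustration of what prompt-level visibility might look like, the sketch below scans outbound prompts for a few common sensitive-data patterns before they reach an AI service. The pattern names, regexes, and function names are purely hypothetical examples, not Harmonic Security's product or any specific vendor's API; a production DLP tool would use far more sophisticated detection.

```python
import re

# Illustrative patterns only -- real detection would combine many more
# rules with context-aware classifiers and entropy checks for secrets.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Gate an outbound AI request on the scan result."""
    return not scan_prompt(prompt)
```

A gateway sitting between employees and AI platforms could call `is_safe_to_send` on every request, blocking or redacting flagged prompts and logging the category counts that make up reports like the one above.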

🔗 Read the full article on SiliconANGLE