Signal
February 4, 2025

Unauthorized AI Tools Put Corporate Data at Risk: What IT Teams Need to Know

Employees across corporate America are using AI tools like ChatGPT and DeepSeek without approval, creating a massive security blind spot that keeps IT leaders up at night.

The surging popularity of China-based DeepSeek has amplified these concerns. Both the Pentagon and the U.S. Navy have banned the platform, citing security and ethical concerns over data processed on Chinese servers.

The Scale of Shadow AI

Research from security firm Prompt Security reveals the scope of the problem:

  • 67 generative AI tools run on the typical company network
  • 90% lack proper licensing or IT approval
  • 65% of ChatGPT users rely on the free tier, where corporate data can be used for model training

This "shadow AI" mirrors past shadow IT problems, where employees bypass official channels to access unapproved technology.

Key Risks Companies Face

Data Exposure: Sensitive corporate information could leak into AI training datasets when employees input confidential data into unauthorized tools.

Access Control Violations: AI models might expose restricted information to employees who shouldn't have access.

Foreign Jurisdiction: Tools like DeepSeek process data on Chinese servers, subjecting it to Chinese laws and surveillance.

The New Security Approach

Rather than blocking AI tools entirely—which hasn't worked—companies are investing in governance and guardrails. Cisco recently launched AI-driven security tools specifically targeting shadow AI, signaling this as a major 2025 enterprise trend.

"AI applications are really just cloud applications," notes Shannon Murphy from Trend Micro. "Tools already exist to monitor usage and assess risk."

However, experts warn that mobile AI apps will create new challenges this year as employees increasingly use AI tools on personal devices.

Smart companies are focusing on controlling AI inputs and preventing data leaks rather than banning access altogether. The key is visibility and governance, not prohibition.
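As one concrete illustration of controlling inputs, the sketch below redacts obviously sensitive patterns from a prompt before it leaves the network. The regex patterns and placeholder tokens are illustrative assumptions; production DLP engines use far richer detection than a handful of regexes.

```python
# Minimal sketch: redact obviously sensitive patterns from a prompt
# before it is forwarded to an external AI service. Patterns and
# placeholder tokens are illustrative, not an exhaustive DLP policy.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),          # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),        # card-like numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),  # email addresses
    (re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact_prompt(text: str) -> tuple[str, int]:
    """Return the redacted prompt and how many substitutions were made."""
    total = 0
    for pattern, replacement in REDACTIONS:
        text, n = pattern.subn(replacement, text)
        total += n
    return text, total

if __name__ == "__main__":
    prompt = "Summarize: customer SSN 123-45-6789, contact jane@corp.com, api_key=sk-abc123"
    cleaned, n = redact_prompt(prompt)
    print(f"{n} redactions -> {cleaned}")
```

A gateway sitting between employees and approved AI tools could apply a step like this transparently, preserving access while keeping the most obviously sensitive data out of third-party training sets.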

🔗 Read the full article on Axios