An icon of an eye to tell to indicate you can view the content by clicking
Signal
September 29, 2025

New AI Security Threats: How Cybercriminals Are Turning Your AI Tools Against You

New AI Security Threats: How Cybercriminals Are Turning Your AI Tools Against You

Cybercriminals have evolved beyond simply using AI to create better phishing emails—they're now weaponizing AI systems themselves. A Columbia University and University of Chicago study reveals that AI now powers over half of spam emails, creating more polished and convincing attacks.

How Attackers Are Poisoning AI Systems

Modern cyberattacks exploit AI tools in three critical ways:

Email Assistant Exploitation: Attackers hide malicious prompts in seemingly innocent emails. When AI assistants like Microsoft Copilot scan these messages, they may execute hidden commands that leak sensitive information or alter critical records.

Security Tool Manipulation: Criminals target AI-powered defenses by poisoning auto-reply systems, smart forwarding features, and automated ticket creation tools to gain unauthorized access or deploy malware.

RAG System Corruption: By contaminating data that feeds retrieval-augmented generation systems, attackers influence AI responses, leading to poor decisions based on poisoned context.

Key Defense Strategies

  • Multi-layered Protection: Traditional controls like SPF and DKIM aren't enough—organizations need filters that understand how LLMs generate content
  • Zero-Trust Verification: Implement zero-trust principles that require verification for every AI-generated instruction
  • Employee Training: Staff awareness remains crucial for recognizing and reporting suspicious AI-crafted messages

The Growing Threat Landscape

The emergence of agentic AI-powered systems presents new risks. These autonomous systems can reason and act independently, making them attractive targets for "Confused Deputy" attacks where high-privilege AI agents unwittingly serve low-privilege attackers.

Organizations must build defenses on two fronts: detecting when adversaries use AI against them and hardening their own AI systems against manipulation.

🔗 Read the full article on Help Net Security