Signal
Original article date: Apr 24, 2026

Treat AI Like an Identity: The Cybersecurity Framework That Closes the Agentic AI Gap


At RSA Conference 2026 in San Francisco, one theme dominated every keynote and booth conversation: Agentic AI as actor, not tool. And the cybersecurity industry is still figuring out how to defend against it.

In a timely piece for SecurityWeek, Dr. Torsten George, CMO at ID Dataweb, argues that the answer isn't another security product. It's a fundamental reframe: treat AI the same way you treat an identity.

Why the "One More Tool" Problem Is Dangerous

Each wave of cybersecurity evolution generates a new layer of point solutions. Agentic AI is no different: AI security posture tools, runtime protection platforms, anomaly detection — all valuable individually, but collectively creating the same tool sprawl that has historically benefited attackers. Early rogue AI agents are already probing environments, exploiting misconfigurations, and mimicking legitimate users.

The Identity Framework for AI Security

AI agents authenticate, access systems, perform actions, and can be compromised; in short, they behave exactly like identities. That framing unlocks existing enterprise security infrastructure as the control plane:

  • Behavioral visibility to detect anomalies like unusual access or privilege escalation
  • Risk-based controls to adjust access or isolate suspicious agents in real time
  • Lifecycle management to prevent orphaned or unmanaged agents from accumulating permissions
  • Least-privilege enforcement applied consistently across human and machine identities
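To make the framing concrete, here is a minimal Python sketch of what treating an agent as a managed identity could look like. All names here (`AgentIdentity`, `is_allowed`, `is_orphaned`, the scope strings) are hypothetical illustrations, not any vendor's API; the point is that the controls above map onto familiar identity primitives.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Hypothetical model of an AI agent governed as an identity."""
    agent_id: str
    owner: str                                         # accountable human or team
    scopes: set[str] = field(default_factory=set)      # least-privilege grants
    risk_score: float = 0.0                            # fed by behavioral analytics
    last_used: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_allowed(self, scope: str, risk_threshold: float = 0.7) -> bool:
        """Risk-based, least-privilege access decision: the scope must be
        explicitly granted AND the agent's current risk must be acceptable."""
        return scope in self.scopes and self.risk_score < risk_threshold

    def is_orphaned(self, max_idle: timedelta = timedelta(days=30)) -> bool:
        """Lifecycle check: flag agents nobody has exercised recently."""
        return datetime.now(timezone.utc) - self.last_used > max_idle

agent = AgentIdentity("invoice-bot", owner="finance-ops", scopes={"erp:read"})
print(agent.is_allowed("erp:read"))    # True: scope granted, risk is low
print(agent.is_allowed("erp:write"))   # False: scope was never granted
agent.risk_score = 0.9                 # anomaly detected by monitoring
print(agent.is_allowed("erp:read"))    # False: risk-based control isolates it
```

The design choice worth noting is that nothing here is AI-specific: revoking a suspicious agent, expiring an orphaned one, or scoping its permissions uses the same machinery an IAM team already runs for human and service accounts.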

Gartner forecasts AI spending to grow 44% in 2026, reaching $47 trillion by 2029. Getting the governance architecture right now is not optional.

Read the full article on SecurityWeek