NVIDIA recently published a powerful framework: “The AI Kill Chain,” mapping how attacks against AI-powered applications unfold. It’s one of the clearest attempts yet to bring structure to an increasingly chaotic security frontier. The framework shows how adversaries move from reconnaissance and data poisoning to exploitation and command and control, giving security teams a common language for understanding AI-specific threats.
What makes this valuable is that it mirrors the maturity curve we saw in traditional cybersecurity. Once we learned to model how attackers think, we could design defenses that anticipate rather than react. But as AI systems evolve from passive models to autonomous agents, we’re facing something new: these agents carry credentials, access sensitive resources, and act on behalf of users – yet their behavior is far less predictable than any human’s. That’s why identity has to be the focus: not just what the agent can do, but who it’s acting as, and under whose authority.
The shift from systems to actors
In conventional architectures, systems process inputs. In AI-driven environments, they act.
AI agents query databases, send messages, trigger workflows, and sometimes make policy decisions. They are, in effect, new actors in the enterprise. Each one operates under an identity that carries credentials, permissions, and behavioral patterns.
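To make that concrete, here is a minimal sketch of what such an agent identity might carry – the field names and the `AgentIdentity` class are illustrative assumptions for this post, not any particular vendor’s schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AgentIdentity:
    """Illustrative record for an AI agent treated as a first-class actor."""
    agent_id: str                     # unique name for the agent
    acts_on_behalf_of: str            # the human or service whose authority it carries
    credential_ref: str               # pointer to a managed secret, never the secret itself
    scopes: list[str] = field(default_factory=list)  # permissions, ideally least-privilege
    last_access_review: Optional[datetime] = None    # unmonitored identities go stale here

    def unused_scopes(self, required: set[str]) -> set[str]:
        """Scopes the agent holds but its workload never needs – over-permissioning in one line."""
        return set(self.scopes) - required

# Example: an invoice-reconciliation agent that has quietly accumulated extra access
agent = AgentIdentity(
    agent_id="invoice-reconciler",
    acts_on_behalf_of="ap-team@example.com",
    credential_ref="vault://secrets/invoice-reconciler",
    scopes=["payments:read", "payments:approve", "hr:read"],
)
print(agent.unused_scopes({"payments:read", "payments:approve"}))  # {'hr:read'}
```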
That identity is what turns an AI system from a model into an agent. And just like human users or service accounts, those identities can be hijacked, over-permissioned, or left unmonitored. This changes how we interpret every phase of the kill chain.
Reconnaissance isn’t just about mapping systems. It’s about discovering which agents exist, what they can access, and who they represent.
Exploitation happens when an attacker manipulates an agent’s logic to perform a legitimate action with illegitimate intent.
Command and Control shifts from remote access to delegated control, using the agent’s trusted identity to operate invisibly inside the environment.
The moment we view AI attacks through the lens of identity, the problem changes. Instead of asking “How do we protect the model?” we should be asking “How do we govern who the model acts as?”
A scenario in motion
Imagine an AI assistant in finance built to reconcile invoices. It’s integrated with payment systems and given credentials to approve small transactions automatically. A malicious prompt subtly changes the logic that defines “small,” and the agent begins approving larger transfers – all within its allowed permissions.
No anomaly detection flags it, because nothing technically breaks policy. The breach doesn’t come from model failure. It comes from identity misuse. The system was doing exactly what it was allowed to do, but under the wrong judgment.
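One reason nothing technically breaks policy is that the definition of “small” lives inside the model’s reasoning rather than in an identity-aware control. Here’s a minimal sketch of moving that judgment into a deterministic guard bound to the agent’s identity – the `approve_payment` function and the `AGENT_LIMITS` table are hypothetical, purely to illustrate the idea:

```python
# Hypothetical guard layer: the approval cap is bound to the agent's identity,
# not to whatever the model currently believes "small" means.
AGENT_LIMITS = {
    "invoice-reconciler": 500.00,  # per-agent cap set by policy, not by prompts
}

def approve_payment(agent_id: str, on_behalf_of: str, amount: float) -> bool:
    """Deterministic check the agent cannot be prompted out of."""
    limit = AGENT_LIMITS.get(agent_id, 0.0)
    if amount > limit:
        # Escalate to the delegating human instead of silently approving.
        print(f"BLOCKED: {agent_id} (for {on_behalf_of}) requested {amount:.2f}, cap is {limit:.2f}")
        return False
    print(f"APPROVED: {agent_id} (for {on_behalf_of}) paid {amount:.2f}")
    return True

# The manipulated agent can redefine "small" for itself,
# but the identity-bound cap still stops the transfer.
approve_payment("invoice-reconciler", "ap-team@example.com", 12_000.00)  # blocked
approve_payment("invoice-reconciler", "ap-team@example.com", 250.00)     # approved
```

The point isn’t the specific cap; it’s that the boundary belongs to the identity layer, where it can be reviewed and audited, not to the agent’s prompt-shaped judgment.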
This is where identity becomes the connective tissue across the AI Kill Chain. Each phase (reconnaissance, exploitation, and control) depends on visibility into who or what is acting, under whose authority, and within what boundaries.
Turning the kill chain into a trust chain
Identity security brings disciplines that map directly to AI defense: least privilege, continuous authentication, behavioral baselines, and traceable attribution. Together, they turn reactive controls into proactive assurance. I’d call this a trust chain for AI.
In that chain:
- Every action carries context: who initiated it, on whose behalf, and within what scope
- Every deviation from expected behavior can be observed, audited, and governed, as sketched below
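As a sketch of what that context could look like in practice – the `ActionEnvelope` fields and the `record_action` helper are assumptions for illustration, not a specific product’s API – every agent action can travel with a small envelope recording who initiated it, on whose behalf, within what scope, and when:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ActionEnvelope:
    """Illustrative per-action context for an AI agent in the trust chain."""
    initiator: str       # the agent identity performing the action
    on_behalf_of: str    # the human or service whose authority it exercises
    scope: str           # the permission boundary the action claims
    action: str          # what was attempted
    timestamp: str       # when, for traceable attribution

def record_action(initiator: str, on_behalf_of: str, scope: str, action: str,
                  allowed_scopes: set[str]) -> ActionEnvelope:
    """Build the envelope and flag any deviation from the expected scopes."""
    envelope = ActionEnvelope(
        initiator=initiator,
        on_behalf_of=on_behalf_of,
        scope=scope,
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    status = "OK" if scope in allowed_scopes else "DEVIATION"
    # Deviations become observable, auditable, and governable events.
    print(status, json.dumps(asdict(envelope)))
    return envelope

record_action("invoice-reconciler", "ap-team@example.com",
              "payments:approve", "approve invoice #4417",
              allowed_scopes={"payments:read", "payments:approve"})
```

Whether this lands in an audit log, a SIEM, or an identity-governance platform matters less than the habit: no agent action without attribution.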
By connecting lifecycle-based models like the AI Kill Chain with identity-aware controls, we start to close the loop between how attacks unfold and who enables them to unfold.
Looking forward
Over time, identity will become the organizing layer for AI governance. Just as we once centralized access management for human users, we’ll soon do the same for AI agents. We’ll be defining, monitoring, and authenticating every digital actor in the enterprise.
The AI Kill Chain helps us see how adversaries move.
Identity tells us who they move through.
Bringing those two perspectives together is how we turn AI from an opaque system into a trustworthy one. Not by slowing innovation, but by making accountability scalable.
Want to see what this looks like in practice? Read our breakdown of GTG-1002, the first documented agentic cyber campaign – and what it signals for defenders.