When AI Agents Act, Identity…….
When AI Agents Act, Identity Becomes the Control Plane A product team deploys an AI agent to handle routine work across Jira, GitHub, SharePoint, and a ticketing system. It uses delegated credentials, reads documents, and calls tools to complete tasks. A month later, a single poisoned document causes the agent to pull secrets and send them to an external endpoint. The audit log shows “the user” performed the actions because the agent acted under the user’s token. The incident is not novel malware. It is identity failure in an agent-shaped wrapper. Between late 2025 and early 2026, regulators and national cyber authorities started describing autonomous AI agents as a distinct security problem, not just another application. NIST’s new public RFI frames agent systems as software that can plan and take actions affecting real systems, and asks for concrete security practices and failure cases from industry. (Federal Register) At the same time, FINRA put “AI agents” into its 2026 oversight lens, calling out autonomy, scope, auditability, and data sensitivity as supervisory and control problems for member firms. (FINRA) Gartner has put a number on the trajectory: by 2028, 25% of enterprise breaches will be traced to AI agent abuse. That prediction reflects a shift in where attackers see opportunity. (gartner.com) Enterprises have spent a decade modernizing identity programs around humans, service accounts, and APIs. AI agents change the shape of “who did what” because they: The UK NCSC’s December 2025 guidance makes the core point directly: prompt injection is not analogous to SQL injection, and it may remain a residual risk that cannot be fully eliminated with a single mitigation. That pushes enterprise strategy away from perfect prevention and toward containment, privilege reduction, and operational controls. (NCSC) Why Agents Are Not Just Service Accounts Security teams may assume existing non-human identity controls apply. They do not fully transfer. Service accounts run fixed, predictable code. Agents run probabilistic models that decide what to do based on inputs, including potentially malicious inputs. A service account that reads a poisoned document does exactly what its code specifies. An agent that reads the same document might follow instructions embedded in it. The difference: agents can be manipulated through their inputs in ways that service accounts cannot. How the Mechanism Works 1. Agents collapse “identity” and “automation” into one moving target Most agents are orchestration layers around a model that can decide which tools to call. The identity risk comes from how agents authenticate and how downstream systems attribute actions: 2. Indirect prompt injection turns normal inputs into executable instructions Agents must read information to work. If the system cannot reliably separate “data to summarize” from “instructions to follow,” untrusted content can steer behavior. NCSC’s point is structural: language models do not have a native, enforceable boundary between data and instructions the way a parameterized SQL query does. That is why “filter harder” is not a complete answer. (NCSC) A practical consequence: any agent that reads external or semi-trusted content (docs, tickets, wikis, emails, web pages) has a standing exposure channel. 3. Tool protocols like MCP widen the blast radius by design The Model Context Protocol (MCP) pattern connects models to tools and data sources. It is powerful, but it also concentrates risk: an agent reads tool metadata, chooses a tool, and invokes it. Real-world disclosures in the MCP ecosystem have repeatedly mapped back to classic security failures: lack of authentication, excessive privilege, weak isolation, and unsafe input handling. One example is CVE-2025-49596 (MCP Inspector), where a lack of authentication between the inspector client and proxy could lead to remote code execution, according to NVD. (NVD) Separately, AuthZed’s timeline write-up shows that MCP server incidents often look like “same old security fundamentals,” but in a new interface where the agent’s reasoning decides what gets executed. (AuthZed) 4. Agent supply chain risk is identity risk Agent distribution and “prompt hub” patterns create a supply chain problem: you can import an agent configuration that quietly routes traffic through attacker infrastructure. Noma Security’s AgentSmith disclosure illustrates this clearly: a malicious proxy configuration could allow interception of prompts and sensitive data, including API keys, if users adopt or run the agent. (Noma Security) 5. Attack speed changes response requirements Unit 42 demonstrated an agentic attack framework where a simulated ransomware attack chain, from initial compromise to exfiltration, took 25 minutes. They reported a 100x speed increase using AI across the chain. (Palo Alto Networks) To put that in operational terms: a typical SOC alert-to-triage cycle can exceed 25 minutes. If the entire attack completes before triage begins, detection effectively becomes forensics. What This Looks Like from the SOC Consider what a security operations team actually sees when an agent-based incident unfolds: The delay between “something is wrong” and “we understand what happened” is where damage compounds. Now Scale It The opening scenario described one agent, one user, one poisoned document. Now consider a more realistic enterprise picture: How many agents read it? Which ones act on it? Which credentials are exposed? Which downstream systems are affected? The attack surface is not one agent. It is the graph of agents, permissions, and shared data sources. A single poisoned input can fan out across that graph faster than any human review process can catch it. Analysis – Why This Matters Now Regulators are converging on a shared premise: if an agent can take actions, then “governance” is not just model policy. It is identity, authorization, logging, and supervision. The regulatory message is consistent: if you deploy agents that can act, you own the consequences of those actions, including the ones you did not authorize. Implications for Enterprises Identity and access management Tooling and platform architecture Monitoring, audit, and response Risks and Open Questions Further Reading

