G360 Technologies

Author name: Josh

The Engineering Room

AI Agents Broke the Old Security Model. AI-SPM Is the First Attempt at Catching Up

AI Agents Broke the Old Security Model. AI-SPM Is the First Attempt at Catching Up

A workflow agent is deployed to summarize inbound emails, pull relevant policy snippets from an internal knowledge base, and open a ticket when it detects a compliance issue. It works well until an external email includes hidden instructions that influence the agent’s tool calls. The model did not change. The agent’s access, tools, and data paths did.

Enterprise AI agents are shifting risk from the model layer to the system layer: tools, identities, data connectors, orchestration, and runtime controls. In response, vendors are shipping AI Security Posture Management (AI-SPM) capabilities that aim to inventory agent architectures and prioritize risk based on how agents can act and what they can reach. (Microsoft)

Agents are not just chat interfaces. They are software systems that combine a model, an orchestration framework, tool integrations, data retrieval pipelines, and an execution environment. In practice, a single “agent” is closer to a mini application than a standalone model endpoint.

This shift is visible in vendor security guidance and platform releases. Microsoft’s Security blog frames agent posture as comprehensive visibility into “all AI assets” and the context around what each agent can do and what it is connected to. (Microsoft) Microsoft Defender for Cloud has also expanded AI-SPM coverage to include GCP Vertex AI, signaling multi-cloud posture expectations rather than single-platform governance. (Microsoft Learn) At the same time, cloud platforms are standardizing agent runtime building blocks. AWS documentation describes Amazon Bedrock AgentCore as modular services such as runtime, memory, gateway, and observability, with OpenTelemetry and CloudWatch-based tracing and dashboards. (AWS Documentation) On the governance side, the Cloud Security Alliance’s MAESTRO framework explicitly treats agentic systems as multi-layer environments where cross-layer interactions drive risk propagation. (Cloud Security Alliance)

How the Mechanism Works

AI-SPM is best understood as a posture layer that tries to answer four questions continuously. Technically, many of these risks become visible only when you treat the agent as an execution path. Observability tooling for agent runtimes is increasingly built around tracing tool calls, state transitions, and execution metrics. AWS AgentCore observability documentation describes dashboards and traces across AgentCore resources and integration with OpenTelemetry. (AWS Documentation)

Finally, tool standardization is tightening. The Model Context Protocol (MCP) specification added OAuth-aligned authorization requirements, including explicit resource indicators (RFC 8707), which specify exactly which backend resource a token can access. The goal is to reduce token misuse and confused deputy-style failures when connecting clients to tool servers. (Auth0)
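To make the resource-indicator idea concrete, here is a minimal sketch of an OAuth client-credentials token request that pins the token to a single MCP tool server, per RFC 8707. The authorization server, client identifiers, scopes, and server URL are hypothetical placeholders, not part of any vendor’s documented setup.

```python
import requests

# Hypothetical authorization server and MCP tool server (placeholders).
TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"
MCP_SERVER = "https://mcp-tools.example.com/crm"

def fetch_scoped_token(client_id: str, client_secret: str) -> str:
    """Request a token that is only valid for one backend resource.

    The RFC 8707 `resource` parameter tells the authorization server which
    resource server the token is intended for, so a token minted for the CRM
    tool server cannot be replayed against other APIs.
    """
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "tickets.read tickets.write",
            "resource": MCP_SERVER,  # explicit resource indicator (RFC 8707)
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]
```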
Analysis: Why This Matters Now

The underlying change is that “AI risk” is less about what the model might say and more about what the system might do. Consider a multi-agent expense workflow. A coordinator agent receives requests, a validation agent checks policy compliance, and an execution agent submits approved payments to the finance system. Each agent has narrow permissions. But if the coordinator is compromised through indirect prompt injection (say, a malicious invoice PDF with hidden instructions), it can route fraudulent requests to the execution agent with fabricated approval flags.

No single agent exceeded its permissions. The system did exactly what it was told. The breach happened in the orchestration logic, not the model.

Agent deployments turn natural language into action, and that action is mediated by the tools, identities, and data paths described above. This shifts security ownership. Model governance teams can no longer carry agent risk alone. Platform engineering owns runtimes and identity integration, security engineering owns detection and response hooks, and governance teams own evidence and control design.

It also changes what “posture” means. Traditional CSPM and identity posture focus on static resources and permissions. Agents introduce dynamic execution: the same permission set becomes higher risk when paired with autonomy and untrusted inputs, especially when tool chains span multiple systems.

What This Looks Like in Practice

A security team opens their AI-SPM dashboard on Monday morning and sees a flagged agent. The finding is not that the agent has a vulnerability. The finding is that this combination of autonomy, tool access, and external input exposure creates a high-value target. The remediation options are architectural: add an approval workflow for refunds, restrict external input processing, or tighten retrieval-time access controls.

This is the shift AI-SPM represents. Risk is not a CVE to patch. Risk is a configuration and capability profile to govern.

Implications for Enterprises

Operational implications

Technical implications

Risks and Open Questions

AI-SPM addresses visibility gaps, but several failure modes remain structurally unsolved.

Further Reading

The Governance Room

From Disclosure to Infrastructure: How Global AI Regulation Is Turning Compliance Into System Design

From Disclosure to Infrastructure: How Global AI Regulation Is Turning Compliance Into System Design

An enterprise deploys an AI system for credit eligibility decisions. The privacy policy discloses automated decision-making and references human review on request. During an audit, regulators do not ask for the policy. They ask for logs, override records, retention settings, risk assessments, and evidence that human intervention works at runtime. The system passes disclosure review. It fails infrastructure review.

Between 2025 and 2026, global AI and privacy regulation shifted enforcement away from policies and notices toward technical controls embedded in systems. Regulators increasingly evaluate whether compliance mechanisms actually operate inside production infrastructure. Disclosure alone no longer serves as sufficient evidence.

Across jurisdictions, privacy and AI laws now share a common enforcement logic: accountability must be demonstrable through system behavior. This shift appears in the EU AI Act, GDPR enforcement patterns, California’s CPRA and ADMT rules, India’s DPDP Act, Australia’s Privacy Act reforms, UK data law updates, and FTC enforcement practice.

Earlier regulatory models emphasized transparency through documentation. The current generation focuses on verifiable controls: logging, retention, access enforcement, consent transaction records, risk assessments, and post-deployment monitoring. In multiple jurisdictions, audits and inquiries are focusing on how AI systems are built, operated, and governed over time.

Then Versus Now: The Same Question, Different Answers

2020: “How do you handle data subject access requests?” Acceptable answer: “Our privacy policy explains the process. Customers email our compliance team, and we respond within 30 days.”

2026: “How do you handle data subject access requests?” Expected answer: “Requests are logged in our consent management system with timestamps. Automated retrieval pulls data from three production databases and two ML training pipelines. Retention rules auto-delete after the statutory period. Here are the audit logs from the last 50 requests, including response times and any exceptions flagged for manual review.”

The question is the same. The evidence threshold is not.

How the Mechanism Works

Regulatory requirements increasingly map to infrastructure features rather than abstract obligations.

Logging and traceability

High-risk AI systems under the EU AI Act must automatically log events, retain records for defined periods, and make logs audit-ready. Similar expectations appear in California ADMT rules, Australia’s automated decision-making framework, and India’s consent manager requirements. Logs must capture inputs, outputs, timestamps, system versions, and human interventions.

Data protection by design

GDPR Articles 25 and 32 require privacy and security controls embedded at design time: encryption, access controls, data minimization, pseudonymization or tokenization, and documented testing. Enforcement increasingly examines whether these controls are implemented and effective, not merely described.

Risk assessment as a system process

DPIAs under GDPR, AI Act risk management files, California CPRA assessments, and FTC expectations all require structured risk identification, mitigation, and documentation. These are no longer static documents. They tie to deployment decisions, monitoring, and change management.
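As a concrete illustration of the logging expectation above, here is a minimal sketch of the kind of structured record an automated decision could emit, capturing the fields regulators increasingly ask for: inputs, outputs, timestamps, system version, and any human intervention. The field names and the storage call are illustrative assumptions, not drawn from any specific statute or product.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogRecord:
    """One audit-ready record per automated decision (illustrative schema)."""
    request_id: str
    system_version: str          # model + pipeline version that produced the decision
    inputs: dict                 # features or documents considered
    output: str                  # the automated decision itself
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_override: dict | None = None   # reviewer id, action, and reason, if any

def log_decision(record: DecisionLogRecord) -> None:
    # In practice this would go to an append-only, retention-managed store.
    print(json.dumps(asdict(record)))

log_decision(DecisionLogRecord(
    request_id="req-1042",
    system_version="credit-eligibility-model:3.2.1",
    inputs={"applicant_id": "a-889", "bureau_score_band": "B"},
    output="refer_to_human_review",
    human_override={"reviewer": "analyst-17", "action": "approved", "reason": "income verified"},
))
```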
Human oversight at runtime

Multiple regimes require meaningful human review, override capability, and appeal mechanisms. Auditors evaluate reviewer identity, authority, training, and logged intervention actions.

Post-market monitoring and incident reporting

The EU AI Act mandates continuous performance monitoring and defined incident reporting timelines. FTC enforcement emphasizes ongoing validation, bias testing, and corrective action. Compliance extends beyond launch into sustained operation.

What an Infrastructure Failure Looks Like

Hypothetical scenario for illustration: A multinational retailer uses an AI system to flag potentially fraudulent returns. The system has been in production for two years. Documentation is thorough: a DPIA on file, a privacy notice explaining automated decision-making, and a stated policy that customers can request human review of any flag.

A regulator opens an inquiry after consumer complaints. The retailer produces its documentation confidently. Then the auditors begin asking questions.

The retailer discovers that “human review” meant a store manager glancing at a screen and clicking approve. No structured logging. No override records. No way to demonstrate the review was meaningful. The request routing system existed in the privacy notice but had never been built. The DPIA was accurate when written. The system drifted. No monitoring caught it. The documentation said one thing. The infrastructure did another.

Audit-Style Questions Enterprises Should Be Prepared to Answer

Illustrative examples of evidence requests that align with the control patterns described above.

On logging:

On human oversight:

On data minimization:

On consent and opt-out:

On incident response:

Analysis

This shift changes how compliance is proven. Regulators increasingly test technical truth: whether systems behave as stated when examined through logs, controls, and operational evidence. Disclosure remains necessary but no longer decisive. A system claiming opt-out, human review, or data minimization must demonstrate those capabilities through enforceable controls. Inconsistent implementation is now a compliance failure, not a documentation gap.

The cross-jurisdictional convergence is notable. Despite different legal structures, the same control patterns recur. Logging, minimization, risk assessment, and oversight are becoming baseline expectations.

Implications for Enterprises

Architecture decisions. AI systems must be designed with logging, access control, retention, and override capabilities as core components. Retrofitting after deployment is increasingly risky.

Operational workflows. Compliance evidence now lives in system outputs, audit trails, and monitoring dashboards. Legal, security, and engineering teams must coordinate on shared control ownership.

Governance and tooling. Model inventories, risk registers, consent systems, and monitoring pipelines are becoming core infrastructure. Manual processes do not scale.

Vendor and third-party management. Processor and vendor contracts are expected to mirror infrastructure-level safeguards. Enterprises remain accountable for outsourced AI capabilities.

Risks and Open Questions

Enforcement coordination remains uneven across regulators, raising the risk of overlapping investigations for the same incident. Mutual recognition of compliance assessments across jurisdictions is limited. Organizations operating globally face uncertainty over how many times systems must be audited and under which standards. Another open question is proportionality.
Smaller or lower-risk deployments may struggle to interpret how deeply these infrastructure expectations apply. Guidance continues to evolve.

Where This Is Heading

One plausible direction is compliance as code: regulatory requirements expressed not as policy documents but as automated controls, continuous monitoring, and machine-readable audit trails. Early indicators point this way. The EU AI Act’s logging requirements assume systems can self-report. Consent management platforms are evolving toward real-time enforcement. Risk assessments are being linked to CI/CD pipelines.
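A minimal sketch of what “compliance as code” can look like in practice: a CI check that blocks deployment when a system’s configuration is missing the controls described above. The configuration keys and thresholds are invented for illustration; real checks would be derived from an organization’s own control catalog.

```python
# Illustrative CI gate: fail the pipeline if required controls are not configured.
REQUIRED_CONTROLS = {
    "decision_logging_enabled": True,
    "log_retention_days_min": 180,
    "human_override_path_defined": True,
}

def check_compliance(deploy_config: dict) -> list[str]:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    if not deploy_config.get("decision_logging_enabled", False):
        violations.append("decision logging is not enabled")
    if deploy_config.get("log_retention_days", 0) < REQUIRED_CONTROLS["log_retention_days_min"]:
        violations.append("log retention is below the required minimum")
    if not deploy_config.get("human_override_path_defined", False):
        violations.append("no human override path is defined")
    return violations

if __name__ == "__main__":
    config = {"decision_logging_enabled": True, "log_retention_days": 90}
    problems = check_compliance(config)
    if problems:
        raise SystemExit("compliance gate failed: " + "; ".join(problems))
```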

The Operations Room

Enterprise GenAI Pilot Purgatory: Why the Demo Works and the Rollout Doesn’t

Enterprise GenAI Pilot Purgatory: Why the Demo Works and the Rollout Doesn’t

A financial services team demos a GenAI assistant that summarizes customer cases flawlessly. The pilot uses a curated dataset of 200 cases. Leadership is impressed. The rollout expands. Two weeks in, a supervisor catches the assistant inventing a detail: a policy exception that never existed, stated with complete confidence. Word spreads. Within a month, supervisors are spot-checking every summary. The time savings vanish. Adoption craters. At the next steering committee, the project gets labeled “promising, but risky,” which in practice means: shelved.

This is not a story about one failed pilot. It is the modal outcome. Across late 2025 and early 2026 research, a consistent pattern emerges: enterprises are running many GenAI pilots, but only a small fraction reach sustained production value. MIT’s Project NANDA report frames this as a “GenAI divide,” where most initiatives produce no measurable business impact while a small minority do. (MLQ)

Model capability does not explain the gap. The recurring failure modes are operational and organizational: data readiness, workflow integration, governance controls, cost visibility, and measurement discipline. The pilots work. The production systems do not.

Context: The Numbers Behind the Pattern

Several large studies and industry analyses published across 2025 and early 2026 converge on high drop-off rates between proof of concept and broad deployment. The combined picture is not that enterprises are failing to try. It is that pilots are colliding with production realities, repeatedly, and often in the same ways.

How Pilots Break: Five Failure Mechanisms

Enterprise GenAI pilots often look like software delivery but behave more like socio-technical systems: model behavior, data pipelines, user trust, and governance controls all interact in ways that only surface at scale. In brief:

Verification overhead erases gains.
Production data breaks assumptions.
Integration complexity compounds.
Governance arrives late.
Costs exceed forecasts.

1. The trust tax: When checking the AI costs more than doing the work

When a system produces an incorrect output with high confidence, users respond rationally: they add checks. A summary gets reviewed. An extraction gets verified against the source. Over time, this verification work becomes a hidden operating cost.

The math is simple but often ignored. If users must validate 80% of outputs, and validation takes 60% as long as doing the task manually, the net productivity gain is marginal or negative. The pilot showed 10x speed. Production delivers 1.2x and new liability questions. In practice, enterprises often under-plan for verification workflows, including sampling rates, escalation paths, and accountability for sign-off.
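A back-of-the-envelope version of that math, using the figures in the paragraph above: an assistant that is 10x faster than manual work, 80% of outputs re-checked, and verification costing 60% of the manual task time. The numbers are illustrative assumptions, but they show how quickly a headline speedup collapses once verification is priced in.

```python
def effective_speedup(raw_speedup: float, verify_rate: float, verify_cost: float) -> float:
    """Net speedup once human verification time is included.

    raw_speedup : how much faster the assistant is than doing the task manually
    verify_rate : fraction of outputs that humans re-check
    verify_cost : verification time as a fraction of the manual task time
    """
    manual_time = 1.0                                   # normalize the manual task to 1 unit
    ai_time = manual_time / raw_speedup                 # time the assistant takes
    checking_time = verify_rate * verify_cost * manual_time
    return manual_time / (ai_time + checking_time)

# Pilot story: 10x faster. Production story: most outputs get re-checked.
print(round(effective_speedup(raw_speedup=10, verify_rate=0.8, verify_cost=0.6), 2))  # ~1.72
# If nearly every output is re-checked and verification takes as long as the task
# itself, the "gain" goes negative relative to just doing the work manually.
print(round(effective_speedup(raw_speedup=10, verify_rate=1.0, verify_cost=1.0), 2))  # ~0.91
```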
2. The data cliff: When production data looks nothing like the pilot

Pilots frequently rely on curated datasets, simplified access paths, and stable assumptions. Production strips those away. Gartner’s data readiness warning captures this directly: projects without AI-ready data foundations are disproportionately likely to be abandoned. (gartner.com) The pilot worked because someone cleaned the data by hand. Production has no such luxury.

3. The integration trap: When “add more users” means “connect more systems”

Scaling is rarely just adding seats. It is connecting to more systems, where each system brings its own auth model, data contracts, latency constraints, and change cycles. As integrations multiply, brittle glue code and one-off mappings become reliability risks. This is where many pilots stall: the demo works in isolation, but the end-to-end workflow fails when the CRM returns a null field, the document store times out, or the permissions model differs between regions.

4. The governance gate: When security asks questions the pilot never answered

Governance and security teams typically arrive late in the process and ask the questions that pilots postponed. When these questions are answered late, or poorly, the cheapest option is often “pause the rollout.” Projects that treated governance as a final checkbox discover it is actually a design constraint.

5. The budget shock: When production costs dwarf pilot costs

As pilots move toward production, enterprises add the costs they skipped at the start: monitoring, evaluation, retraining or prompt/version control, integration hardening, governance operations, and user enablement. An IDC survey of large enterprises, summarized in a January 2026 analysis, reported that most organizations saw costs exceed expectations and many lacked visibility into where costs originate. (Maiven – AI Factory for Enterprise) The pilot budget assumed inference costs. The production budget requires an operating model.

What Success Looks Like: A Counter-Example

Consider a contrasting scenario. A logistics company pilots a GenAI system to classify and route supplier inquiries. Before expanding, the team invests in the operating basics. The pilot-to-production transition still surfaces issues. But they are identified through structured monitoring, not user complaints. The system reaches steady-state production in four months rather than stalling in indefinite “extended pilot.” The difference is not the model. It is the operating infrastructure around the model.

Analysis: Why This Is Surfacing Now

The pilot-to-production gap is becoming visible because three dynamics are converging.

The novelty phase is over. Executives now have enough pilots to see patterns. They can compare dozens of initiatives and recognize that impressive demos do not equal durable adoption. Organizations are starting to ask why their fourth and fifth pilots look like their first. (Harvard Business Review)

Agentic approaches raise the stakes. As enterprises move from copilots (which suggest) to agents (which act), the required controls expand. Least privilege, change management, approval workflows, and auditability become central design constraints, not nice-to-haves. Gartner’s forecast that a large share of agentic AI projects will be canceled by 2027 explicitly cites cost, unclear business value, and inadequate risk controls. (gartner.com)

Measurement pressure is increasing. Forrester reports many firms have AI in production but fewer measure financial impact. That mismatch forces a reckoning in budget cycles: what did we actually get for this spend? (Forrester)

This is less a verdict on GenAI capability and more a forcing function for enterprise operating models. The technology works. The organizational machinery to deploy it reliably does not, yet.

Implications for Enterprises

Operational

Technical

Risks and Open Questions

Further Reading

The Operations Room

Agentic AI in Production: The System Worked. The Outcome Was Wrong.

Agentic AI in Production: The System Worked. The Outcome Was Wrong.

An AI system flags a billing anomaly in a customer account. No human reviews it. The system corrects the record, triggers a payment adjustment, updates the ledger, and notifies the customer. All actions are technically correct. One input field was stale. Three days later, the customer calls. The adjustment reversed a legitimate charge. Finance spends four hours tracing the discrepancy across three systems. The ledger has already reconciled. Downstream reports have already been sent to leadership. The agent, meanwhile, continues operating normally. Nothing in its logs indicates a failure. The system did exactly what it was designed to do. The outcome was still wrong.

Agentic AI no longer advises. It acts. Roughly two-thirds of enterprises now run agentic pilots, but fewer than one in eight have reached production scale. The bottleneck is not model capability. It is governance and operational readiness.

Between 2024 and 2026, enterprises shifted from advisory AI tools to systems capable of executing multi-step workflows. Early deployments framed agents as copilots. Current systems increasingly decompose goals, plan actions, and modify system state without human initiation. The pilot-to-production gap reflects architectural, data, and governance limitations rather than failures in reasoning or planning capability. This transition reframes AI risk. Traditional AI failures were informational. Agentic failures are transactional.

How the Mechanism Works

Every layer below is a potential failure point. Most pilots enforce some. Production requires all. This is why pilots feel fine: partial coverage works when volume is low and humans backstop every edge case. At scale, the gaps compound.

Data ingestion and context assembly. Agents pull real-time data from multiple enterprise systems. Research shows production agents integrate an average of eight or more sources. Data freshness, schema consistency, lineage, and access context are prerequisites. Errors at this layer propagate forward.

Reasoning and planning. Agents break objectives into sub-tasks using multi-step reasoning, retrieval-augmented memory, and dependency graphs. This allows parallel execution and failure handling but increases exposure to compounding error when upstream inputs are flawed.

Governance checkpoints. Before acting, agents pass through policy checks, confidence thresholds, and risk constraints. Low-confidence or high-impact actions are escalated. High-volume, low-risk actions proceed autonomously.

Human oversight models. Enterprises deploy agents under three patterns: human-in-control for high-stakes actions, human-in-the-loop for mixed risk, and limited autonomy where humans intervene only on anomalies.

Execution and integration. Actions are performed through APIs, webhooks, and delegated credentials. Mature implementations enforce rate limits, scoped permissions, and reversible operations to contain blast radius.

Monitoring and feedback. Systems log every decision path, monitor behavioral drift, classify failure signatures, and feed outcomes back into future decision thresholds.

The mechanism is reliable only when every layer is enforced. Missing controls at any point convert reasoning errors into system changes.
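To illustrate the governance-checkpoint layer, here is a minimal policy-as-code sketch: a pre-action check that lets low-risk actions proceed, escalates low-confidence or high-impact ones, and blocks anything outside the agent’s scope. The thresholds, action fields, and scope list are invented for illustration; a real deployment would load them from a governed policy store.

```python
from dataclasses import dataclass

# Illustrative policy values; in practice these come from a governed policy store.
CONFIDENCE_FLOOR = 0.85
DOLLAR_LIMIT = 5_000.00
ALLOWED_ACTIONS = {"adjust_route", "select_carrier", "issue_refund"}

@dataclass
class ProposedAction:
    name: str
    confidence: float   # agent's self-reported confidence in the plan step
    amount: float       # financial impact of the action, in dollars
    reversible: bool    # whether the action can be rolled back automatically

def checkpoint(action: ProposedAction) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed agent action."""
    if action.name not in ALLOWED_ACTIONS:
        return "deny"                      # outside the agent's authority boundary
    if action.amount > DOLLAR_LIMIT:
        return "escalate"                  # high-impact actions need human approval
    if action.confidence < CONFIDENCE_FLOOR:
        return "escalate"                  # low confidence defers to a human
    if not action.reversible:
        return "escalate"                  # irreversible actions get extra scrutiny
    return "allow"

print(checkpoint(ProposedAction("adjust_route", confidence=0.93, amount=420.0, reversible=True)))    # allow
print(checkpoint(ProposedAction("issue_refund", confidence=0.91, amount=12_000.0, reversible=True)))  # escalate
```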
Analysis: Why This Matters Now

Agentic AI introduces agency risk. The system no longer only informs decisions. It executes them. This creates three structural shifts.

First, data governance priorities change. Privacy remains necessary, but freshness and integrity become operational requirements. Acting on correct but outdated data produces valid actions with harmful outcomes.

Second, reliability engineering changes. Traditional systems assume deterministic flows. Agentic systems introduce nondeterministic but valid paths to a goal. Monitoring must track intent alignment and loop prevention, not just uptime.

Third, human oversight models evolve. Human-in-the-loop review does not scale when agents operate continuously. Enterprises are moving toward human-on-the-loop supervision, where humans manage exceptions, thresholds, and shutdowns rather than individual actions.

These shifts explain why pilots succeed while production deployments stall. Pilots tolerate manual review, brittle integrations, and informal governance. Production systems cannot.

What This Looks Like When It Works

The pattern that succeeds in production separates volume from judgment. A logistics company deploys an agent to manage carrier selection and shipment routing. The agent operates continuously, processing thousands of decisions per day. Each action is scoped: the agent can select carriers and adjust routes within cost thresholds but cannot renegotiate contracts or override safety holds. Governance is embedded. Confidence below a set threshold triggers escalation. Actions above a dollar limit require human approval. Every decision is logged with full context, and weekly reviews sample flagged cases for drift. The agent handles volume. Humans handle judgment. Neither is asked to do the other’s job.

Implications for Enterprises

Operational architecture. Integration layers become core infrastructure. Point-to-point connectors fail under scale. Event-driven architectures outperform polling-based designs in both cost and reliability.

Governance design. Policies must be enforced as code, not documents. Authority boundaries, data access scopes, confidence thresholds, and escalation logic must be explicit and machine-enforced.

Risk management. Enterprises must implement staged autonomy, rollback mechanisms, scoped kill switches, and continuous drift detection. These controls enable autonomy rather than limiting it.

Organizational roles. Ownership shifts from model teams to platform, data, and governance functions. Managing agent fleets becomes an ongoing operational responsibility, not a deployment milestone.

Vendor strategy. Embedded agent platforms gain advantage because governance, integration, and observability are native. This is visible in production deployments from Salesforce, Oracle, ServiceNow, and Ramp.

Risks and Open Questions

Responsibility attribution. When agents execute compliant individual actions that collectively cause harm, accountability remains unclear across developers, operators, and policy owners.

Escalation design. Detecting when an agent should stop and defer remains an open engineering challenge. Meta-cognitive uncertainty detection is still immature.

Multi-agent failure tracing. In orchestrated systems, errors propagate across agents. Consider: Agent A flags an invoice discrepancy. Agent B, optimizing cash flow, delays payment. Agent C, managing vendor relationships, issues a goodwill credit. Each followed policy. The combined result is a cash outflow, a confused vendor, and an unresolved invoice. No single agent failed. Root-cause analysis becomes significantly harder.

Cost control. Integration overhead, monitoring, and governance often exceed model inference costs.
Many pilots underestimate this operational load.

Further Reading

McKinsey QuantumBlack
Deloitte Tech Trends 2026
Gartner agentic AI forecasts
Process Excellence Network
Databricks glossary on agentic AI
Oracle Fusion AI Agent documentation
Salesforce Agentforce architecture
ServiceNow NowAssist technical briefings

The Threat Room

When AI Agents Act, Identity Becomes the Control Plane

When AI Agents Act, Identity Becomes the Control Plane

A product team deploys an AI agent to handle routine work across Jira, GitHub, SharePoint, and a ticketing system. It uses delegated credentials, reads documents, and calls tools to complete tasks. A month later, a single poisoned document causes the agent to pull secrets and send them to an external endpoint. The audit log shows “the user” performed the actions because the agent acted under the user’s token. The incident is not novel malware. It is identity failure in an agent-shaped wrapper.

Between late 2025 and early 2026, regulators and national cyber authorities started describing autonomous AI agents as a distinct security problem, not just another application. NIST’s new public RFI frames agent systems as software that can plan and take actions affecting real systems, and asks for concrete security practices and failure cases from industry. (Federal Register) At the same time, FINRA put “AI agents” into its 2026 oversight lens, calling out autonomy, scope, auditability, and data sensitivity as supervisory and control problems for member firms. (FINRA) Gartner has put a number on the trajectory: by 2028, 25% of enterprise breaches will be traced to AI agent abuse. That prediction reflects a shift in where attackers see opportunity. (gartner.com)

Enterprises have spent a decade modernizing identity programs around humans, service accounts, and APIs. AI agents change the shape of “who did what.” The UK NCSC’s December 2025 guidance makes the core point directly: prompt injection is not analogous to SQL injection, and it may remain a residual risk that cannot be fully eliminated with a single mitigation. That pushes enterprise strategy away from perfect prevention and toward containment, privilege reduction, and operational controls. (NCSC)

Why Agents Are Not Just Service Accounts

Security teams may assume existing non-human identity controls apply. They do not fully transfer. Service accounts run fixed, predictable code. Agents run probabilistic models that decide what to do based on inputs, including potentially malicious inputs. A service account that reads a poisoned document does exactly what its code specifies. An agent that reads the same document might follow instructions embedded in it. The difference: agents can be manipulated through their inputs in ways that service accounts cannot.

How the Mechanism Works

1. Agents collapse “identity” and “automation” into one moving target

Most agents are orchestration layers around a model that can decide which tools to call. The identity risk comes from how agents authenticate and how downstream systems attribute actions.

2. Indirect prompt injection turns normal inputs into executable instructions

Agents must read information to work. If the system cannot reliably separate “data to summarize” from “instructions to follow,” untrusted content can steer behavior. NCSC’s point is structural: language models do not have a native, enforceable boundary between data and instructions the way a parameterized SQL query does. That is why “filter harder” is not a complete answer. (NCSC) A practical consequence: any agent that reads external or semi-trusted content (docs, tickets, wikis, emails, web pages) has a standing exposure channel.
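Since perfect prevention is off the table, containment and attribution do the heavy lifting. Below is a minimal sketch of a tool-call gateway that enforces a per-agent tool allow-list and records both the agent identity and the delegating user on every call, so downstream audit logs no longer show only “the user.” The agent names, tool registry, and logging sink are illustrative assumptions, not a specific product’s API.

```python
import json
from datetime import datetime, timezone

# Illustrative allow-list: which tools each agent identity may invoke.
AGENT_TOOL_ALLOWLIST = {
    "agent://support-triage": {"search_tickets", "create_ticket"},
    "agent://doc-summarizer": {"read_document"},
}

class ToolCallDenied(Exception):
    pass

def call_tool(agent_id: str, on_behalf_of: str, tool: str, args: dict, registry: dict):
    """Gate and attribute a tool call made by an agent for a human principal."""
    allowed = AGENT_TOOL_ALLOWLIST.get(agent_id, set())
    decision = "allow" if tool in allowed else "deny"

    # Audit record names both principals, not just the delegating user.
    print(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "on_behalf_of": on_behalf_of,
        "tool": tool,
        "decision": decision,
    }))

    if decision == "deny":
        raise ToolCallDenied(f"{agent_id} is not permitted to call {tool}")
    return registry[tool](**args)

# Example: the summarizer agent can read documents, but a poisoned document
# instructing it to open tickets or exfiltrate secrets hits the deny path.
registry = {"read_document": lambda doc_id: f"contents of {doc_id}"}
call_tool("agent://doc-summarizer", "user:jsmith", "read_document", {"doc_id": "42"}, registry)
```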
3. Tool protocols like MCP widen the blast radius by design

The Model Context Protocol (MCP) pattern connects models to tools and data sources. It is powerful, but it also concentrates risk: an agent reads tool metadata, chooses a tool, and invokes it. Real-world disclosures in the MCP ecosystem have repeatedly mapped back to classic security failures: lack of authentication, excessive privilege, weak isolation, and unsafe input handling. One example is CVE-2025-49596 (MCP Inspector), where a lack of authentication between the inspector client and proxy could lead to remote code execution, according to NVD. (NVD) Separately, AuthZed’s timeline write-up shows that MCP server incidents often look like “same old security fundamentals,” but in a new interface where the agent’s reasoning decides what gets executed. (AuthZed)

4. Agent supply chain risk is identity risk

Agent distribution and “prompt hub” patterns create a supply chain problem: you can import an agent configuration that quietly routes traffic through attacker infrastructure. Noma Security’s AgentSmith disclosure illustrates this clearly: a malicious proxy configuration could allow interception of prompts and sensitive data, including API keys, if users adopt or run the agent. (Noma Security)

5. Attack speed changes response requirements

Unit 42 demonstrated an agentic attack framework where a simulated ransomware attack chain, from initial compromise to exfiltration, took 25 minutes. They reported a 100x speed increase using AI across the chain. (Palo Alto Networks) To put that in operational terms: a typical SOC alert-to-triage cycle can exceed 25 minutes. If the entire attack completes before triage begins, detection effectively becomes forensics.

What This Looks Like from the SOC

Consider what a security operations team actually sees when an agent-based incident unfolds. The delay between “something is wrong” and “we understand what happened” is where damage compounds.

Now Scale It

The opening scenario described one agent, one user, one poisoned document. Now consider a more realistic enterprise picture. How many agents read it? Which ones act on it? Which credentials are exposed? Which downstream systems are affected? The attack surface is not one agent. It is the graph of agents, permissions, and shared data sources. A single poisoned input can fan out across that graph faster than any human review process can catch it.

Analysis – Why This Matters Now

Regulators are converging on a shared premise: if an agent can take actions, then “governance” is not just model policy. It is identity, authorization, logging, and supervision. The regulatory message is consistent: if you deploy agents that can act, you own the consequences of those actions, including the ones you did not authorize.

Implications for Enterprises

Identity and access management

Tooling and platform architecture

Monitoring, audit, and response

Risks and Open Questions

Further Reading

Uncategorized

Texas AI Law Shifts Compliance Focus from Outcomes to Intent

Texas AI Law Shifts Compliance Focus from Outcomes to Intent

A national retailer uses the same AI system to screen job applicants in Colorado and Texas. In Colorado, auditors examine outcomes and disparate impact metrics. In Texas, they start somewhere else entirely: what was this system designed to do, and where is that documented?

The Texas Responsible Artificial Intelligence Governance Act takes effect January 1, 2026. It creates a state-level AI governance framework that distinguishes between developers and deployers, imposes specific requirements on government agencies, and centralizes enforcement under the Texas Attorney General with defined cure periods and safe harbors.

TRAIGA covers private and public entities that develop or deploy AI systems in Texas, including systems affecting Texas residents. The statute defines AI systems broadly but reserves its most prescriptive requirements for state and local government. Private sector obligations focus on prohibited uses, transparency, and documentation.

Here is the key distinction from other AI laws: TRAIGA does not use a formal high-risk classification scheme. Instead, it organizes compliance around roles, intent, and evidence of responsible design.

How the mechanism works

Role-based duties. Developers must test systems, mitigate risks, and provide documentation explaining capabilities, limitations, and appropriate uses. Deployers must analyze use cases, establish internal policies, maintain human oversight, align with data governance requirements, and obtain disclosures or consent where required in consumer-facing or government service contexts.

Purpose and prohibition controls. The law prohibits AI systems designed or used for intentional discrimination, civil rights violations, or manipulation that endangers public safety. Organizations must document legitimate business purposes and implement controls to prevent or detect prohibited use.

Enforcement and remediation. Only the Texas Attorney General can enforce the statute. The AG may request training data information, testing records, and stated system purposes. Entities generally receive notice and 60 days to cure alleged violations before penalties apply. Safe harbors exist for organizations that align with recognized frameworks like the NIST AI RMF, identify issues through internal monitoring, or participate in the state AI sandbox.

Government-specific requirements. State agencies must inventory their AI systems, follow an AI code of ethics from the Department of Information Resources, and apply heightened controls to systems influencing significant public decisions (such as benefits eligibility or public services).
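One practical way to keep purpose, testing, and controls retrievable for an AG inquiry is to store them as a structured record per system rather than as scattered documents. The sketch below is a hypothetical schema for such a record; TRAIGA does not prescribe any particular format, and the field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SystemPurposeRecord:
    """Hypothetical intent-and-controls record for one deployed AI system."""
    system_name: str
    role: str                      # "developer" or "deployer" under the statute's role split
    intended_purpose: str          # documented legitimate business purpose
    prohibited_use_controls: list[str] = field(default_factory=list)
    testing_artifacts: list[str] = field(default_factory=list)   # links to test and red-team reports
    last_reviewed: str = ""        # ISO date of the most recent governance review

record = SystemPurposeRecord(
    system_name="transaction-fraud-flagging",
    role="deployer",
    intended_purpose="Flag potentially fraudulent card transactions for analyst review.",
    prohibited_use_controls=[
        "no protected-class attributes in model inputs",
        "misuse monitoring on downstream decision overrides",
    ],
    testing_artifacts=["reports/fraud-model-eval-2025-q4.pdf"],
    last_reviewed="2026-01-15",
)
print(record.intended_purpose)
```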
Analysis: why this matters now

TRAIGA makes intent a compliance artifact. Documentation of design purpose, testing, and internal controls moves from best practice to legal requirement.

Key insight: For compliance teams, the question is no longer just “did this system cause harm” but “can we prove we tried to prevent it.”

This has direct implications for technical teams. Internal testing, red teaming, and incident tracking are now tied to enforcement outcomes. Finding and fixing problems internally becomes part of the legal defense.

For multi-state operators, the challenge is reconciliation. Evidence that supports a design-focused defense in Texas may not align with the impact-based assessments required elsewhere.

Example: Consider a financial services firm using an AI system to flag potentially fraudulent transactions. Under Colorado’s SB 205, regulators would focus on whether the system produces disparate outcomes across protected classes. Under TRAIGA, the first question is whether the firm documented the system’s intended purpose, tested for failure modes, and established controls to prevent misuse. The same system, two different compliance burdens.

Implications for enterprises

Operations. AI inventories will need to expand to cover embedded and third-party systems meeting the statute’s broad definition. Governance teams should map which business units act as developers versus deployers, with documentation and contracts to match.

Technical infrastructure. Continuous monitoring, testing logs, and incident tracking shift from optional to required. Documentation of system purpose, testing protocols, and mitigation measures should be retrievable quickly in the event of an AG inquiry.

Governance strategy. Alignment with recognized risk management frameworks now offers concrete legal value. Incident response plans should account for Texas’s 60-day cure window alongside shorter timelines in other states.

Risks & Open Questions

Implementation guidance from Texas agencies is still developing. The central uncertainty is what documentation will actually satisfy the evidentiary standard for intent and mitigation. Other open questions include how the law interacts with state requirements on biometric data and automated decisions, and whether the regulatory sandbox will have practical value for nationally deployed systems.

Further Reading

Texas Legislature HB 149 analysis
Texas Attorney General enforcement provisions
Baker Botts TRAIGA overview
Wiley Rein TRAIGA alert
Ropes and Gray AI compliance analysis
Ogletree Deakins AI governance commentary

The Governance Room

California’s 2026 AI Laws: When a Documentation Gap Becomes a Reportable Incident

California’s 2026 AI Laws: When a Documentation Gap Becomes a Reportable Incident

Key Takeaways

Effective January 1, 2026, frontier AI developers face enforceable safety, transparency, and cybersecurity obligations under California law
Cybersecurity control failures can trigger critical safety incident reporting with 15-day deadlines
Enterprises buying from frontier AI vendors should expect new due diligence, contract clauses, and attestation requirements

A foundation model is deployed with new fine-tuning. The model behaves as expected. Weeks later, an internal researcher flags that access controls around unreleased model weights are weaker than documented. Under California’s 2026 AI regime, that gap is no longer a quiet fix. If it results in unauthorized access, exfiltration, or other defined incident conditions, it becomes a critical safety incident with a 15-day reporting deadline, civil penalties, and audit trails.

Beginning January 1, 2026, California’s Transparency in Frontier Artificial Intelligence Act and companion statutes shift AI governance from voluntary principles to enforceable operational requirements. The laws apply to a narrow group: frontier AI developers whose training compute exceeds 10^26 floating point or integer operations, with additional obligations for developers that meet the statute’s “large frontier developer” criteria, including revenue thresholds.

Who This Applies To

This framework primarily affects large frontier developers and has limited immediate scope. However, it sets expectations that downstream enterprises will likely mirror in vendor governance and procurement requirements. For covered developers, internal-use testing and monitoring are no longer technical hygiene. They are regulated evidence-producing activities. Failures in cybersecurity controls and model weight security can trigger incident reporting and penalties even when no malicious intent exists.

What Developers Must Produce

The law requires documented artifacts tied to deployment and subject to enforcement.

Safety and security protocol. A public document describing how the developer identifies dangerous capabilities, assesses risk thresholds, evaluates mitigations, and secures unreleased model weights. It must include criteria for determining substantial modifications and when new assessments are triggered.

Transparency reports. Published before or at deployment. Large frontier developers must include catastrophic risk assessments, third-party evaluations, and compliance descriptions.

Frontier AI Framework. Required for large frontier developers. Documents governance structures, lifecycle risk management, and alignment with recognized standards. Updated annually or within 30 days of material changes.

What Triggers Reporting

The law defines catastrophic risk using explicit harm thresholds: large-scale loss of life or property damage exceeding one billion dollars. Critical safety incidents span several defined event categories.

Most critical safety incidents must be reported to the Attorney General within 15 days. Events posing imminent risk of death or serious injury require disclosure within 24 hours.
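A small sketch of how an internal incident register might encode those two reporting clocks, computing the notification deadline from the discovery time and an imminent-harm flag. The field names and triage logic are assumptions for illustration; they are not taken from the statute’s text.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class CriticalSafetyIncident:
    incident_id: str
    discovered_at: datetime
    summary: str
    imminent_risk_of_death_or_serious_injury: bool = False

    def reporting_deadline(self) -> datetime:
        """24 hours for imminent-risk events, otherwise 15 days (illustrative)."""
        window = (
            timedelta(hours=24)
            if self.imminent_risk_of_death_or_serious_injury
            else timedelta(days=15)
        )
        return self.discovered_at + window

incident = CriticalSafetyIncident(
    incident_id="inc-2026-007",
    discovered_at=datetime(2026, 3, 2, 9, 30, tzinfo=timezone.utc),
    summary="Weaker-than-documented access controls on unreleased model weights led to unauthorized access.",
)
print(incident.reporting_deadline())  # 2026-03-17 09:30:00+00:00
```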
Why the Coupling of Safety and Cybersecurity Matters

California’s framework treats model weight security, internal access governance, and shutdown capabilities as safety-bound controls. These are not merely infrastructure concerns. They are controls explicitly tied to statutory safety obligations, and failures carry compliance consequences. Access logging, segregation of duties, insider threat controls, and exfiltration prevention are directly linked to statutory risk definitions. A control weakness that would previously have been an IT finding can now constitute a compliance-triggering event if it leads to unauthorized access or other defined incidents.

Internal use is explicitly covered and subject to audit. Testing, monitoring, and reporting obligations apply to dangerous capabilities that arise from employee use, not just public deployment. This means internal experimentation with frontier models produces compliance artifacts, not just research notes. Developers must document procedures for incident monitoring and for promptly shutting down copies of models they own and control.

Operational Changes for Covered Developers

Documentation becomes operational. Safety protocols and frameworks must stay aligned with real system behavior. Gaps between documentation and practice can become violations.

Incident response expands. Processes must account for regulatory reporting timelines alongside technical containment.

Whistleblower infrastructure is required. Anonymous reporting systems and defined response processes create new coordination requirements across legal, security, and engineering teams.

Model lifecycle tracking gains compliance consequences. Fine-tuning, retraining, and capability expansion may constitute substantial modifications triggering new assessments. How frequently occurring changes will be interpreted remains unclear.

Starting in 2030, large frontier developers must undergo annual independent third-party audits.

Downstream Implications for Enterprise Buyers

Most enterprises will not meet the compute thresholds that trigger direct coverage. But the framework will shape how they evaluate and contract with AI vendors.

Vendor due diligence expands. Procurement and security teams will need to assess whether vendors are subject to California’s requirements and whether their published safety protocols and transparency reports are current. Gaps in vendor documentation become risk factors in sourcing decisions.

Contractual flow-down becomes standard. Enterprises will likely require vendors to represent compliance with applicable safety and transparency obligations, notify buyers of critical safety incidents, and provide audit summaries or attestations. These clauses mirror patterns established under GDPR and SOC 2 regimes.

Example language: “Vendor shall notify Buyer within 48 hours of any critical safety incident as defined under California Business and Professions Code Chapter 25.1, and shall provide Buyer with copies of all transparency reports and audit summaries upon request.”

Internal governance benchmarks shift. Even where not legally required, enterprises may adopt elements of California’s framework as internal policy: documented safety protocols for high-risk AI use cases, defined thresholds for escalation, and audit trails for model deployment decisions. The framework provides a reference architecture for AI governance that extends beyond its direct scope.

Security, legal, and procurement teams should expect vendor questionnaires, contract templates, and risk assessment frameworks to incorporate California’s definitions and reporting categories within the next 12 to 18 months.

Open Questions

Substantial modification thresholds. The protocol must define criteria, but how regulators will interpret frequent fine-tuning or capability expansions is not yet established.
Extraterritorial application. The law does not limit applicability to entities physically located in California. Global providers may need to treat California requirements as a baseline.

Enforcement priorities. The Attorney General is tasked with oversight, but application patterns across different developer profiles are not yet established.

Regime alignment. The European Union’s AI Act defines harm and risk using different metrics, creating potential duplication in compliance strategies.

Further Reading

California Business and Professions Code Chapter 25.1 (SB 53)
Governor of California AI legislation announcements
White and Case analysis of California frontier AI laws
Sheppard Mullin overview of

The Engineering Room

The Prompt Is the Bug

The Prompt Is the Bug

How MLflow 3.x brings version control to GenAI’s invisible failure points

A customer support agent powered by an LLM starts returning inconsistent recommendations. The model version has not changed. The retrieval index looks intact. The only modification was a small prompt update deployed earlier that day. Without prompt versioning and traceability, the team spends hours hunting through deployment logs, Slack threads, and git commits trying to reconstruct what changed. By the time they find the culprit, the damage is done: confused customers, escalated tickets, and a rollback that takes longer than the original deploy.

MLflow 3.x expands traditional model tracking into a GenAI-native observability and governance layer. Prompts, system messages, traces, evaluations, and human feedback are now treated as first-class, versioned artifacts tied directly to experiments and deployments. This matters because production LLM failures rarely come from the model. They come from everything around it.

Classic MLOps tools were built for a simpler world: trained models, static datasets, numerical metrics. In that world, you could trace a failure back to a model version or a data issue. LLM applications break this assumption. Behavior is shaped just as much by prompts, system instructions, retrieval logic, and tool orchestration. A two-word change to a system message can shift tone. A prompt reordering can break downstream parsing. A retrieval tweak can surface stale content that the model confidently presents as fact.

As enterprises deploy LLMs into customer support, internal copilots, and decision-support workflows, these non-model components become the primary source of production incidents. And without structured tracking, they leave no trace. MLflow 3.x extends the platform from model tracking into full GenAI application lifecycle management by making these invisible components visible.

What Could Go Wrong (and often does)

Consider two scenarios that MLflow 3.x is designed to catch.

The phantom prompt edit. A product manager tweaks the system message to make responses “friendlier.” No code review, no deployment flag. Two days later, the bot starts agreeing with customer complaints about pricing, offering unauthorized discounts in vague language. Without prompt versioning, the connection between the edit and the behavior is invisible.

The retrieval drift. A knowledge base update adds new product documentation. The retrieval index now surfaces newer content, but the prompt was tuned for the old structure. Responses become inconsistent, sometimes mixing outdated and current information in the same answer. Nothing in the model or prompt changed, but the system behaves differently.

A related failure mode: human reviewers flag bad responses, but those flags never connect back to specific prompt versions or retrieval configurations. When the team investigates weeks later, they cannot reconstruct which system state produced the flagged outputs.

Each of these failures stems from missing system-level traceability, even though they often surface later as governance or compliance issues.
How The Mechanism Works

MLflow 3.x introduces several GenAI-specific capabilities that integrate with its existing experiment and registry model.

Tracing and observability

MLflow Tracing captures inputs, outputs, and metadata for each step in a GenAI workflow, including LLM calls, tool invocations, and agent decisions. Traces are structured as sessions and spans, logged asynchronously for production use, and linked to the exact application version that produced them. Tracing is OpenTelemetry-compatible, allowing export into enterprise observability stacks.

Prompt Registry

Prompts are stored as versioned registry artifacts with content, parameters, and metadata. Each version can be searched, compared, rolled back, or evaluated. Prompts appear directly in the MLflow UI and can be filtered across experiments and traces by version or content.

System messages and feedback as trace data

Conversational elements such as user prompts, system messages, and tool calls are recorded as structured trace events. Human feedback and annotations attach directly to traces with metadata including author and timestamp, allowing quality labels to feed evaluation datasets.

LoggedModel for GenAI applications

The LoggedModel abstraction snapshots the full GenAI application configuration, including the model, prompts, retrieval logic, rerankers, and settings. All production traces, metrics, and feedback tie back to a specific LoggedModel version, enabling precise auditing and reproducibility.

Evaluation integration

MLflow GenAI Evaluation APIs allow prompts and models to be evaluated across datasets using built-in or custom judge metrics, including LLM-as-a-judge. Evaluation results, traces, and scores are logged to MLflow Experiments and associated with specific prompt and application versions.
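A minimal sketch of what prompt versioning plus tracing look like in code. It assumes the prompt registry helpers (`mlflow.register_prompt`, `mlflow.load_prompt`) and the `@mlflow.trace` decorator described in the MLflow docs; exact namespaces and signatures vary across 3.x releases, so treat this as illustrative rather than copy-paste. The LLM call is a stubbed placeholder.

```python
import mlflow

def call_llm(rendered_prompt: str) -> str:
    """Stand-in for the application's real LLM client."""
    return f"[summary for prompt of length {len(rendered_prompt)}]"

# Register a prompt as a versioned artifact; re-registering creates a new version.
mlflow.register_prompt(
    name="support-summary",
    template="Summarize the customer case below in three sentences.\n\n{{case_text}}",
    commit_message="Initial production prompt",
)

# Load a pinned version at serving time so behavior is tied to an explicit version.
prompt = mlflow.load_prompt("prompts:/support-summary/1")

@mlflow.trace  # records inputs, outputs, and timing for this call as a trace span
def summarize_case(case_text: str) -> str:
    return call_llm(prompt.format(case_text=case_text))

print(summarize_case("Customer reports duplicate billing on order 4417."))
```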
Analysis: Why This Matters Now

LLM systems fail differently than traditional software. The failure modes are subtle, the causes are distributed, and the evidence is ephemeral. A prompt tweak can change output structure. A system message edit can alter tone or safety behavior. A retrieval change can surface outdated content. None of these show up in traditional monitoring. None of them trigger alerts. The system looks healthy until a customer complains, a regulator asks questions, or an output goes viral for the wrong reasons.

Without artifact-level versioning, organizations cannot reliably answer basic operational questions: what changed, when it changed, and which deployment produced a specific response. MLflow 3.x addresses this by making prompts and traces as inspectable and reproducible as model binaries.

This also compresses incident response from hours to minutes. When a problematic output appears, teams can trace it back to the exact prompt version, configuration, and application snapshot. No more inferring behavior from logs. No more re-running tests and hoping to reproduce the issue.

Implications For Enterprises

For operations teams: Deterministic replay becomes possible. Pair a prompt version with an application version and a model version, and you can reconstruct exactly what the system would have done. Rollbacks become configuration changes rather than emergency code redeploys. Production incidents can be converted into permanent regression tests by exporting and annotating traces.

For security and governance teams: Tracing data can function as an audit log input when integrated with enterprise logging and retention controls. Prompt and application versioning supports approval workflows, human-in-the-loop reviews, and post-incident analysis. PII redaction and OpenTelemetry export enable integration with SIEM, logging, and GRC systems. When a regulator asks “what did your system say and why,” teams have structured evidence to work from rather than manual reconstruction.

For platform architects: MLflow unifies traditional ML and GenAI governance under a single platform.

Uncategorized

Why Enterprises Are Versioning Prompts Like Code

Why Enterprises Are Versioning Prompts Like Code

Managing LLM systems when the model isn’t the problem

A prompt tweak that seemed harmless in testing starts generating hallucinated policy numbers in production. A retrieval index update quietly surfaces outdated documents. The model itself never changed. These are the failures enterprises now face as they move large language models into production, and traditional MLOps has no playbook for them.

Operational control has shifted away from model training and toward prompt orchestration, retrieval pipelines, evaluation logic, and cost governance. GenAIOps practices now treat these elements as first-class, versioned artifacts that move through deployment, monitoring, and rollback just like models.

Traditional MLOps was designed for predictive systems with static datasets, deterministic outputs, and well-defined metrics such as accuracy or F1 score. Most enterprise LLM deployments do not retrain foundation models. Instead, teams compose prompts, retrieval-augmented generation pipelines, tool calls, and policy layers on top of third-party models.

This shift breaks several assumptions of classic MLOps. There is often no single ground truth for evaluation. Small prompt or retrieval changes can significantly alter outputs. Costs scale with tokens and execution paths rather than fixed infrastructure. Organizations have responded by extending MLOps into GenAIOps, with new tooling and workflows focused on orchestration, observability, and governance.

What Can Go Wrong: A Scenario

Consider an internal HR assistant built on a third-party LLM. The model is stable. The application code has not changed. But over two weeks, employee complaints about incorrect benefits information increase by 40%.

Investigation reveals three simultaneous issues. First, a prompt update intended to make responses more concise inadvertently removed instructions to cite source documents. Second, a retrieval index rebuild pulled in an outdated benefits PDF that should have been excluded. Third, the evaluation pipeline was still running against a test dataset that did not include benefits-related queries.

None of these failures would surface in traditional MLOps monitoring. The model responded quickly, token costs were normal, and no errors were logged. Without versioned prompts, retrieval configs, and production-trace evaluation, the team had no way to pinpoint when or why accuracy degraded. This pattern reflects issues described in recent enterprise GenAIOps guidance. It illustrates why the discipline has emerged.

How The Mechanism Works

Modern GenAIOps stacks define and manage operational artifacts beyond the model itself. Each component carries its own failure modes, and each requires independent versioning and observability.

Prompt and instruction registries. Platforms such as MLflow 3.0 introduce dedicated prompt registries with immutable version histories, visual diffs, and aliasing for active deployments. Prompts and system messages can be promoted, canaried, or rolled back without redeploying application code. When output quality degrades, teams can trace the issue to a specific prompt version and revert within minutes.
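The aliasing idea is simple enough to sketch generically: deployments resolve an alias such as “production” to a specific prompt version, so a rollback is just re-pointing the alias rather than redeploying code. The registry structure and names below are illustrative and not tied to any particular platform.

```python
# Illustrative in-memory registry; real systems persist this with audit history.
PROMPT_VERSIONS = {
    ("hr-benefits-answer", 3): "Answer using only the cited benefits documents...",
    ("hr-benefits-answer", 4): "Answer concisely...",  # the "more concise" edit that dropped citations
}
ALIASES = {("hr-benefits-answer", "production"): 4}

def resolve(name: str, alias: str) -> str:
    """Resolve an alias to the prompt text it currently points at."""
    version = ALIASES[(name, alias)]
    return PROMPT_VERSIONS[(name, version)]

def rollback(name: str, alias: str, to_version: int) -> None:
    """Rolling back is a registry update, not a code deployment."""
    ALIASES[(name, alias)] = to_version

print(resolve("hr-benefits-answer", "production")[:20])   # serving version 4
rollback("hr-benefits-answer", "production", to_version=3)
print(resolve("hr-benefits-answer", "production")[:20])   # now serving version 3 again
```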
Retrieval and RAG configuration. Retrieval logic, indexes, chunking strategies, and ranking parameters are treated as deployable workload components. Retrieval changes flow through the same validation and monitoring loops as model changes, since retrieval quality directly affects output quality. A misconfigured chunking strategy or stale index can introduce irrelevant or contradictory context that the model will dutifully incorporate.

Evaluation objects. Evaluation datasets, scoring rubrics, and LLM-as-judge templates are versioned artifacts. Tools like LangSmith, Langfuse, Maxim, and Galileo integrate these evaluators into CI pipelines and production replay testing using logged traces. This allows teams to catch regressions that only appear under real-world query distributions.

Tracing and observability. GenAI observability platforms capture nested traces for prompts, retrieval calls, tool invocations, and model generations. Metrics include latency, error rates, token usage, and cost attribution per span, prompt version, or route. When something breaks, teams can reconstruct the full execution path that produced a problematic output.
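As a concrete illustration of span-level attribution, here is a minimal sketch using the OpenTelemetry Python SDK (it assumes the opentelemetry-sdk package is installed and exports spans to the console). The span names and attributes such as prompt.version and llm.cost.usd are illustrative choices for this example, not official semantic conventions or any vendor’s schema.

```python
# Minimal GenAI-style tracing sketch: nested spans for retrieval and generation,
# with token, cost, and prompt-version attributes attached for later attribution.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())  # swap for an OTLP exporter in production
)
tracer = trace.get_tracer("hr-assistant")       # hypothetical service name

with tracer.start_as_current_span("answer_benefits_question") as root:
    root.set_attribute("prompt.version", "hr-assistant:v12")   # which prompt produced this output
    with tracer.start_as_current_span("retrieval") as span:
        span.set_attribute("retrieval.index_version", "benefits-2025-10")
        span.set_attribute("retrieval.chunks_returned", 6)
    with tracer.start_as_current_span("generation") as span:
        span.set_attribute("llm.tokens.input", 512)
        span.set_attribute("llm.tokens.output", 180)
        span.set_attribute("llm.cost.usd", 0.0031)
```

When a regression shows up, traces tagged this way let a team slice failures by prompt version or index version instead of guessing which change landed when.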
Safety and policy layers. Content filters, abuse monitoring, and policy checks are configured objects in the deployment workflow. These layers annotate severity, log flagged content, and feed review and governance processes.

Analysis

Operational risk in LLM systems concentrates outside the model. Enterprises are encountering failures that look less like crashes and more like silent regressions, hallucinations, or cost spikes. A model can be healthy while a prompt change degrades factual accuracy, or a retrieval update introduces irrelevant context.

The challenge is attribution. In a traditional software bug, a stack trace points to a line of code. In a GenAI failure, the output is a probabilistic function of the prompt, the retrieved context, the model, and the policy layers. Without versioning and tracing across all these components, debugging becomes guesswork. By elevating prompts, retrieval logic, and evaluators to managed artifacts, teams gain the ability to detect, attribute, and reverse these failures. The same observability data used for debugging also becomes input for governance, audit, and continuous improvement.

Implications For Enterprises

Operational control. Prompt updates and retrieval changes can move through controlled release paths with audit trails and instant rollback. Incident response expands to include hallucination regressions and policy violations, not just availability issues.

Cost management. Token usage and latency are observable at the prompt and workflow level, enabling budgets, quotas, and routing decisions based on real usage rather than estimates. Teams can identify which prompts or workflows consume disproportionate resources and optimize accordingly.

Quality assurance. Continuous evaluation on production traces allows teams to detect drift and regressions that would not surface in offline testing alone. This closes the gap between “works in staging” and “works in production.”

Organizational alignment. New roles such as AI engineers sit between software and data teams, owning orchestration, routing, and guardrails rather than model training. This reflects where operational complexity actually lives.

Risks & Open Questions

Standardization remains limited. There is no dominant control plane equivalent to Kubernetes for LLM workloads, and frameworks evolve rapidly. Evaluation techniques such as LLM-as-judge introduce their own subjectivity and must be governed carefully. Tradeoffs between latency, cost, and output quality remain unresolved and are often use-case specific. Enterprises must also ensure that observability and logging do not themselves introduce privacy or compliance risks. The tooling landscape is fragmented, and no clear winner has emerged. Organizations adopting GenAIOps today should factor platform lock-in risk into procurement decisions and expect to revisit their choices as the space matures.

The Threat Room

The Context Layer Problem

An Attack With No Exploit

The following scenario is a composite based on multiple documented incidents reported since 2024. A company’s AI assistant sent a confidential pricing spreadsheet to an external email address. The security team found no malware, no compromised credentials, no insider threat. The model itself worked exactly as designed.

What happened? An employee asked the assistant to summarize a vendor proposal. Buried deep in the PDF was a short instruction telling the assistant to forward internal financial data to an external address. The assistant followed the instruction. It had the permissions. It did what it was told. Variations of this attack have been documented across enterprise deployments since 2024. The base model was never the vulnerability. The context layer was.

Why This Matters Now

Between 2024 and early 2026, a pattern emerged across enterprise AI incidents. Prompt injection, RAG data leakage, automated jailbreaks, and Shadow AI stopped being theoretical concerns. They showed up in production copilots, IDE agents, CRMs, office suites, and internal chatbots. The common thread: none of these failures required breaking the model. They exploited how enterprises connected models to data and tools.

The Trust Problem No One Designed For

Traditional software has clear boundaries. Input validation. Access controls. Execution sandboxes. Code is code. Data is data. Large language models collapse this distinction. Everything entering the context window is processed as natural language. The model cannot reliably tell the difference between “summarize this document” from a user and “ignore previous instructions” embedded in that document. This creates a fundamental architectural tension. The more useful an AI system becomes (connecting it to email, documents, APIs, and tools), the larger the attack surface becomes.

Five Failure Modes In The Wild

Direct prompt injection is the simplest form. Attacker-controlled text tells the model to ignore prior instructions or perform unauthorized actions. In enterprise systems, this happens when untrusted content like emails, tickets, or CRM notes gets concatenated directly into prompts. One documented case involved a support ticket containing hidden instructions that caused an AI agent to export customer records.

Indirect prompt injection is subtler and harder to defend against. Malicious instructions hide in documents the system retrieves during normal operation: PDFs, web pages, wiki entries, email attachments. The orchestration layer treats retrieved content as trusted, so these injected instructions can override system prompts. Researchers demonstrated this by planting instructions in public web pages that corporate AI assistants later retrieved and followed.

RAG data leakage often happens without any jailbreak at all. The problem is upstream: overly broad document embedding, weak vector store access controls, and retrieval logic that ignores user permissions. In several documented cases, users retrieved and summarized internal emails, HR records, strategy documents, and API keys simply by crafting semantic queries. The model did exactly what it was supposed to do. The retrieval pipeline was the gap.
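One common mitigation for this class of leakage is to enforce document-level permissions at retrieval time rather than trusting the model or the prompt. The sketch below is illustrative only; the names (Chunk, permission_aware_retrieve, search_fn) are hypothetical and the ACL model is deliberately simplified.

```python
# Illustrative permission-aware retrieval filter. The key idea: intersect each
# candidate chunk's ACL with the requesting user's groups *before* anything
# reaches the model's context window.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str
    allowed_groups: frozenset   # ACL captured when the document was ingested

def permission_aware_retrieve(query, user_groups, search_fn, top_k=5):
    """search_fn is whatever vector search the stack already provides; we
    over-fetch, then drop anything the requesting user cannot read."""
    candidates = search_fn(query, k=top_k * 4)            # over-fetch to survive filtering
    authorized = [c for c in candidates
                  if c.allowed_groups & user_groups]      # keep only ACL-permitted chunks
    return authorized[:top_k]

# Toy search function standing in for a real vector store query.
def fake_search(query, k):
    return [
        Chunk("2025 benefits overview ...", "benefits.pdf", frozenset({"all-staff"})),
        Chunk("Executive compensation bands ...", "comp.xlsx", frozenset({"hr-leadership"})),
    ]

# An engineering user asking about compensation never sees the HR-only document.
print(permission_aware_retrieve("compensation bands", {"engineering", "all-staff"}, fake_search))
```

The important design choice is that the authorization decision uses the end user’s identity rather than the assistant’s service account, which is precisely the distinction the leaked-retrieval incidents above failed to make.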
Agentic tool abuse raises the stakes. When models can call APIs, modify workflows, or interact with cloud services, injected instructions translate into real actions. Security researchers demonstrated attacks where a planted instruction in a GitHub issue caused an AI coding agent to exfiltrate repository secrets. The agent had the permissions. It followed plausible-looking instructions. No human approved the action.

Shadow AI sidesteps enterprise controls entirely. Employees frustrated by slow IT approvals or restrictive policies copy sensitive data into personal ChatGPT accounts, unmanaged tools, or browser extensions. Reports from 2024 and 2025 link Shadow AI to a significant portion of data breaches, higher remediation costs, and repeated exposure of customer PII. The data leaves the building through the front door.

Threat Scenario

Consider a company that deploys an AI assistant with access to Confluence, Jira, and Slack, plus the ability to create calendar events and send emails on behalf of users. An attacker spots a job posting shared in a public Slack channel. They apply, and their resume (a PDF) contains invisible text: instructions telling the AI to forward any messages containing “offer letter” or “compensation” to an external address, then delete the forwarding rule from the user’s settings. A recruiter asks the AI to summarize the candidate’s resume. The AI ingests the hidden instructions. Weeks later, offer letters start leaking. The forwarding rule is gone. Logs show the AI took the actions, but the AI has no memory of why.

The individual behaviors described here have already been observed in production systems. What remains unresolved is how often they intersect inside a single workflow. These are not edge cases. They are ordinary features interacting in ways most enterprises have not threat-modeled.

What The Incidents Reveal

Across documented failures, the base model is rarely the point of failure. Defenses break at three layers:

Context assembly. Systems concatenate untrusted content without sanitization, origin tagging, or priority controls. The model cannot distinguish between instructions from the system prompt and instructions from a retrieved email.

Trust assumptions. Orchestration layers assume that retrieved content is safe, that model intent aligns with user authorization, and that probabilistic guardrails will catch adversarial inputs. As context windows grow and agents gain autonomy, these assumptions fail.

Tool invocation. Agentic systems map model output directly to API calls without validating that the action matches user intent, checking privilege boundaries, or requiring human approval for sensitive operations.

This is why prompt injection now holds the top position in the OWASP GenAI Top 10. Security researchers increasingly frame AI systems not as enhanced interfaces but as new remote code execution surfaces.

What This Means For Enterprise Teams

Security teams now face AI risk that spans application security, identity management, and data governance simultaneously. Controls must track where instructions originate, how context gets assembled, and when tools are invoked. Traditional perimeter defenses do not cover these attack vectors.

Platform and engineering teams need to revisit RAG and agent architectures. Permission-aware retrieval, origin tagging, instruction prioritization, and policy enforcement at the orchestration layer are becoming baseline requirements. Tool calls based solely on model output represent a high-blast-radius design choice that warrants scrutiny.

Governance and compliance teams must address Shadow AI as a structural problem, not a policy problem. Employees route around controls