G360 Technologies

Author name: Josh

Newsletter

AI Curiosity Club | Issue 3

Inside This Issue

The Threat Room

When AI Agents Act, Identity Becomes the Control Plane
A single poisoned document. An agent following instructions it should have ignored. An audit log that points to the wrong person. AI agents are no longer just automation: they’re privileged identities that can be manipulated through their inputs. Regulators are catching up. NIST is collecting security input, FINRA is flagging autonomy and auditability as governance gaps, and Gartner predicts 25% of enterprise breaches will trace to agent abuse by 2028. The question isn’t whether agents create risk. It’s whether your controls were built for actors that can be turned by a document.
→ Read the full article

The Operations Room

Agentic AI in Production: The System Worked. The Outcome Was Wrong.
The system worked. The outcome was wrong. Most enterprises are running agentic pilots, but few have crossed into safe production. This piece explains what’s blocking the path.
→ Read the full article

Enterprise GenAI Pilot Purgatory: Why the Demo Works and the Rollout Doesn’t
Why do so many GenAI pilots impress in the demo, then quietly die before production? Research from 2025 and early 2026 reveals the same five breakdowns, again and again. This piece maps the failure mechanisms, and what the rare exceptions do differently.
→ Read the full article

The Engineering Room

AI Agents Broke the Old Security Model. AI-SPM Is the First Attempt at Catching Up.
Traditional model security asks: what might the AI say? Agent security asks: what might the system do? Microsoft and AWS are shipping AI-SPM capabilities that track tools, identities, and data paths across agent architectures, because when agents fail, the breach is usually a tool call, not a hallucination.
→ Read the full article

The Governance Room

From Disclosure to Infrastructure: How Global AI Regulation Is Turning Compliance Into System Design
A retailer’s AI system flags fraudulent returns. The documentation is flawless. Then auditors ask for logs, override records, and proof that human review actually happened. The system passes policy review. It fails infrastructure review. This is the new compliance reality. Across the EU, US, and Asia-Pacific, enforcement is shifting from what policies say to what systems actually do. This piece explains why AI governance is becoming an infrastructure problem, what auditors are starting to look for, and what happens when documentation and architecture tell different stories.
→ Read the full article

Uncategorized

Shadow AI Metrics Expose a Governance Gap in Enterprise AI Programs

A developer hits a wall debugging a production issue at 11 PM. She pastes 200 lines of proprietary code into ChatGPT using her personal account. The AI helps her fix the bug in minutes. The code, which contains API keys and references to internal systems, now exists outside the company’s control. No log was created. No policy was enforced. No one knows it happened.

This is shadow AI, and it is occurring thousands of times per month across most enterprises. Organizations can now measure how often employees use AI tools, how much data is shared, and how frequently policies are violated. What they cannot do is enforce consistent governance when AI is used through personal accounts, unmanaged browsers, and copy-paste workflows. Shadow AI has turned AI governance into an enforcement problem, not a visibility problem.

What the Metrics Actually Show

Recent enterprise telemetry paints a consistent picture across industries and regions. According to data reported by Netskope, 94 percent of organizations now use generative AI applications. Nearly half of GenAI users access those tools through personal or unmanaged accounts, placing their activity outside enterprise identity, logging, and policy enforcement. On average, organizations record more than 200 GenAI-related data policy violations per month, with the highest-usage environments seeing over 2,000 violations monthly.

Independent studies of shadow AI usage reinforce this pattern. Research analyzing browser-level and endpoint telemetry shows that the dominant data transfer method is not file upload but copy-paste. A large majority of employees paste confidential information directly into AI prompts, and most of those actions occur outside managed enterprise accounts.

These metrics matter because they demonstrate scale. Shadow AI is not an edge case or a compliance outlier. It is routine behavior.

What Data Is Leaving Enterprise Boundaries

Across reports, the same categories of data appear repeatedly in AI-related policy violations. In most cases, this data is shared without malicious intent, as employees use AI tools to solve routine work problems faster. What makes these disclosures difficult to govern is not their sensitivity but their format. Prompts are unstructured, conversational, and ephemeral. They rarely resemble the files and records that traditional data governance programs are designed to protect.

Where Governance Breaks Down

Most enterprise AI governance frameworks assume three conditions: managed identity, known systems, and auditable records. Shadow AI violates all three.

Identity fragmentation. When employees use personal AI accounts, organizations lose the ability to associate data use with enterprise roles, approvals, or accountability structures.

System ambiguity. The same AI service may be accessed through sanctioned and unsanctioned paths that are indistinguishable at the network layer.

Record absence. Prompt-based interactions often leave no durable artifact that can be reviewed, retained, or audited after the fact.

As a result, organizations can detect that violations occur but cannot reliably answer who is responsible, what data was exposed, or whether policy intent was upheld.

Why Existing Controls Do Not Close the Gap

Enterprises have attempted to adapt existing controls to generative AI usage, with limited success.
CASB and network-based controls can identify traffic to AI services but struggle to distinguish personal from corporate usage on the same domains. Traditional DLP systems are optimized for files and structured data flows, not conversational text entered into web forms. Browser-level controls provide more granular inspection but only within managed environments, leaving personal devices and alternative browsers outside scope.

These controls improve visibility but do not establish enforceable governance. They observe behavior without consistently preventing or constraining it. More granular controls exist, but they tend to be limited to managed environments and do not generalize across personal accounts, devices, or workflows.

What’s At Stake

The consequences of ungoverned AI use extend beyond policy violations.

Regulatory exposure. Data protection laws including GDPR, CCPA, and industry-specific regulations require organizations to know where personal data goes and to demonstrate control over its use. Shadow AI makes both difficult to prove.

Intellectual property loss. Code, product plans, and strategic documents shared with AI tools may be used in model training or exposed through data breaches at the provider. Once shared, the data cannot be recalled.

Client and partner trust. Contracts often include confidentiality provisions and data handling requirements. Uncontrolled AI use can put organizations in breach without their knowledge.

Audit failure. When regulators or auditors ask how sensitive data is protected, “we have a policy but cannot enforce it” is not an adequate answer.

These are not theoretical risks. They are the logical outcomes of the gap between policy and enforcement that current metrics reveal.

Implications For AI Governance Programs

Shadow AI forces a reassessment of how AI governance is defined and measured.

First, policy coverage does not equal policy enforcement. Having acceptable use policies for AI does not ensure those policies can be applied at the point of use.

Second, governance ownership is often unclear. Shadow AI risk sits between security, data governance, legal, and business teams, creating gaps in accountability.

Third, audit readiness is weakened. When data use occurs outside managed identity and logging, organizations cannot reliably demonstrate compliance with internal policies or external expectations.

Frameworks such as the AI Risk Management Framework published by NIST emphasize transparency, risk documentation, and control effectiveness. Shadow AI challenges all three by moving data use into channels that governance programs were not designed to regulate.

Open Governance Questions

Several unresolved issues remain for enterprises attempting to govern generative AI at scale.

Uncategorized

Registry-Aware Guardrails: Moving AI Safety and Policy Into External Control Planes

Enterprise AI teams are shifting safety and policy logic out of models and into external registries and control planes. Instead of hardcoding guardrails that require retraining to update, these systems consult versioned policies, taxonomies, and trust records at runtime. The result: organizations can adapt to new risks, regulations, and business rules without redeploying models or waiting for fine-tuning cycles.

Early enterprise AI deployments relied on static guardrails: keyword filters, prompt templates, or fine-tuned safety models embedded directly into applications. These worked when AI systems were simple. They break down when retrieval-augmented generation, multi-agent workflows, and tool-calling pipelines enter the picture.

Two failure modes illustrate the problem. First, keyword and pattern filters miss semantic variations. A filter blocking “bomb” does not catch “explosive device” or context-dependent threats phrased indirectly. Second, inference-based leaks bypass content filters entirely. A model might not output sensitive data directly but can confirm, correlate, or infer protected information across multiple queries, exposing data that no single response would reveal.

Recent research and platform disclosures describe a different approach: treating guardrails as first-class operational artifacts that live outside the model. Policies, safety categories, credentials, and constraints are queried at runtime, much like identity or authorization systems in traditional software. The model generates; the control plane governs.

How The Mechanism Works

Registry-aware guardrails introduce an intermediate control layer between the user request and the model or agent execution path. At runtime, the AI pipeline consults one or more external registries holding authoritative definitions. These registries can include safety taxonomies, policy rules, access-control contracts, trust credentials, or compliance constraints. The guardrail logic evaluates the request, retrieved context, or generated output against the current registry state.

This pattern operates in two valid modes. In the first, guardrails evaluate policy entirely outside the model, intercepting inputs and outputs against registry-defined rules. In the second, registry definitions are passed into the model at runtime, conditioning its behavior through instruction-tuning or policy-referenced prompts. Both approaches avoid frequent retraining and represent the same architectural pattern: externalizing policy from model weights.

Consider a scenario: A financial services firm deploys a customer-facing chatbot. Rather than embedding compliance rules in the model, the system queries a registry before each response. The registry defines which topics require disclaimers, which customer segments have different disclosure requirements, and which queries must be escalated to human review. When regulations change, the compliance team updates the registry. The chatbot’s behavior changes within minutes, with no model retraining, no code deployment, and a full audit trail of what rules applied to each interaction.

Several technical patterns recur across implementations. In practice, this pattern appears in platform guardrails for LLM APIs, policy-governed retrieval pipelines, trust registries for agent and content verification, and control-plane safety loops operating on signed telemetry.
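To make the first mode concrete, here is a minimal, illustrative sketch of external policy evaluation against a versioned registry. The names (PolicyRegistry, guardrail, the rule values) are hypothetical and not tied to any vendor API; they stand in for whatever registry service and policy schema an organization actually runs.

```python
# Minimal sketch of mode one: policy evaluated outside the model.
# All names here are illustrative placeholders, not a specific platform's API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    allowed: bool
    action: str          # "allow", "add_disclaimer", or "escalate_to_human"
    policy_version: str  # recorded for the audit trail

class PolicyRegistry:
    """Stands in for an external, versioned policy store queried at runtime."""
    def __init__(self, rules: dict, version: str):
        self.rules = rules
        self.version = version

    def rule_for(self, topic: str) -> str:
        return self.rules.get(topic, "allow")

def guardrail(registry: PolicyRegistry, topic: str, audit_log: list) -> Decision:
    # Consult current registry state instead of logic baked into the model.
    action = registry.rule_for(topic)
    # Record which policy version applied to this interaction.
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "action": action,
        "policy_version": registry.version,
    })
    return Decision(allowed=(action != "escalate_to_human"),
                    action=action,
                    policy_version=registry.version)

# Updating the registry changes behavior with no retraining or redeployment.
audit: list = []
registry = PolicyRegistry({"investment_advice": "add_disclaimer"}, version="2026-01-10")
print(guardrail(registry, "investment_advice", audit).action)  # add_disclaimer

registry = PolicyRegistry({"investment_advice": "escalate_to_human"}, version="2026-02-01")
print(guardrail(registry, "investment_advice", audit).action)  # escalate_to_human
```

The point of the sketch is the property described above: swapping in a new registry version changes enforcement and the audit trail immediately, without touching model weights or application code.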
The Architectural Shift

This is not just a technical refinement. It represents a fundamental change in where safety logic lives and when governance decisions are made.

In traditional deployments, safety is a model property enforced ex-post: teams fine-tune for alignment, add a content filter, and remediate when failures occur. Governance is reactive, applied after problems surface.

In registry-aware architectures, safety becomes an infrastructure property enforced ex-ante: policies are defined, versioned, and applied before the model generates or actions execute. Governance is proactive, with constraints evaluated at runtime against current policy state.

This mirrors how enterprises already handle identity, authorization, and compliance in other systems. No one embeds access control logic directly into every application. Instead, applications query centralized policy engines. Registry-aware guardrails apply the same principle to AI.

Some implementations extend trust registries into trust graphs, modeling relationships and delegations between agents, credentials, and policy authorities. These remain emerging extensions rather than replacements for simpler registry architectures.

Why This Matters Now

Static guardrails struggle in dynamic AI systems. Research and incident analyses show that fixed filters are bypassed by evolving prompt injection techniques, indirect attacks through retrieved content, and multi-agent interactions. The threat surface changes faster than models can be retrained.

Registry-aware guardrails address a structural limitation rather than a single attack class. By decoupling safety logic from models and applications, organizations can update constraints as threats, regulations, or business rules change.

The timing also reflects operational reality. Enterprises are deploying AI across heterogeneous stacks: proprietary models, third-party APIs, retrieval systems, internal tools. A registry-driven control plane provides a common enforcement point independent of any single model architecture or vendor, reducing policy drift across teams and use cases.

Implications For Enterprises

For security, platform, and governance teams, registry-aware guardrails introduce several concrete implications. At the same time, this pattern increases the importance of registry reliability and access control. The registry becomes part of the AI system’s security boundary. A compromised registry compromises every system that trusts it.

Risks and Open Questions

Research and early implementations highlight unresolved challenges.

What To Watch

Several areas remain under active development or unresolved.

Further Reading

Uncategorized

Agentic AI Gets Metered: Vertex AI Agent Engine Billing Goes Live

On January 28, 2026, Google Cloud will begin billing for three core components of Vertex AI Agent Engine: Sessions, Memory Bank, and Code Execution. This change makes agent state, persistence, and sandboxed execution first-class, metered resources rather than implicitly bundled conveniences.

Vertex AI Agent Engine, formerly known as the Reasoning Engine, has been generally available since 2025, with runtime compute billed based on vCPU and memory usage. But key elements of agent behavior, including session history, long-term memory, and sandboxed code execution, operated without explicit pricing during preview and early GA phases. In December 2025, Google updated both the Vertex AI pricing page and the Agent Builder release notes to confirm that these components would become billable starting January 28, 2026. With SKUs and pricing units now published, the platform moves from a partially bundled cost model to one where agent state and behavior are directly metered.

How The Mechanism Works

Billing for Vertex AI Agent Engine splits across compute execution and agent state persistence.

Runtime compute is billed using standard Google Cloud units. Agent Engine runtime consumes vCPU hours and GiB-hours of RAM, metered per second with idle time excluded. Each project receives a monthly free tier of 50 vCPU hours and 100 GiB-hours of RAM, after which usage is charged at published rates.

Sessions are billed based on stored session events that contain content. Sessions are not billed by duration but by the number of content-bearing events retained. Billable events include user messages, model responses, function calls, and function responses. System control events, such as checkpoints, are explicitly excluded. Pricing is expressed as a per-event storage model, illustrated using per-1,000-event examples, rather than compute time.

Memory Bank is billed based on the number of memories stored and returned. Unlike session events, which capture raw conversational turns, Memory Bank persists distilled, long-term information extracted from sessions. Configuration options determine what content is considered meaningful enough to store. Each stored or retrieved memory contributes to billable usage.

Code Execution allows agents to run code in an isolated sandbox. This sandbox is metered similarly to runtime compute, using per-second vCPU and RAM consumption, with no charges for idle time. Code Execution launched in preview in 2025 and begins billing alongside Sessions and Memory Bank in January 2026.

What This Looks Like In Practice

Consider a customer service agent handling 10,000 conversations per month. Each conversation averages 12 events: a greeting, three customer messages, three agent responses, two function calls to check order status, two function responses, and a closing message. That is 120,000 billable session events per month, before accounting for Memory Bank extractions or any code execution. If the agent also stores a memory for each returning customer and retrieves it on subsequent visits, memory operations add another layer of metered usage.

Now scale that to five agents across three departments, each with different verbosity levels and tool dependencies. The billing surface area expands across sessions, memory operations, and compute usage, and without instrumentation, teams may not see the accumulation until the invoice arrives.
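A rough back-of-the-envelope sketch of that scenario follows. The event and memory counts mirror the example above; the per-unit rates are placeholders inserted for illustration only, not Google’s published prices, and the actual billing units for Memory Bank may differ, so treat the current Vertex AI pricing page as the authoritative source.

```python
# Back-of-the-envelope estimator for the customer service agent scenario.
# Rates below are PLACEHOLDERS, not published prices -- substitute the
# current values from the Vertex AI pricing page before relying on this.

CONVERSATIONS_PER_MONTH = 10_000
EVENTS_PER_CONVERSATION = 12  # greeting, messages, responses, function calls/responses, closing

PRICE_PER_1000_SESSION_EVENTS = 0.25  # assumed placeholder rate
PRICE_PER_1000_MEMORY_OPS = 0.50      # assumed placeholder rate

billable_events = CONVERSATIONS_PER_MONTH * EVENTS_PER_CONVERSATION
session_cost = billable_events / 1_000 * PRICE_PER_1000_SESSION_EVENTS

# Suppose each conversation also stores one memory and retrieves one.
memory_ops = CONVERSATIONS_PER_MONTH * 2
memory_cost = memory_ops / 1_000 * PRICE_PER_1000_MEMORY_OPS

print(f"Billable session events per month: {billable_events:,}")   # 120,000
print(f"Estimated session storage cost:    ${session_cost:,.2f}")
print(f"Estimated memory operation cost:   ${memory_cost:,.2f}")
```

Even with placeholder rates, the structure shows why verbosity and memory policy become cost levers: every additional event or stored memory scales the billable counts linearly.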
Analysis

This change matters because it alters the economic model of agent design. During preview, teams could retain long session histories, extract extensive long-term memories, and rely heavily on sandboxed code execution without seeing distinct cost signals for those choices. By introducing explicit billing for sessions and memories, Google is making agent state visible as a cost driver. The platform now treats conversational history, long-term context, and tool execution as resources that must be managed, not just features that come for free with inference.

Implications For Enterprises

For platform and engineering teams, cost management becomes a design concern rather than a post-deployment exercise. Session length, verbosity, and event volume directly affect spend. Memory policies such as summarization, deduplication, and selective persistence now have financial as well as architectural consequences.

From an operational perspective, autoscaling settings, concurrency limits, and sandbox usage patterns influence both performance and cost. Long-running agents, multi-agent orchestration, and tool-heavy workflows can multiply runtime hours, stored events, and memory usage.

For governance and FinOps teams, agent state becomes something that must be monitored, budgeted, and potentially charged back internally. Deleting unused sessions and memories is not just a data hygiene task but the primary way to stop ongoing costs.

The Bigger Picture

Google is not alone in moving toward granular agent billing. As agentic architectures become production workloads, every major cloud provider faces the same question: how do you price something that thinks, remembers, and acts?

Token-based billing made sense when AI was stateless. But agents accumulate context over time, persist memories across sessions, and invoke tools that consume compute independently of inference. Metering these components separately reflects a broader industry shift: agents are not just models. They are systems, and systems have operational costs. Similar pricing structures are increasingly plausible across AWS, Azure, and independent agent platforms as agentic workloads mature. The teams that build cost awareness into their agent architectures now will have an advantage when granular agent billing becomes standard.

Risks and Open Questions

Several uncertainties remain. Google documentation does not yet clearly define default retention periods for sessions or memories, nor how quickly deletions translate into reduced billing. This creates risk for teams that assume short-lived state by default.

Forecasting costs may also be challenging. Session and memory usage scales with user behavior, response verbosity, and tool invocation patterns, making spend less predictable than token-based inference alone.

Finally, as agent systems grow more complex, attributing costs to individual agents or workflows becomes harder, especially in multi-agent or agent-to-agent designs. This complicates optimization, internal chargeback, and accountability.

Further Reading

Google Cloud Vertex AI Pricing
Vertex AI Agent Builder Release Notes
Vertex AI Agent Engine Memory Bank Documentation
AI CERTs analysis on Vertex AI Agent Engine GA
Google Cloud blog on enhanced tool governance in Agent Builder

Uncategorized

Model Confusion Turns AI Model Loading Into a Supply-Chain Attack Surface

Model Confusion is a naming-based AI supply-chain attack that allows code intended to load a local model to silently fetch a malicious model from a public registry instead. Disclosed by Checkmarx, the issue exploits ambiguous model-loading behavior in widely used frameworks and can result in remote code execution or silent model compromise. The risk exists in application code paths that many enterprises already run in notebooks, pipelines, and internal tooling.

Scenario: A data scientist copies a notebook from an internal repository to fine-tune a sentiment analysis model. The notebook references a local path: checkpoints/sentiment-v2. On her machine, the directory exists and the code runs as expected. A colleague clones the same notebook but skips downloading the model artifacts. When he runs the code, the framework finds no local directory, interprets the path as a Hugging Face Hub identifier, and downloads a model from a public repository with a matching name. The model loads without error. If trust_remote_code=True is set, attacker-controlled code executes silently. If not, the application now runs on a model no one intended to use. Neither developer receives a warning.

Modern machine learning frameworks reduce friction by allowing a single identifier to reference either a local directory or a model hosted in a public registry. In the Hugging Face ecosystem, APIs such as from_pretrained() support loading models from a local path or from the Hub using the same string. This flexibility becomes dangerous when code assumes a local model exists but does not enforce local-only resolution. If the expected directory is missing or misnamed, the library can fall back to fetching a model from a public registry with a matching <user>/<model> pattern. This mirrors dependency confusion in traditional software supply chains, but at the model layer rather than the package layer.

How the Attack Works

Model Confusion relies on a predictable sequence of behaviors:

1. A developer writes code that loads a model using a relative path such as checkpoints/some-model.
2. The local directory does not exist, is misnamed, or is absent in the execution environment.
3. The string matches a valid public registry namespace and model name.
4. The framework resolves the identifier remotely and downloads the model from the public registry.
5. If trust_remote_code=True is set, attacker-controlled Python code in the model repository executes during loading. If trust_remote_code=False, no arbitrary code executes, but the application silently loads a compromised or backdoored model instead of the intended local one.

No exploit chain is required. There is no social engineering, no privilege escalation, and no network intrusion. The attack succeeds through name resolution alone.

Analysis

Model Confusion exposes a structural weakness in how AI tooling balances convenience and safety. Model-loading APIs were designed to fail open to simplify experimentation and reuse. In enterprise environments, that design choice becomes a liability.

The attack does not exploit a flaw in the underlying framework. It exploits a mismatch between developer intent and actual resolution behavior. A missing directory, an incomplete repository clone, or a copied notebook that omits local artifacts is enough to redirect execution to the public internet.

This matters now because enterprises are operationalizing AI faster than they are standardizing ML development practices.
Fine-tuned models are commonly stored in generic directories, reused across teams, and loaded through shared examples without consistent validation. In these conditions, Model Confusion turns routine development patterns into a supply-chain exposure that traditional security controls are not designed to detect.

Unlike earlier concerns about malicious models, this attack does not require users to knowingly download or trust a suspicious artifact. The model is fetched automatically as part of normal execution. When combined with trust_remote_code=True, the boundary between configuration and executable code disappears.

Implications for Enterprises

Model Confusion requires a reassessment of how models are treated in enterprise systems.

Model loading becomes a security boundary. Model identifiers, paths, and resolution rules must be treated as trust decisions, not convenience features. Some organizations mitigate remote code execution risk by standardizing on non-executable model formats such as Safetensors, though this does not address model poisoning or integrity risks.

Public model registries function like package ecosystems. Model hubs now carry risks analogous to dependency confusion, typosquatting, and malicious uploads. Model provenance must be managed with the same rigor applied to third-party libraries.

Controls must move earlier in the lifecycle. Network monitoring and runtime detection are insufficient. Mitigations need to exist at development time through explicit local-only enforcement, coding standards, and static analysis of model-loading paths. For internal-only pipelines, setting local_files_only=True in model-loading calls prevents remote fallback entirely.

Path syntax determines resolution behavior. A “naked” relative path like models/my-v1 is vulnerable because it resembles a Hub namespace. A path prefixed with ./ such as ./models/my-v1 is explicitly local and will not trigger remote resolution.

Operational blast radius increases through reuse. Shared notebooks, internal templates, and CI pipelines can all inherit the same ambiguous loading pattern, allowing a single unsafe convention to propagate widely.

Automated pipelines amplify risk. CI/CD or retraining pipelines that pull unpinned or “latest” model references can increase exposure to Model Confusion.

This pattern affects notebooks, batch pipelines, CI jobs, and internal tools equally, wherever model identifiers are resolved dynamically.

Risks and Open Questions

Several risks remain unresolved despite available mitigations.

Namespace governance remains ad hoc. There is no systematic way to identify, reserve, or protect high-risk directory names that commonly collide with model-loading paths.

Model integrity lacks standardized signals. Models do not yet have widely adopted signing, attestation, or SBOM-style metadata that enterprises can verify at load time.

Trust controls are overly coarse. Boolean flags such as trust_remote_code can authorize more code than a developer intends when name resolution shifts from local to remote. Some advanced models require custom logic enabled via trust_remote_code, creating a practical tradeoff between functionality and security.

Detection capabilities are limited. Automated detection of malicious model behavior or embedded execution logic remains an open research problem.

Enterprise exposure is not well measured. While vulnerable enterprise code has been identified, there is no comprehensive data on how widespread this pattern is in production environments.
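As a concrete illustration of the path-syntax and local-only mitigations described above, the sketch below shows the risky pattern from the opening scenario alongside two safer variants. The checkpoints/sentiment-v2 path is the hypothetical one from that scenario, and exact fallback behavior can vary by framework version, so treat this as a pattern rather than a guarantee.

```python
# Minimal sketch of the ambiguous-resolution problem and two mitigations.
# "checkpoints/sentiment-v2" is the hypothetical path from the scenario above.
from transformers import AutoModelForSequenceClassification

# Risky: if the local directory is missing, a bare "name/name"-shaped string
# can resolve to a public Hub repository with a matching identifier.
# model = AutoModelForSequenceClassification.from_pretrained("checkpoints/sentiment-v2")

# Mitigation 1: an explicit ./ prefix marks the identifier as a filesystem path,
# so a missing directory fails loudly instead of falling back to the Hub.
model = AutoModelForSequenceClassification.from_pretrained("./checkpoints/sentiment-v2")

# Mitigation 2 (alternative): for internal-only pipelines, forbid remote
# resolution entirely and never execute repository-supplied Python.
model = AutoModelForSequenceClassification.from_pretrained(
    "checkpoints/sentiment-v2",
    local_files_only=True,    # never reach out to the Hub
    trust_remote_code=False,  # do not run custom code from a model repo
)
```

Static analysis can enforce the same convention by flagging bare registry-shaped strings passed to model-loading calls, in line with the development-time controls discussed above.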
Further Reading

Checkmarx disclosure on Model Confusion
Hugging Face from_pretrained() documentation
Research on malicious

Newsletter

AI Curiosity Club | Issue 2

Inside This Issue

The Threat Room

The Context Layer Problem
Enterprise AI breaches are not happening at the model layer. They are happening in the plumbing: context assembly, retrieval pipelines, tool orchestration. This article breaks down five documented failure modes, walks through a realistic attack scenario, and explains why prompt injection has become OWASP’s top GenAI risk. Worth a read for anyone building or deploying AI systems with access to internal data.
→ Read the full article

The Operations Room

Why Enterprises Are Versioning Prompts Like Code
When an LLM application starts producing bad outputs, the model is rarely the culprit. A prompt tweak, a stale retrieval index, or a missing evaluation case is more likely to blame. GenAIOps treats these components as deployed infrastructure with versioning, rollback, and tracing. This article explains why traditional MLOps was not built for this shift and what enterprises are doing about it.
→ Read the full article

The Engineering Room

The Prompt Is the Bug
Prompts are no longer just text strings. MLflow 3.x treats them as deployable artifacts with versioning, tracing, and audit trails. As LLM failures shift away from models and into orchestration logic, this changes how enterprises debug, govern, and roll back AI behavior. Prompt tracking is becoming an engineering decision, not an afterthought.
→ Read the full article

The Governance Room

California’s 2026 AI Laws: When a Documentation Gap Becomes a Reportable Incident
California’s 2026 AI laws make cybersecurity controls a regulated safety obligation for frontier model developers. A documentation gap in model weight access controls is no longer an internal cleanup. If it leads to unauthorized access, it becomes a reportable incident with a 15-day deadline. This article covers what developers must document, what triggers reporting, and what downstream enterprises should expect in vendor contracts and procurement requirements.
→ Read the full article

Texas AI Law Shifts Compliance Focus from Outcomes to Intent
Texas is regulating AI differently. Starting in 2026, compliance won’t hinge on outcomes alone. It will turn on documented intent, testing records, and internal controls. For enterprises operating across states, TRAIGA redefines what a defensible AI program looks like.
→ Read the full article

Newsletter

AI Curiosity Club | Issue 1

Inside This Issue

The Threat Room

Model Confusion Turns AI Model Loading Into a Supply-Chain Attack Surface
Model confusion exposes an AI supply-chain risk hiding in plain sight. Code that appears to load a local model can silently resolve to a public registry model with the same name, opening the door to remote code execution or silent compromise. The risk lives in everyday ML code paths, not infrastructure, turning model loading itself into a security boundary enterprises rarely treat as one.
→ Read the full article

The Operations Room

Agentic AI Gets Metered: Vertex AI Agent Engine Billing Goes Live
AI agents remember conversations, persist state, and execute tools on demand. Starting January 28, Google will charge for all of it. Vertex AI Agent Engine’s new billing model treats memory, state, and execution as metered resources, and costs can escalate faster than teams expect. This article breaks down how the billing works, walks through a realistic usage scenario, and explains why agentic AI is about to get a lot more expensive to run in production.
→ Read the full article

The Engineering Room

Registry-Aware Guardrails: Moving AI Safety and Policy Into External Control Planes
As AI systems scale, teams are moving guardrails out of individual models and into shared control planes. This article explains the core architecture behind registry-aware guardrails, compares the two dominant implementation patterns, and outlines the tradeoffs teams face when centralizing AI safety and policy enforcement across pipelines.
→ Read the full article

The Governance Room

Shadow AI Metrics Expose a Governance Gap in Enterprise AI Programs
Shadow AI is no longer invisible, but it is still hard to control. Enterprise telemetry now reveals thousands of GenAI policy violations each month, most occurring outside managed identity and enforcement boundaries. As AI use shifts toward copy-paste workflows and personal accounts, governance teams face a growing gap between what policies say and what controls can actually stop.
→ Read the full article