AI Curiosity Club | Issue 1
Inside This Issue
The Threat Room
Model Confusion Turns AI Model Loading Into a Supply-Chain Attack Surface
Model confusion exposes an AI supply-chain risk hiding in plain sight. Code that appears to load a local model can silently resolve to a public registry model with the same name, opening the door to remote code execution or silent compromise. The risk lives in everyday ML code paths, not in infrastructure, turning model loading itself into a security boundary that enterprises rarely treat as one.
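For a concrete feel for the ambiguity, here is a minimal sketch using the Hugging Face transformers loader, one common code path where this bites; the repo name acme/churn-model is hypothetical:

```python
from transformers import AutoModel

# Risky: if ./acme/churn-model does not exist as a local directory,
# this same string is treated as a repo ID and silently resolves to
# the public Hub, where an attacker can squat the name.
model = AutoModel.from_pretrained("acme/churn-model")

# Safer: make the trust decision explicit. An absolute filesystem
# path is unambiguous, and local_files_only refuses any network fetch.
model = AutoModel.from_pretrained(
    "/models/acme/churn-model",
    local_files_only=True,
)
```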
The Operations Room
Agentic AI Gets Metered: Vertex AI Agent Engine Billing Goes Live
AI agents remember conversations, persist state, and execute tools on demand. Starting January 28, Google will charge for all of it. Vertex AI Agent Engine’s new billing model treats memory, state, and execution as metered resources, and costs can escalate faster than teams expect. This article breaks down how the billing works, walks through a realistic usage scenario, and explains why agentic AI is about to get a lot more expensive to run in production.
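To see how metered resources compound, here is a back-of-envelope estimator. The rates below are hypothetical placeholders, not Google's published pricing; substitute real Vertex AI Agent Engine rates to make the scenario concrete:

```python
# Hypothetical unit rates -- placeholders only, NOT actual pricing.
HYPOTHETICAL_RATES = {
    "vcpu_hour": 0.05,         # $/vCPU-hour of agent execution
    "gib_memory_hour": 0.01,   # $/GiB-hour of runtime memory
    "state_gib_month": 0.20,   # $/GiB of persisted session state
}

def monthly_estimate(agents, vcpu_per_agent, gib_per_agent,
                     hours_active, state_gib):
    """Rough monthly cost for a fleet of always-on agents."""
    compute = agents * vcpu_per_agent * hours_active * HYPOTHETICAL_RATES["vcpu_hour"]
    memory = agents * gib_per_agent * hours_active * HYPOTHETICAL_RATES["gib_memory_hour"]
    state = state_gib * HYPOTHETICAL_RATES["state_gib_month"]
    return compute + memory + state

# 20 agents, each 1 vCPU / 2 GiB, running 720 h/month, 50 GiB of state:
print(f"${monthly_estimate(20, 1, 2, 720, 50):,.2f}")  # -> $1,018.00
```

Even at these made-up rates, the always-on components (compute and memory) dominate, which is why idle-but-persistent agents are where bills tend to surprise teams.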
The Engineering Room
Registry-Aware Guardrails: Moving AI Safety and Policy Into External Control Planes
As AI systems scale, teams are moving guardrails out of individual models and into shared control planes. This article explains the core architecture behind registry-aware guardrails, compares the two dominant implementation patterns, and outlines the tradeoffs teams face when centralizing AI safety and policy enforcement across pipelines.
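As a rough illustration of the pattern, here is a sketch in which a pipeline pulls its guardrail policy from an external registry with a short-lived cache, so policy updates propagate without redeploying models. The registry URL, endpoint shape, and policy fields are all hypothetical:

```python
import time
import requests  # assumed HTTP client for the hypothetical registry

REGISTRY_URL = "https://guardrails.internal/policies"  # hypothetical
TTL_SECONDS = 300  # re-fetch policy every 5 minutes
_cache = {"policy": None, "fetched_at": 0.0}

def current_policy(pipeline_id: str) -> dict:
    """Fetch the active policy for this pipeline, with a TTL cache."""
    if time.time() - _cache["fetched_at"] > TTL_SECONDS:
        resp = requests.get(f"{REGISTRY_URL}/{pipeline_id}", timeout=2)
        resp.raise_for_status()
        _cache.update(policy=resp.json(), fetched_at=time.time())
    return _cache["policy"]

def guarded_generate(pipeline_id: str, prompt: str, model_call) -> str:
    policy = current_policy(pipeline_id)
    # Pre-check: centrally managed blocklist applied to the input.
    if any(term in prompt.lower() for term in policy.get("blocked_terms", [])):
        return policy.get("refusal_message", "Request blocked by policy.")
    output = model_call(prompt)
    # Post-check: the same registry-owned policy governs the output side.
    limit = policy.get("max_output_chars", 10_000)
    return output[:limit]
```

The tradeoff previewed in the article shows up even here: fetching per request keeps policy fresh but adds latency and a runtime dependency on the registry, while caching trades freshness for resilience.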
The Governance Room
Shadow AI Metrics Expose a Governance Gap in Enterprise AI Programs
Shadow AI is no longer invisible, but it is still hard to control. Enterprise telemetry now reveals thousands of GenAI policy violations each month, most occurring outside managed identity and enforcement boundaries. As AI use shifts toward copy-paste workflows and personal accounts, governance teams face a growing gap between what policies say and what controls can actually stop.
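As a toy illustration of what that telemetry can surface, here is a sketch that tallies GenAI policy hits by identity boundary. The event schema is hypothetical; real CASB or DLP exports will differ:

```python
from collections import Counter

# Hypothetical telemetry events from network/endpoint monitoring.
events = [
    {"app": "chatgpt.com", "identity": "personal", "action": "paste", "policy_hit": True},
    {"app": "copilot.corp", "identity": "managed", "action": "prompt", "policy_hit": False},
    {"app": "gemini.google.com", "identity": "personal", "action": "paste", "policy_hit": True},
]

# Count violations by whether they occurred inside a managed identity.
violations = Counter(e["identity"] for e in events if e["policy_hit"])
print(violations)  # e.g. Counter({'personal': 2}) -> outside managed boundaries
```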