G360 Technologies

Newsletter

Newsletter

AI Curiosity Club | Issue 3

AI Curiosity Club | Issue 3 Inside This Issue The Threat Room When AI Agents Act, Identity Becomes the Control Plane A single poisoned document. An agent following instructions it should have ignored. An audit log that points to the wrong person. AI agents are no longer just automation: they’re privileged identities that can be manipulated through their inputs. Regulators are catching up. NIST is collecting security input, FINRA is flagging autonomy and auditability as governance gaps, and Gartner predicts 25% of enterprise breaches will trace to agent abuse by 2028. The question isn’t whether agents create risk. It’s whether your controls were built for actors that can be turned by a document.  → Read the full article The Operations Room Agentic AI in Production: The System Worked. The Outcome Was Wrong. The system worked. The outcome was wrong. Most enterprises are running agentic pilots, but few have crossed into safe production. This piece explains what’s blocking the path. → Read the full article Enterprise GenAI Pilot Purgatory: Why the Demo Works and the Rollout Doesn’t Why do so many GenAI pilots impress in the demo, then quietly die before production? Research from 2025 and early 2026 reveals the same five breakdowns, again and again. This piece maps the failure mechanisms, and what the rare exceptions do differently. → Read the full article The Engineering Room AI Agents Broke the Old Security Model. AI-SPM Is the First Attempt at Catching Up. Traditional model security asks: what might the AI say? Agent security asks: what might the system do? Microsoft and AWS are shipping AI-SPM capabilities that track tools, identities, and data paths across agent architectures, because when agents fail, the breach is usually a tool call, not a hallucination.  → Read the full article The Governance Room From Disclosure to Infrastructure: How Global AI Regulation Is Turning Compliance Into System Design A retailer’s AI system flags fraudulent returns. The documentation is flawless. Then auditors ask for logs, override records, and proof that human review actually happened. The system passes policy review. It fails infrastructure review. This is the new compliance reality. Across the EU, US, and Asia-Pacific, enforcement is shifting from what policies say to what systems actually do. This piece explains why AI governance is becoming an infrastructure problem, what auditors are starting to look for, and what happens when documentation and architecture tell different stories. → Read the full article

Newsletter

AI Curiosity Club | Issue 2

AI Curiosity Club | Issue 2 Inside This Issue The Threat Room The Context Layer Problem Enterprise AI breaches are not happening at the model layer. They are happening in the plumbing: context assembly, retrieval pipelines, tool orchestration. This article breaks down five documented failure modes, walks through a realistic attack scenario, and explains why prompt injection has become OWASP’s top GenAI risk. Worth a read for anyone building or deploying AI systems with access to internal data. → Read the full article The Operations Room Why Enterprises Are Versioning Prompts Like Code When an LLM application starts producing bad outputs, the model is rarely the culprit. A prompt tweak, a stale retrieval index, or a missing evaluation case is more likely to blame. GenAIOps treats these components as deployed infrastructure with versioning, rollback, and tracing. This article explains why traditional MLOps was not built for this shift and what enterprises are doing about it. → Read the full article The Engineering Room The Prompt Is the Bug Prompts are no longer just text strings. MLflow 3.x treats them as deployable artifacts with versioning, tracing, and audit trails. As LLM failures shift away from models and into orchestration logic, this changes how enterprises debug, govern, and roll back AI behavior. Prompt tracking is becoming an engineering decision, not an afterthought. → Read the full article The Governance Room California’s 2026 AI Laws: When a Documentation Gap Becomes a Reportable Incident California’s 2026 AI laws make cybersecurity controls a regulated safety obligation for frontier model developers. A documentation gap in model weight access controls is no longer an internal cleanup. If it leads to unauthorized access, it becomes a reportable incident with a 15-day deadline. This article covers what developers must document, what triggers reporting, and what downstream enterprises should expect in vendor contracts and procurement requirements.  → Read the full article Texas AI Law Shifts Compliance Focus from Outcomes to Intent Texas is regulating AI differently. Starting in 2026, compliance won’t hinge on outcomes alone. It will turn on documented intent, testing records, and internal controls. For enterprises operating across states, TRAIGA redefines what a defensible AI program looks like. → Read the full article

Newsletter

AI Curiosity Club | Issue 1

AI Curiosity Club | Issue 1 Inside This Issue The Threat Room Model Confusion Turns AI Model Loading Into a Supply-Chain Attack Surface Model confusion exposes an AI supply-chain risk hiding in plain sight. Code that appears to load a local model can silently resolve to a public registry model with the same name, opening the door to remote code execution or silent compromise. The risk lives in everyday ML code paths, not infrastructure, turning model loading itself into a security boundary enterprises rarely treat as one. → Read the full article The Operations Room Agentic AI Gets Metered: Vertex AI Agent Engine Billing Goes Live AI agents remember conversations, persist state, and execute tools on demand. Starting January 28, Google will charge for all of it. Vertex AI Agent Engine’s new billing model treats memory, state, and execution as metered resources, and costs can escalate faster than teams expect. This article breaks down how the billing works, walks through a realistic usage scenario, and explains why agentic AI is about to get a lot more expensive to run in production. → Read the full article The Engineering Room Registry-Aware Guardrails: Moving AI Safety and Policy Into External Control Planes As AI systems scale, teams are moving guardrails out of individual models and into shared control planes. This article explains the core architecture behind registry-aware guardrails, compares the two dominant implementation patterns, and outlines the tradeoffs teams face when centralizing AI safety and policy enforcement across pipelines. → Read the full article The Governance Room Shadow AI Metrics Expose a Governance Gap in Enterprise AI Programs Shadow AI is no longer invisible, but it is still hard to control. Enterprise telemetry now reveals thousands of GenAI policy violations each month, most occurring outside managed identity and enforcement boundaries. As AI use shifts toward copy-paste workflows and personal accounts, governance teams face a growing gap between what policies say and what controls can actually stop. → Read the full article