G360 Technologies

AI Curiosity Club | Issue 3

Inside This Issue

The Threat Room

When AI Agents Act, Identity Becomes the Control Plane

A single poisoned document. An agent following instructions it should have ignored. An audit log that points to the wrong person. AI agents are no longer just automation: they’re privileged identities that can be manipulated through their inputs. Regulators are catching up. NIST is collecting security input, FINRA is flagging autonomy and auditability as governance gaps, and Gartner predicts 25% of enterprise breaches will trace to agent abuse by 2028. The question isn’t whether agents create risk. It’s whether your controls were built for actors that can be turned by a document. 

→ Read the full article

The Operations Room

Agentic AI in Production: The System Worked. The Outcome Was Wrong.

Most enterprises are running agentic pilots, but few have crossed into safe production. This piece explains what's blocking the path, and why the failure mode is rarely a broken system.

→ Read the full article

Enterprise GenAI Pilot Purgatory: Why the Demo Works and the Rollout Doesn’t

Why do so many GenAI pilots impress in the demo, then quietly die before production? Research from 2025 and early 2026 reveals the same five breakdowns, again and again. This piece maps the failure mechanisms and shows what the rare exceptions do differently.

→ Read the full article

The Engineering Room

AI Agents Broke the Old Security Model. AI-SPM Is the First Attempt at Catching Up.

Traditional model security asks: what might the AI say? Agent security asks: what might the system do? Microsoft and AWS are shipping AI-SPM capabilities that track tools, identities, and data paths across agent architectures, because when agents fail, the breach is usually a tool call, not a hallucination.

→ Read the full article

The Governance Room

From Disclosure to Infrastructure: How Global AI Regulation Is Turning Compliance Into System Design

A retailer’s AI system flags fraudulent returns. The documentation is flawless. Then auditors ask for logs, override records, and proof that human review actually happened. The system passes policy review. It fails infrastructure review. This is the new compliance reality. Across the EU, US, and Asia-Pacific, enforcement is shifting from what policies say to what systems actually do. This piece explains why AI governance is becoming an infrastructure problem, what auditors are starting to look for, and what happens when documentation and architecture tell different stories.

→ Read the full article