AI Curiosity Club | Issue 2
Inside This Issue
The Threat Room
The Context Layer Problem
Enterprise AI breaches are not happening at the model layer. They are happening in the plumbing: context assembly, retrieval pipelines, tool orchestration. This article breaks down five documented failure modes, walks through a realistic attack scenario, and explains why prompt injection has become OWASP’s top GenAI risk. Worth a read for anyone building or deploying AI systems with access to internal data.
The Operations Room
Why Enterprises Are Versioning Prompts Like Code
When an LLM application starts producing bad outputs, the model is rarely the culprit. A prompt tweak, a stale retrieval index, or a missing evaluation case is more likely to blame. GenAIOps treats these components as deployed infrastructure with versioning, rollback, and tracing. This article explains why traditional MLOps was not built for this shift and what enterprises are doing about it.
The Engineering Room
The Prompt Is the Bug
Prompts are no longer just text strings. MLflow 3.x treats them as deployable artifacts with versioning, tracing, and audit trails. As LLM failures shift away from models and into orchestration logic, this changes how enterprises debug, govern, and roll back AI behavior. Prompt tracking is becoming an engineering decision, not an afterthought.
The Governance Room
California’s 2026 AI Laws: When a Documentation Gap Becomes a Reportable Incident
California’s 2026 AI laws make cybersecurity controls a regulated safety obligation for frontier model developers. A documentation gap in model weight access controls is no longer an internal cleanup. If it leads to unauthorized access, it becomes a reportable incident with a 15-day deadline. This article covers what developers must document, what triggers reporting, and what downstream enterprises should expect in vendor contracts and procurement requirements.
Texas AI Law Shifts Compliance Focus from Outcomes to Intent
Texas is regulating AI differently. Starting in 2026, compliance won’t hinge on outcomes alone. It will turn on documented intent, testing records, and internal controls. For enterprises operating across states, TRAIGA redefines what a defensible AI program looks like.