PromptOps.
Control plane for AI work. Between your team and every model, every agent.
Policy · Sandbox · Registry · Audit

What it does
PromptOps is the layer between an institution’s people and the AI systems they use — every prompt, every agent run, every model call passes through it. The point isn’t to slow work down; it’s to make AI work governable without making it un-shippable. Policy, sandboxing, registry, and audit, in one substrate.
How it works
A policy engine evaluates each request against rules the institution writes (data class allowed, model allowed, tool allowed, budget remaining). A sandbox layer scopes what each agent can touch — file system, network, downstream APIs — so a misbehaving agent fails closed, not open. A registry tracks every prompt, version, owner, and downstream consumer. An audit log records every call, its policy verdict, its tools, its cost.
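The policy check described above can be sketched in a few lines. This is a minimal illustration, not PromptOps's actual engine — the `Request` and `Policy` fields (data class, model, tools, estimated cost) mirror the rules named in the paragraph, but all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Request:
    data_class: str       # e.g. "public", "internal", "restricted"
    model: str
    tools: list[str]
    est_cost: float

@dataclass
class Policy:
    allowed_data_classes: set[str]
    allowed_models: set[str]
    allowed_tools: set[str]
    budget_remaining: float

def evaluate(req: Request, pol: Policy) -> tuple[bool, str]:
    """Return (allowed, reason). Fails closed: any unmet rule denies."""
    if req.data_class not in pol.allowed_data_classes:
        return False, f"data class {req.data_class!r} not allowed"
    if req.model not in pol.allowed_models:
        return False, f"model {req.model!r} not allowed"
    for tool in req.tools:
        if tool not in pol.allowed_tools:
            return False, f"tool {tool!r} not allowed"
    if req.est_cost > pol.budget_remaining:
        return False, "budget exceeded"
    return True, "ok"
```

The verdict tuple is what would land in the audit log alongside the call itself: every denial carries a reason, so a rejected request is explainable after the fact.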
Deployment & governance
Self-hosted by design — the control plane shouldn’t be a third-party SaaS dependency. Slots in front of OpenAI, Anthropic, local models, and internal endpoints uniformly. Policy is YAML, version-controlled, reviewable. Provides the substrate Classifai, RedactAI, GenBI, GeoIntel, and ReusableRAG run through.
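Since policy is version-controlled YAML, a rule set can be reviewed like any other change. A sketch of what such a file might look like — the schema and field names below are illustrative, not a published PromptOps format:

```yaml
# Hypothetical policy file — checked into git, reviewed like code
policies:
  - name: analyst-default
    allow:
      data_classes: [public, internal]
      models: [gpt-4o, claude-sonnet, local-llama]
      tools: [search, sql-readonly]
    deny:
      tools: [shell, outbound-http]
    budget:
      monthly_usd: 500
```

A pull request against this file is the governance event: who loosened which rule, when, and why, all in the repository history.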