How the spec works, what the platform does, and why enterprise AI governance matters.
Any autonomous decision, delegation, or outcome. When Agent A authorizes Agent B to perform a task, when B delegates a subtask to Agent C, when C records what it did — the Ledger preserves the full reported chain. This works across teams, systems, and agent frameworks. Marketing agents, infrastructure agents, analytics agents, customer service agents — if it makes decisions autonomously, its activity should be reported to the Ledger.
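The chain above (A authorizes B, B delegates to C, C reports back) can be sketched as a sequence of records. This is an illustrative shape only — the record types and field names here are assumptions, not the actual spec.

```python
from dataclasses import dataclass

# Illustrative record shape — field names are assumptions, not the spec.
@dataclass
class LedgerRecord:
    record_type: str   # "authorization" | "delegation" | "activity"
    actor: str         # agent reporting the record
    subject: str       # agent the record concerns
    detail: str

# Agent A authorizes B, B delegates a subtask to C, C records what it did.
# The Ledger preserves the full reported chain, in order:
chain = [
    LedgerRecord("authorization", "agent-a", "agent-b", "run quarterly report"),
    LedgerRecord("delegation",    "agent-b", "agent-c", "fetch revenue data"),
    LedgerRecord("activity",      "agent-c", "agent-c", "fetched 4 tables"),
]

for rec in chain:
    print(f"{rec.record_type}: {rec.actor} -> {rec.subject}: {rec.detail}")
```

The point is that every link is an explicit record, so the chain can be replayed later without reconstructing it from scattered logs.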
Every recorded outcome contributes to a persistent reputation for each agent. Scope compliance, delegation success, evaluation results — all tracked continuously. When an agent that’s been at 95% satisfaction suddenly drops to 60%, you see the shift in the trend line before anyone files a ticket. When one agent sits at 40% while everything else is at 90%, it stands out. Reputation isn’t a one-time benchmark — it’s a living picture built from thousands of real operational outcomes.
Scope drift — agents gradually operating outside their authorized boundaries. Reputation drops — sudden or gradual degradation in agent outcomes. Delegation gaps — evidence of unregistered intermediaries in authorization chains. Delegation bottlenecks — too much critical work funneling through one agent. Authorization gaps — decisions happening without proper chains. The more records that accumulate, the clearer the patterns become.
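A reputation drop like the 95%-to-60% shift described above can be detected with a simple windowed comparison over recorded outcomes. The window size and threshold below are assumptions for illustration, not platform defaults.

```python
# Illustrative reputation-drop detection over recorded outcome scores.
# Window size and threshold are assumptions, not platform defaults.

def detect_reputation_drop(scores, window=5, drop_threshold=0.2):
    """Flag a drop when the recent-window average falls well below
    the long-run average of all earlier outcomes."""
    if len(scores) <= window:
        return False  # not enough history to compare
    baseline = sum(scores[:-window]) / len(scores[:-window])
    recent = sum(scores[-window:]) / window
    return (baseline - recent) >= drop_threshold

# An agent steady at ~0.95 satisfaction, then slipping to ~0.60:
history = [0.95] * 20 + [0.62, 0.60, 0.58, 0.61, 0.59]
print(detect_reputation_drop(history))  # True — the shift shows in the trend
```

A real platform would use more robust statistics, but the principle is the same: the trend line surfaces the shift before anyone files a ticket.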
Minimally. The SDK provides a thin integration layer that records authorizations, delegations, and activity records. Most agent frameworks (LangChain, AutoGen, CrewAI, custom builds) can be instrumented in minutes. The spec is designed to be non-invasive — your agents keep working the way they do today, with a structured audit trail added.
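A minimal sketch of what non-invasive instrumentation might look like. `LedgerClient` and its methods are hypothetical stand-ins for whatever the real SDK exposes — the point is that the wrapper reports records around the agent call without changing the agent's behavior.

```python
# Hypothetical instrumentation sketch — `LedgerClient` and its method
# names are illustrative stand-ins, not the actual SDK API.

class LedgerClient:
    """Thin recording layer: it reports activity, never intercepts it."""
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self.records = []  # in-memory stand-in for an HTTP transport

    def record(self, record_type, **fields):
        self.records.append({"type": record_type, **fields})

ledger = LedgerClient("https://ledger.example.internal")

def run_task(agent_fn, task, authorized_by):
    """Wrap an existing agent call with before/after records."""
    ledger.record("authorization", task=task, authorized_by=authorized_by)
    result = agent_fn(task)  # the agent works exactly as it did before
    ledger.record("activity", task=task, outcome="completed")
    return result

run_task(lambda t: f"done: {t}", "summarize tickets", authorized_by="ops-lead")
print(len(ledger.records))  # 2 records reported
```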
The Agentic Contract Spec defines how to record agent authorizations, delegations, activity records, and evaluations. It’s the structured format that makes audit trails interoperable. The platform is what we build on top: the permanent record, chain reconstruction, agent reliability scores, incident resolution, and the partner plugin ecosystem.
Yes. Three deployment models:
1. Standalone — deploy within your infrastructure; full control, your data stays in your environment, direct access to all recorded activity.
2. Federated — connect your deployment to peer enterprises for cross-org delegation chain visibility with privacy boundaries.
3. Cloud SaaS — use our hosted service, zero infrastructure.
The EU AI Act (enforcement begins August 2026) includes logging and traceability requirements for high-risk AI systems. The permanent record provides hash-chained, append-only, tamper-evident records of reported authorizations, delegations, and outcomes — designed to help organizations meet these requirements. Deploy standalone for full regulatory control of your audit data. Note: full EU AI Act compliance involves requirements beyond audit trails. Consult legal counsel for your specific obligations.
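The tamper-evidence property of a hash-chained, append-only log can be shown in a few lines. This is a minimal sketch of the principle, not the platform's actual record format.

```python
import hashlib
import json

# Minimal sketch of a hash-chained, append-only log. Illustrates the
# tamper-evidence property only — not the platform's record format.

def append(log, payload):
    """Each entry commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    log.append({"prev": prev_hash, "payload": payload,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; any edit to any entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev, "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"type": "authorization", "agent": "agent-b"})
append(log, {"type": "activity", "agent": "agent-b", "outcome": "ok"})
print(verify(log))   # True — chain intact
log[0]["payload"]["outcome"] = "edited"
print(verify(log))   # False — the edit is detected
```

Because each record commits to its predecessor, altering any past record invalidates every hash after it, which is what makes the trail tamper-evident rather than merely write-protected.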
The audit trail persists independently of the agents that created it. When an agent is retired, replaced, or upgraded, its full history of authorizations, delegations, and outcomes remains in the permanent record. New agents inherit the organizational context. The record outlives every agent.
Yes — this is the core use case. One API call reconstructs the reported decision chain: who authorized the original task, which agents were delegated subtasks, what constraints applied at each level, what evidence was recorded, and how each outcome was evaluated. Cross-team, cross-system, on demand. No log aggregation, no guesswork.
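Conceptually, reconstruction means grouping a task's flat records back into an ordered chain. The field names and grouping logic below are assumptions for illustration — the real API does this server-side in one call.

```python
# Illustrative chain reconstruction from flat records. Field names and
# the depth-based ordering are assumptions, not the actual API.

def reconstruct_chain(records, task_id):
    """Collect a task's reported records and order them into a chain."""
    links = [r for r in records if r["task_id"] == task_id]
    return sorted(links, key=lambda r: r["depth"])

records = [
    {"task_id": "t-42", "depth": 2, "from": "agent-b", "to": "agent-c"},
    {"task_id": "t-42", "depth": 1, "from": "agent-a", "to": "agent-b"},
    {"task_id": "t-99", "depth": 1, "from": "agent-x", "to": "agent-y"},
]

chain = reconstruct_chain(records, "t-42")
print([f'{l["from"]}->{l["to"]}' for l in chain])
# ['agent-a->agent-b', 'agent-b->agent-c']
```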
Lock-in is something we designed against. Your audit history, agent identities, and decision records are designed to be portable. The spec is a documented format — not a proprietary lock-in mechanism. Deploy standalone, federate with our network, or leave entirely. We earn your business through platform quality, not by trapping your data.
For internal deployments: you see everything — it’s your infrastructure and your data. For cross-org federation: payloads are encrypted between organizations. We see chain structure (who delegated to whom, timestamps, state transitions) but do not have access to authorization details, activity evidence, or operational specifics. In federated deployments, each organization controls its own encryption keys — we do not hold them.
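The federated visibility split — readable chain structure, opaque payload — can be sketched as a record with two kinds of fields. Here base64 is a toy stand-in for the real encryption each organization performs with keys it alone holds; it is NOT a security mechanism.

```python
import base64

# Toy sketch of the federated visibility split. base64 is a stand-in
# for real org-held-key encryption — NOT an actual security mechanism.

def seal(payload: str) -> str:
    """Stand-in for encryption under keys the originating org controls."""
    return base64.b64encode(payload.encode()).decode()

federated_record = {
    # Visible chain structure: who delegated to whom, when, what state
    "from": "org-a/agent-1",
    "to": "org-b/agent-7",
    "timestamp": "2026-01-15T09:30:00Z",
    "state": "delegated",
    # Opaque payload: authorization details, evidence, specifics
    "payload": seal("constraint: read-only access to billing exports"),
}

print(federated_record["state"])    # chain metadata stays readable
print(federated_record["payload"])  # payload is opaque without the org's keys
```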
Yes. The Agentic Contract Spec is framework-agnostic. The SDK provides integration adapters for major agent frameworks. If your framework can make HTTP calls, it can record activity to the Ledger. The spec is designed so that different frameworks produce interoperable audit trails.
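"If your framework can make HTTP calls, it can record activity" can be shown with nothing but the standard library. The endpoint path and record fields below are illustrative assumptions; the sketch builds the request without sending it.

```python
import json
from urllib import request

# Recording an activity record over plain HTTP — endpoint path and
# record fields are illustrative assumptions, not the real API.

def build_activity_report(base_url, record):
    """Build a POST request carrying one JSON activity record."""
    body = json.dumps(record).encode()
    return request.Request(
        f"{base_url}/v1/records",  # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_activity_report(
    "https://ledger.example.internal",
    {"type": "activity", "agent": "crewai-worker-3", "outcome": "completed"},
)
print(req.method, req.full_url)
# POST https://ledger.example.internal/v1/records
```

Sending it is one more line (`request.urlopen(req)`), which is why any framework with an HTTP client — or any custom build — can report to the Ledger without an adapter.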
Observability tools answer “how did my agent perform?” — they trace LLM calls, debug latency, and evaluate output quality. The Agentic Ledger answers a different question: “are my agents operating within scope, and which ones are drifting?” We track authorizations, delegation chains, outcomes, and reputation over time — not token counts or prompt traces. Observability is for developers debugging agents. The Ledger is for CISOs, auditors, and platform teams who need to detect trends and produce the audit trail. They’re complementary — use both.
A2A defines how agents communicate with each other. MCP defines how agents access tools and context. The Agentic Ledger provides the accountability layer on top: what agents were authorized to do, how work was delegated between them, and what they reported back. A2A lets agents talk. MCP lets them use tools. We track what they committed to and whether they delivered. The spec is protocol-agnostic — it works with A2A, MCP, or any custom agent communication layer.