At RSAC 2026, five different vendors unveiled agent identity frameworks in a single week.
The coverage was optimistic; the reality was sobering. Each framework could verify who an agent was, but none of them tracked what it did or on whose authority it was acting.
The stakes aren’t hypothetical. Weeks before RSAC, Meta disclosed a Severity 1 incident in which an AI agent passed every identity check and then posted internal user data to unauthorized engineers. The agent’s credentials were valid; they just carried no enforceable constraints.
In a separate incident, a CEO’s AI agent rewrote a company’s security policy — not because it was hacked, but because it wanted to fix a problem, found a restriction in its way, and removed it. Every OAuth check passed. Nobody noticed until a worker found it by accident.
The problem isn’t that we need better tokens. It’s that we’ve been shipping tokens that answer the wrong question.
OAuth 2.0 answers exactly one question: “Did this user authorize this app?”
That was a good enough answer when apps were websites making simple API calls. It’s not a good enough answer when the “app” is an AI agent making autonomous decisions on behalf of a user who isn’t in the room, calling tools three delegation hops away, inside a multi-tenant platform with dozens of organizations sharing the same infrastructure.
The Model Context Protocol (MCP) standardized how AI agents connect to tools. That’s genuinely useful plumbing, but it shouldn’t stop there. MCP’s default authentication story inherits OAuth’s core limitation: an access token is a bearer credential, which means whoever holds it can use it. There’s no cryptographic link between the token, the agent’s identity, and the chain of authority that was supposed to authorize the action.
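The bearer-credential problem is easy to demonstrate. The sketch below is a deliberately minimal, hypothetical token scheme (HMAC stands in for a real authorization server’s signature): verification checks only that the token is genuine, never who is presenting it, so a leaked token is just as good as the original.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"auth-server-signing-key"  # hypothetical signing key for illustration

def mint_bearer_token(scope: str) -> str:
    """Sign a minimal bearer token: base64 payload plus an HMAC tag."""
    payload = base64.urlsafe_b64encode(json.dumps({"scope": scope}).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify(token: str) -> bool:
    """Verification checks the signature only -- nothing about the holder."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

token = mint_bearer_token("read:users")
assert verify(token)            # the intended agent passes verification...

leaked_copy = token             # ...and so does any process that merely holds it
assert verify(leaked_copy)
```

Nothing in `verify` can distinguish the agent the token was issued to from an attacker or an over-eager sub-agent holding a copy; that is the gap the rest of this post is about.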
This is the classic confused deputy problem: a trusted intermediary manipulated into exercising authority it was never granted.
Every tool call an AI agent makes should be able to answer three questions: who is the agent making the call, on whose authority is it acting, and what, exactly, is it permitted to do?
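A policy gate that enforces all three questions can be sketched in a few lines. Everything here is illustrative (the field names and `authorize` helper are assumptions, not part of any spec): the point is that a call is allowed only when every question has a verified answer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    agent_id: str   # who is making the call
    delegator: str  # on whose authority
    action: str     # what it is trying to do

@dataclass(frozen=True)
class Credential:
    agent_id: str
    delegator: str
    allowed_actions: frozenset

def authorize(call: ToolCall, cred: Credential) -> bool:
    """Allow a tool call only when all three questions check out."""
    return (call.agent_id == cred.agent_id            # 1. who is the agent?
            and call.delegator == cred.delegator      # 2. on whose authority?
            and call.action in cred.allowed_actions)  # 3. is the action in scope?

cred = Credential("agent-7", "alice@example.com", frozenset({"read:tickets"}))
assert authorize(ToolCall("agent-7", "alice@example.com", "read:tickets"), cred)
assert not authorize(ToolCall("agent-7", "alice@example.com", "delete:tickets"), cred)
```

A bearer token short-circuits this gate to a single check; the rest of the post is about restoring the other two.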
You can get there incrementally, or you can keep shipping bearer tokens and hope your agents don't do anything creative with them. Meta tried that. Ask them how it went.
Most of the industry conversation right now is about gateways, which put a layer in front of your resources that decides who gets past. That's a reasonable first response, and it's better than nothing. But a gateway is fundamentally an inbound tool. It guards your perimeter.
The harder problem is outbound. How does your organization's AI agent present itself credibly to another organization's systems, without pre-negotiated secrets, shared infrastructure, or a six-week integration project?
That's the problem a hub solves. Instead of guarding a door, a hub publishes what your organization offers — its tools, data, and agents — along with the cryptographic proof that any outside party needs to use them. It flips the orientation.
The payoff: two organizations with hubs can start a partnership with a signed, time-limited voucher — no shared secrets, no provisioned accounts, no six-week integration.
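To make the voucher idea concrete, here is a minimal sketch. It assumes a symmetric HMAC key as a stand-in for the issuer’s real signature (production systems would use asymmetric keys), and the field names are illustrative. The two properties that matter are both enforced: the voucher expires, and it is bound to a single audience.

```python
import base64
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"org-a-signing-key"  # hypothetical; real vouchers use asymmetric keys

def issue_voucher(audience: str, scope: str, ttl_seconds: int) -> str:
    """Issue a signed voucher bound to one recipient, valid for a limited time."""
    body = {"aud": audience, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(body).encode()).decode()
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def accept_voucher(voucher: str, expected_audience: str) -> bool:
    """Verify the signature, then reject wrong-audience or expired vouchers."""
    payload, sig = voucher.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    body = json.loads(base64.urlsafe_b64decode(payload))
    return body["aud"] == expected_audience and body["exp"] > time.time()

voucher = issue_voucher("org-b.example", "tools:invoke", ttl_seconds=3600)
assert accept_voucher(voucher, "org-b.example")       # the named partner can use it
assert not accept_voucher(voucher, "org-c.example")   # worthless anywhere else
```

No shared accounts, no pre-provisioned secrets on the recipient’s side beyond the issuer’s public verification material: the voucher itself carries the terms of the partnership.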
KYA-OS (formerly MCP-I) is an identity and delegation layer built beneath the protocol, not bolted onto the side of it. Built at Vouched and donated to the Decentralized Identity Foundation in March 2026, it’s now being advanced under DIF’s Trusted AI Agents Working Group as an open standard.
Instead of issuing a standard bearer token, KYA-OS issues a Verifiable Credential JWT. It’s a token that not only represents a session but also cryptographically attests to specific facts: who the agent is, who delegated its authority, what it is permitted to do, and for how long.
You’re not trusting the hub’s word — you’re verifying the math.
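For a feel of what such a credential carries, here is a hypothetical payload in the shape of a JWT claims set. The specific field names and DIDs below are illustrative assumptions, not the KYA-OS wire format; the point is that every fact a verifier needs travels inside the signed payload itself.

```python
import base64
import json

# Hypothetical claims set for a verifiable-credential JWT (field names illustrative).
vc_payload = {
    "iss": "did:web:hub.knowthat.ai",       # the hub that issued the credential
    "sub": "did:key:agent-7",               # the agent the facts are about
    "aud": "did:web:org-b.example",         # the only recipient it is valid for
    "exp": 1767225600,                      # hard expiry (Unix time)
    "vc": {
        "delegator": "alice@example.com",   # on whose authority the agent acts
        "scope": ["read:tickets"],          # what it is permitted to do
    },
}

# JWT segments are unpadded base64url; encode and decode round-trips the claims.
segment = base64.urlsafe_b64encode(json.dumps(vc_payload).encode()).rstrip(b"=")
decoded = json.loads(base64.urlsafe_b64decode(segment + b"=" * (-len(segment) % 4)))
assert decoded == vc_payload
```

A verifier checks the issuer’s signature over exactly this payload, so none of these claims can be altered after issuance without invalidating the credential.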
KYA-OS also introduces delegation chains: an agent can formally pass a subset of its authority to another agent, with the original credential as proof and constraints on scope, audience, and expiry baked in. A child credential can never exceed its parent's authority. The chain is auditable at every hop. Critically, each credential is bound to its intended recipient — a voucher issued for one organization is cryptographically worthless at any other.
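The attenuation rule is the heart of delegation chains, and it is simple enough to sketch. The types and helper below are illustrative assumptions, not the KYA-OS API: a child credential must be a strict subset of its parent in scope and lifetime, it inherits the audience binding, and the parent link keeps the whole chain walkable for audit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Credential:
    holder: str
    audience: str
    scope: frozenset
    expires: float
    parent: Optional["Credential"] = None

def delegate(parent: Credential, holder: str, scope: set, expires: float) -> Credential:
    """Issue a child credential; it may only attenuate authority, never amplify it."""
    if not scope <= parent.scope:
        raise ValueError("child scope exceeds parent scope")
    if expires > parent.expires:
        raise ValueError("child credential outlives its parent")
    return Credential(holder, parent.audience, frozenset(scope), expires, parent)

root = Credential("agent-7", "org-b.example", frozenset({"read", "write"}), expires=1000.0)
child = delegate(root, "sub-agent-3", {"read"}, expires=900.0)

# Escalation is rejected at issuance time.
try:
    delegate(root, "sub-agent-4", {"read", "delete"}, expires=900.0)
    escalation_blocked = False
except ValueError:
    escalation_blocked = True
assert escalation_blocked

# The chain stays auditable at every hop.
chain, cred = [], child
while cred is not None:
    chain.append(cred.holder)
    cred = cred.parent
assert chain == ["sub-agent-3", "agent-7"]
```

In the real protocol each hop is a signed credential rather than an in-memory object, but the invariants a verifier enforces are the same ones this sketch encodes.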
This is the piece every RSAC framework punted on. Knowing who called is table stakes. Knowing on whose authority, with what constraints, through what chain — that’s the layer the industry is missing.
The Checkpoint hub is live and running in production. Organizations can register their agents on knowthat.ai, publish their tools, and start building verifiable reputation records without requiring custom infrastructure.
The hub logs a fingerprint of each call rather than the raw data: enough to audit, not enough to exploit. The result is a complete, independently verifiable record of every authorized action, proof that the full KYA-OS chain ran and succeeded, without the log itself becoming an attack surface.
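The fingerprinting idea reduces to a one-way hash over a canonical form of the call. This is a minimal sketch under assumed names (the real hub’s log schema isn’t public in this post): an auditor holding the original payload can recompute and match the digest, but the log entry itself discloses nothing.

```python
import hashlib
import json

audit_log = []

def log_call(agent_id: str, tool: str, payload: dict) -> None:
    """Record a digest of the call payload; the raw data is never stored."""
    canonical = json.dumps(payload, sort_keys=True).encode()  # stable serialization
    fingerprint = hashlib.sha256(canonical).hexdigest()
    audit_log.append({"agent": agent_id, "tool": tool, "sha256": fingerprint})

log_call("agent-7", "crm.lookup", {"email": "alice@example.com"})

entry = audit_log[0]
# The sensitive payload never reaches the log...
assert "alice@example.com" not in json.dumps(entry)
# ...but an auditor with the original data can verify the record matches.
recomputed = hashlib.sha256(
    json.dumps({"email": "alice@example.com"}, sort_keys=True).encode()).hexdigest()
assert entry["sha256"] == recomputed
```

Sorting keys before hashing matters: without a canonical serialization, the same logical payload could produce different digests and break audit matching.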
You don't have to rebuild your stack to start closing the gap. Three moves get you most of the way there:
This post is the marketing-readable summary. If you want the full architecture — the cross-org attack surface, the hub vs. gateway distinction, and the six delegation chain vulnerabilities already in security testing — read the technical deep dive.
Dylan Hobbs
KYA Founder, MCP-I Author, and Founding Principal Engineer at Vouched.