Right now, the only thing standing between an AI agent and an unauthorized action is a system prompt. A few lines of natural language, written by a human, interpreted by a model, enforced by nothing.
That's it. That's the entire governance stack for software that moves money, accesses customer data, deploys code to production, and calls APIs on your behalf at 3am while you sleep.
The problem isn't the agents.
The problem is that we gave agents power before we gave them accountability.
An AI agent today authenticates with an API key. The API key proves nothing about the agent — what it is, who authorized it, what it's supposed to do, or what it's prohibited from doing. If the agent books a $3,000 flight instead of a $300 one, there is no cryptographic proof that a spending limit was ever configured. If it accesses data it shouldn't, there is no tamper-proof record of what it touched. If it takes an action that causes harm, the audit trail is a log file that anyone with access can modify.
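To make that gap concrete, here is a minimal sketch of the difference between an opaque API key and a credential that actually carries claims. The field names (`agent_id`, `may`, `max_spend_usd`) and the structure are illustrative assumptions, not Modus's actual format:

```python
# A minimal sketch: an API key asserts nothing, while a signed credential
# binds identity, issuer, and constraints into one verifiable object.
# Fields and structure are illustrative, not Modus's actual design.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# What agents present today: an opaque string that proves possession, nothing else.
api_key = "sk-live-4f9a..."

# What a verifiable credential could carry instead.
credential = {
    "agent_id": "agent-7f3e",               # what it is
    "issued_by": "ops@example.com",         # who authorized it
    "may": ["flights.book"],                # what it is allowed to do
    "constraints": {"max_spend_usd": 300},  # hard limits on those actions
}

issuer_key = Ed25519PrivateKey.generate()
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Any service can check the claims against the issuer's public key.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, payload)
    print("credential verified:", credential["constraints"])
except InvalidSignature:
    print("credential rejected")
```

With the key, a $3,000 booking looks identical to a $300 one. With the credential, the limit travels with the request, and forging or altering it breaks the signature.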
System prompts are not policy. Config files are not enforcement. Log files are not proof.
These are the tools we have. They are not enough.
Both sides of the problem.
The trust gap runs in both directions.
If you deploy agents, you have no reliable way to enforce what they can do. You can write instructions. You can set rules in a config file. But there is no standard mechanism that enforces those constraints at the infrastructure level — before the action executes, not after you read the logs.
If you run the services agents access — an API, an MCP server, a database, a third-party tool — you have no way to verify who is calling you. Agents show up with an API key and a claim. You don't know if the agent is authorized to do what it's requesting, whether the credential is still valid, or whether the human who issued it even knows it's being used. When that agent racks up charges, exfiltrates data, or violates your terms of service, you're the one explaining it to your customers.
Both sides need the same thing: verifiable identity, enforced constraints, and a signed audit trail. Right now, neither side has it.
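Mechanically, the enforcement half is not exotic. Here is a minimal sketch of a pre-execution gate, assuming a credential that carries a `may` list and a spending constraint; the policy shape and the `enforce` function are illustrative, not Modus's API. The point is where the check happens: in the path of the call, before execution.

```python
# Sketch of pre-execution enforcement: the gate checks the credential's
# constraints before the action runs, not after the logs are read.
# The policy shape and action format are illustrative assumptions.

credential = {
    "may": ["flights.book"],
    "constraints": {"max_spend_usd": 300},
}

def enforce(credential: dict, action: dict) -> tuple[bool, str]:
    """Decide BEFORE the action executes, not after."""
    if action["name"] not in credential["may"]:
        return False, f"{action['name']} is not an authorized action"
    limit = credential["constraints"].get("max_spend_usd")
    if limit is not None and action.get("amount_usd", 0) > limit:
        return False, f"${action['amount_usd']} exceeds the ${limit} limit"
    return True, "allowed"

# The $3,000 flight from earlier: refused before it ever executes.
allowed, reason = enforce(credential, {"name": "flights.book", "amount_usd": 3000})
if not allowed:
    raise PermissionError(reason)
```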
This is not a new problem.
Every time a new class of actors got access to powerful systems, we built infrastructure to govern them.
Before we built the infrastructure, banks sent wire instructions in plaintext and hoped they arrived; SWIFT gave them authenticated, standardized messaging. Websites couldn't prove they were who they claimed to be; TLS certificates gave them verifiable identity. Apps asked for your actual password and stored it on their servers; OAuth gave them scoped, revocable tokens instead.
In each case, the system worked until it didn't. Then the infrastructure became obvious, then mandatory, then invisible. AI agents are the next class of actors that needs this infrastructure. The pattern is the same. The timeline is compressed.
That's why we built Modus.
What we're building.
Modus is the trust infrastructure for the age of autonomy.
Not a monitoring tool. Not a compliance dashboard. Not another layer of prompts sitting on top of an agent hoping it behaves. A real enforcement layer — one that operates at the infrastructure level, before actions execute, with cryptographic proof of every decision.
Every agent should have a verifiable identity — not just an API key, but a real credential that proves who it is, who authorized it, and what it's allowed to do. Every action should pass through enforcement that checks those constraints before it executes — not after. And every decision should produce cryptographic proof of what happened — not a log file that anyone can edit.
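The proof half can be sketched too. Below, each decision record is hash-chained to its predecessor, so editing any record invalidates everything after it; in a real system the chain head would also be signed, as in the credential sketch above, so the history is attributable as well as tamper-evident. The record fields are assumptions for illustration:

```python
# Sketch of a tamper-evident audit trail: each decision record is
# hash-chained to its predecessor, so any edit breaks the chain.
# Record fields are illustrative assumptions, not Modus's schema.
import hashlib
import json

def append_record(chain: list, decision: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = {"decision": rec["decision"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False  # any edited record invalidates the tail
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"action": "flights.book", "allowed": False})
chain[0]["decision"]["allowed"] = True  # tamper with the log...
print(verify_chain(chain))              # ...and verification fails: False
```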
The neutral layer.
We believe this infrastructure has to be neutral.
No AI model provider should own the trust layer. The moment Anthropic or OpenAI or Google owns the standard for agent identity, every other agent is playing on someone else's field. The incentives are wrong. The governance is wrong. The entity verifying agent behavior should not be the same entity selling the model that produces it.
This isn't hypothetical. The land grab is already underway. Visa launched Agentic Ready. Google proposed a Universal Commerce Protocol. Coinbase built x402 for agent payments. The largest companies in the world see what's coming and are moving to own pieces of it.
Modus is independent. It verifies agents from any provider. It enforces constraints regardless of who built the model. It produces attestations that anyone can verify — no trusted third party required beyond the cryptographic proof itself.
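That last claim is checkable in code. A sketch, assuming Ed25519 attestations as in the earlier credential example: the verifier needs the payload, the signature, and 32 bytes of public key. Nothing else, and no one to ask.

```python
# Sketch of independent verification: checking an attestation needs only
# the payload, the signature, and the issuer's public key bytes. No
# registry lookup, no trusted third party. Names here are illustrative.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def verify_attestation(pubkey_bytes: bytes, payload: bytes, sig: bytes) -> bool:
    try:
        Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(sig, payload)
        return True
    except InvalidSignature:
        return False

# Anyone holding the raw public-key bytes can run the check offline.
key = Ed25519PrivateKey.generate()
payload = b'{"action": "flights.book", "allowed": false}'
sig = key.sign(payload)
raw = key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
print(verify_attestation(raw, payload, sig))  # True
```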
The infrastructure that governs agents should work for all of them — regardless of who built the model, who deployed the agent, or who runs the service.
What happens next.
The agent economy is not coming. It's here.
The trust layer is just getting started. The gap between those two facts will close one way or another, and what fills it will matter enormously.
Either we build real accountability into this now — cryptographic, enforceable, independent — or we end up with something bolted on later, after the damage has been done, owned by someone with the wrong incentives.
We built Modus because the age of autonomy doesn't need another prayer. It needs trust infrastructure.
Jason Hanlon is the founder of Standard Logic Co., the company behind Modus.
