On March 31st, Anthropic accidentally published 512,000 lines of Claude Code's source code. They've been issuing takedown notices ever since. But before the mirrors went dark, developers had already found the interesting parts.
Buried in the leak: a disabled feature called Kairos.
Kairos is a persistent daemon. It runs in the background — even after you close the terminal. It issues periodic "tick" prompts, asking the agent whether it should take action. It has a flag called PROACTIVE, described as "surfacing something the user hasn't asked for and needs to see now."
There's also AutoDream: when you go idle, the agent enters a reflective pass over its memory files. It scans the day's transcripts for things worth keeping. It consolidates, prunes, synthesizes. It gets ready for tomorrow.
Then it wakes up and acts.
The pattern isn't theoretical. Kairos itself sits behind a disabled flag, but the behavior it describes — persistent background execution, proactive action, memory across sessions — is already running on people's machines today.
OpenClaw does this out of the box. It runs as a persistent process on your hardware, polls on a configurable heartbeat, monitors your inbox, surfaces things you didn't ask for, and acts between conversations without waiting to be prompted. It remembers across sessions. It has over 345,000 stars on GitHub. It runs while you sleep.
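Neither Kairos's nor OpenClaw's internals are public, so the names below are illustrative — but the heartbeat pattern itself is simple enough to sketch. The key property: the loop, not the user, initiates each check.

```python
import time
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class TickResult:
    acted: bool
    reason: str

class HeartbeatAgent:
    """Illustrative persistent agent: wakes on a fixed interval and
    decides on its own whether to act. No human prompt in the loop."""

    def __init__(self, interval_s: float, should_act: Callable[[], Tuple[bool, str]]):
        self.interval_s = interval_s
        self.should_act = should_act          # policy callback, e.g. "unread mail?"
        self.history: list[TickResult] = []   # what it did while you weren't watching

    def tick(self) -> TickResult:
        # The agent initiates this check; the user may be asleep.
        act, reason = self.should_act()
        result = TickResult(acted=act, reason=reason)
        self.history.append(result)
        return result

    def run(self, max_ticks: int) -> None:
        for _ in range(max_ticks):
            self.tick()
            time.sleep(self.interval_s)
```

Everything downstream of `should_act` — sending the email, filing the ticket, spending the money — executes with whatever permissions the process has, which is the whole problem.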
OpenClaw isn't alone. This is the direction every serious agent framework is moving. The capability is inevitable — it's what makes agents genuinely useful rather than just glorified autocomplete.
Anthropic didn't leak a roadmap. They confirmed one.
This is the shift.
In the first post, I called this the age of autonomy. That wasn't a marketing phrase. It's a description of what's actually changing.
The tools we've used for the last thirty years — browsers, apps, dashboards — required a human at the keyboard. Every action started with a click, a command, a prompt. The human was always in the loop because the software couldn't run without one.
That assumption is ending. Agents that persist, remember, and act on their own represent a fundamental change in how software operates — and by extension, how we work and live. This isn't an incremental upgrade to existing tools. It's a new class of software actor that operates autonomously, continuously, and on your behalf whether you're watching or not.
That's the shift. And it has no governance layer.
The 3am problem.
In the last post, I wrote about agents "calling APIs on your behalf at 3am while you sleep."
That was meant to be evocative. Turns out it was just accurate.
The trust gap I described — no verifiable identity, no enforced constraints, no tamper-proof audit trail — was already serious when agents only acted when you prompted them. An agent that initiates actions on its own is a different problem entirely.
A system prompt is an instruction you write before you go to bed. Kairos acts after. OpenClaw acts after. Your agents act after. None of them are waiting for your permission.
And the governance layer for all of it is still a system prompt and a log file.
You can't write a system prompt for an agent you're not watching.
Every major AI lab is building toward this. Persistent agents. Background execution. Proactive action. Memory across sessions.
The question isn't whether your agents will run without you. It's whether anything will be watching them when they do.
Right now, the answer for most systems is: logs. Maybe a webhook. Possibly an alert you'll see when you wake up, after the action has already executed.
That's not oversight. That's archaeology.
An ungoverned daemon is just a demon that runs on your infrastructure instead of your imagination.
What watching actually looks like.
Real oversight over autonomous agents isn't passive monitoring. It's enforcement that can interrupt before an action executes — and a mechanism that puts a human in the loop when the agent hits something it shouldn't decide alone.
In Modus, we call this Supravision.
Every agent on Modus carries a passport — a verifiable credential that defines its identity, its permissions, and its constraints. Every action passes through gates that check those constraints before execution, not after. When an agent tries to exceed its permissions — a spend threshold crossed, a sensitive data scope accessed, an action outside its defined role — the gate doesn't log it. It stops it.
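Modus's actual passport format isn't shown here, but the check-before-execute pattern is worth making concrete. A minimal sketch, with hypothetical field names — the point is that a constraint violation raises before the action runs, rather than landing in a log after:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Passport:
    """Illustrative credential: who the agent is and what it may do."""
    agent_id: str
    allowed_actions: frozenset
    spend_limit_usd: float

class GateDenied(Exception):
    pass

def gate(passport: Passport, action: str, spend_usd: float = 0.0) -> bool:
    """Enforce constraints BEFORE execution. A violation stops the
    action; it is never merely recorded."""
    if action not in passport.allowed_actions:
        raise GateDenied(f"{passport.agent_id}: '{action}' is outside its role")
    if spend_usd > passport.spend_limit_usd:
        raise GateDenied(
            f"{passport.agent_id}: spend {spend_usd} exceeds limit "
            f"{passport.spend_limit_usd}"
        )
    return True  # checks passed; caller may now execute the action
```

The design choice that matters is the exception: the calling code physically cannot reach the execution step without passing the gate.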
But stopping isn't always the right answer. Sometimes the action is legitimate and the agent just needs authorization from a human who isn't at the keyboard.
That's when Supravision escalates. You get a text message. You reply YES or NO from wherever you are. No app. No login. No dashboard to find while half-asleep. Just a reply.
The agent waits until it hears from you.
And when you respond, that decision — who approved it, when, what action was approved or blocked — is recorded as a signed attestation. Permanent. Tamper-evident. Cryptographic proof of who made the call, and when.
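Modus's signing scheme isn't public, so this is a simplified sketch of the attestation idea: serialize the decision canonically, sign it, and verify before trusting it. The HMAC here stands in for a real public-key signature in a production system — the property it demonstrates is that any later edit to the record is detectable.

```python
import hashlib
import hmac
import json

def sign_attestation(key: bytes, decision: dict) -> dict:
    """Bind a human decision (who, what, when) to a signature over
    its canonical JSON form, so the record can't be silently altered."""
    payload = json.dumps(decision, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"decision": decision, "sig": sig}

def verify_attestation(key: bytes, record: dict) -> bool:
    """Recompute the signature; a mismatch means the record was tampered with."""
    payload = json.dumps(
        record["decision"], sort_keys=True, separators=(",", ":")
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])
```

Flipping a single field in the decision — say, `"approved": True` to `False` — invalidates the signature, which is exactly what makes the record usable as evidence later.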
When Kairos wakes up at 3am and decides it's time to act, that's exactly the moment Supravision is designed for.
The window is closing.
The Anthropic leak is a preview of the official announcement, not a warning about a distant future. The labs aren't waiting for trust infrastructure to catch up — they're shipping capability and assuming the governance will follow.
It won't follow on its own. It has to be built.
Kairos is one flag away from shipping. OpenClaw is already running. Every other lab has a persistent agent in the works. The agents that run while you sleep are here whether the trust layer is ready or not.
The age of autonomy isn't arriving on a schedule. It's arriving in leaked source code and open-source repos, one feature flag at a time. The question is whether these agents will be running with real oversight — or just running.
Jason Hanlon is the founder of Standard Logic Co., the company behind Modus. Supravision is available on all Modus plans.
