As AI agents become more capable, a fundamental question emerges: how do humans maintain meaningful control over autonomous systems without becoming bottlenecks?
You might find yourself reaching for one of two extremes:
| You might try... | What happens |
|---|---|
| Full autonomy: "Let the agent handle everything" | Errors compound silently. You lose agency. |
| Full oversight: "I'll approve every action" | You become the bottleneck. The purpose of automation is defeated. |
When you catch yourself at either extreme, you've found the tension this paper addresses: what is the minimum oversight that preserves meaningful human control?
We argue that the answer is progress reports: periodic checkpoints that enable reactive steering. The harness runs autonomously; humans engage only when they choose to. This is not an abdication of control but a different mode of control.
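This checkpoint-based mode of control can be sketched as a small loop. The sketch below is illustrative only; all names (`Harness`, `ProgressReport`, `steer`) are hypothetical and the "work" is a stand-in string. The key property it demonstrates is that the loop never blocks on human approval: steering input, if any arrives, is consumed at the next checkpoint.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the agent emits a progress report after each work
# unit; humans MAY inject steering between reports, but the loop never
# blocks waiting for approval.

@dataclass
class ProgressReport:
    step: int
    summary: str

@dataclass
class Harness:
    reports: list = field(default_factory=list)
    pending_steering: list = field(default_factory=list)  # human input, if any

    def steer(self, instruction: str) -> None:
        # Humans can drop instructions at any time; they take effect
        # at the next checkpoint, not immediately.
        self.pending_steering.append(instruction)

    def run(self, tasks: list) -> list:
        for step, task in enumerate(tasks, start=1):
            # Reactive steering: consume any queued human input before
            # starting the next unit of work.
            while self.pending_steering:
                task = f"{task} [steered: {self.pending_steering.pop(0)}]"
            result = f"done: {task}"  # stand-in for actual agent work
            self.reports.append(ProgressReport(step, result))
        return self.reports

harness = Harness()
harness.steer("prefer smaller diffs")
reports = harness.run(["refactor module", "write tests"])
```

Note the asymmetry this encodes: the human's `steer` call is optional and asynchronous, while the agent's reports are mandatory and periodic. Oversight cost scales with how often the human chooses to look, not with how many actions the agent takes.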