Migrating Your Own Harness or Autonomous Agent
To apply this migration pattern to your autonomous Claude Code orchestration:
Step 1: Audit Current Tool Access (Human)
List all tools your harness currently uses. Check headless session logs or code
that spawns Claude. If using --dangerously-skip-permissions, you have unlimited
access by default.
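One way to run this audit is to tally tool invocations across saved session transcripts. The sketch below assumes a hypothetical JSONL format in which each tool call appears as a `{"type":"tool_use","name":...}` entry; adjust the parsing to whatever your logs actually contain.

```javascript
// Sketch: tally tool usage from session transcript lines.
// The line format here is an assumption -- match it to your real logs.
function tallyTools(transcriptLines) {
  const counts = {};
  for (const line of transcriptLines) {
    let entry;
    try {
      entry = JSON.parse(line);
    } catch {
      continue; // skip non-JSON lines
    }
    if (entry.type === 'tool_use' && entry.name) {
      counts[entry.name] = (counts[entry.name] ?? 0) + 1;
    }
  }
  return counts;
}

const sample = [
  '{"type":"tool_use","name":"Read"}',
  '{"type":"tool_use","name":"Bash"}',
  '{"type":"tool_use","name":"Read"}',
  'not json',
];
console.log(tallyTools(sample)); // { Read: 2, Bash: 1 }
```

The resulting counts tell you which tools are actually load-bearing, which feeds directly into the next step.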
Step 2: Categorize Essential Tools (Human)
Group tools by purpose:
- Core file operations (Read, Write, Edit, Glob, Grep)
- Version control (Bash(git:*))
- Package management (Bash(pnpm:*), Bash(npm:*))
- Build/deploy (Bash(wrangler:*), domain-specific commands)
- MCP integrations (mcp__cloudflare__*, mcp__airtable__*, etc.)
Step 3: Create Explicit Allowlist (Human)
Build your ALLOWED_TOOLS string. Start conservative—add only what you've verified
is needed. You can expand later if sessions fail.
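Steps 2 and 3 can be expressed as data: keep the verified tools grouped by purpose, then flatten the groups into the allowlist string. The group names and tool entries below are illustrative placeholders; substitute whatever your audit surfaced.

```javascript
// Sketch: build an explicit allowlist from categorized tools.
// Include only groups you've verified are needed; expand later if sessions fail.
const TOOL_GROUPS = {
  core: ['Read', 'Write', 'Edit', 'Glob', 'Grep'],
  versionControl: ['Bash(git:*)'],
  packages: ['Bash(pnpm:*)'],
  deploy: ['Bash(wrangler:*)'],
  mcp: ['mcp__cloudflare__worker_deploy'],
};

const ALLOWED_TOOLS = Object.values(TOOL_GROUPS).flat().join(',');
console.log(ALLOWED_TOOLS);
```

Keeping the groups as a literal object makes later expansion a reviewable one-line diff rather than an edit buried inside a long string.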
Step 4: Add Runaway Prevention (Human + Agent)
Set --max-turns based on observed session lengths. Use 2-3x your average as a
safety margin. If most sessions complete in 30-50 turns, set --max-turns 100.
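A small helper makes the heuristic concrete. `suggestMaxTurns` is a hypothetical name; it takes observed per-session turn counts and applies the 2-3x margin described above.

```javascript
// Sketch: derive a --max-turns value from observed session lengths.
function suggestMaxTurns(turnCounts, multiplier = 2.5) {
  const avg = turnCounts.reduce((sum, n) => sum + n, 0) / turnCounts.length;
  // Round up to a multiple of 10 for a readable flag value.
  return Math.ceil((avg * multiplier) / 10) * 10;
}

console.log(suggestMaxTurns([30, 40, 50])); // 100
```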
Step 5: Enable Cost Tracking (Agent)
Add --output-format json to capture session metadata. Parse stdout to extract
costUsd, numTurns, sessionId. Store these for analysis.
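A minimal parsing sketch, using the field names from this guide; the exact keys in the JSON output depend on your CLI version, so verify against a real session before storing anything.

```javascript
// Sketch: extract session metadata from --output-format json stdout.
// Field names follow this guide; confirm them against your CLI's actual output.
function extractMetrics(stdout) {
  const data = JSON.parse(stdout);
  return {
    sessionId: data.sessionId,
    costUsd: data.costUsd,
    numTurns: data.numTurns,
  };
}

// Hypothetical sample payload for illustration:
const stdout = '{"sessionId":"abc-123","costUsd":0.42,"numTurns":18,"result":"done"}';
console.log(extractMetrics(stdout)); // { sessionId: 'abc-123', costUsd: 0.42, numTurns: 18 }
```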
Step 6: Test on Non-Critical Work First (Human)
Run the migrated harness on low-stakes tasks. Verify tools aren't blocked.
Check that turn limits don't prevent legitimate completion.
Real-World Example: Migrating a Deployment Harness
Let's say you have a harness that autonomously deploys Cloudflare Workers. Before migration:
// Before: Unsafe pattern
// (spawn here is assumed to be a promise-based wrapper around child_process)
const args = [
  '-p',
  '--dangerously-skip-permissions',
];
const result = await spawn('claude', args, { input: deployPrompt });
// No cost tracking, no runaway prevention, unrestricted tool access

After analyzing actual usage, you discover the harness needs:
- File operations to read wrangler.toml and Worker scripts
- Git to check status and create deployment tags
- Wrangler to deploy and check deployment status
- Cloudflare MCP to update KV/D1 data if needed
After migration:
// After: Agent SDK best practices
const DEPLOY_ALLOWED_TOOLS = [
  // Core file operations
  'Read', 'Write', 'Edit', 'Glob', 'Grep',
  // Version control (scoped)
  'Bash(git status:*)', 'Bash(git tag:*)', 'Bash(git log:*)',
  // Deployment (scoped)
  'Bash(wrangler deploy:*)', 'Bash(wrangler tail:*)', 'Bash(wrangler whoami:*)',
  // Cloudflare MCP (explicit)
  'mcp__cloudflare__worker_deploy',
  'mcp__cloudflare__kv_put',
  'mcp__cloudflare__d1_query',
].join(',');
const args = [
  '-p',
  '--allowedTools', DEPLOY_ALLOWED_TOOLS,
  '--max-turns', '50', // Deployments are fast; low limit appropriate
  '--output-format', 'json',
  '--model', 'sonnet', // Sonnet sufficient for deployments
];
const result = await spawn('claude', args, { input: deployPrompt });
// Parse metrics
const metrics = JSON.parse(result.stdout);
console.log(`Deployment cost: $${metrics.costUsd}`);
console.log(`Turns used: ${metrics.numTurns}/50`);
// Store for analysis
await db.deployments.create({
  sessionId: metrics.sessionId,
  costUsd: metrics.costUsd,
  numTurns: metrics.numTurns,
  timestamp: new Date(),
});

Notice:
- Scoped Bash patterns: git status allowed, git reset --hard blocked
- Lower turn limit: deployments complete in 10-20 turns; 50 provides headroom
- Model selection: Sonnet is 5x cheaper than Opus, sufficient for standard deploys
- Metrics capture: JSON output enables cost analysis over time
When to Expand Tool Access
Add tools to the allowlist when:
- Sessions fail with "permission denied": check logs, identify the blocked tool, evaluate whether it should be allowed
- New workflow requirements: adding database migrations? Add mcp__cloudflare__d1_query
- Peer review identifies a missing capability: an architecture reviewer notes the harness can't perform a needed operation
Don't add tools when:
- The request is "just in case": only add verified needs
- A safer alternative exists (prefer WebFetch over Bash(curl:*))
- The operation should require human approval (don't automate destructive operations)
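That last rule can be partially mechanized: screen any proposed allowlist entry against a denylist of destructive command prefixes before a human ever sees it. The patterns below are illustrative, not exhaustive, and `canAutoApprove` is a hypothetical helper.

```javascript
// Sketch: gate allowlist additions behind a denylist of destructive prefixes.
// Illustrative patterns only -- extend with your own known-dangerous commands.
const DESTRUCTIVE_PATTERNS = [
  /^Bash\(rm/,
  /^Bash\(git push --force/,
  /^Bash\(wrangler delete/,
];

function canAutoApprove(tool) {
  return !DESTRUCTIVE_PATTERNS.some((p) => p.test(tool));
}

console.log(canAutoApprove('Bash(git status:*)')); // true
console.log(canAutoApprove('Bash(rm -rf:*)'));     // false
```

Anything the guard rejects still gets a human decision; the guard only prevents destructive entries from slipping in silently.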
Validating the Migration
After migration, validate success by:
✓ Zero sessions blocked by missing tool permissions
✓ All sessions complete within turn limits (or fail for legitimate reasons)
✓ Cost tracking data populates correctly
✓ Model selection matches task complexity (Haiku for simple, Opus for complex)
✓ Peer reviews run and surface appropriate findings
✓ No degradation in harness capabilities compared to legacy approach
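With metrics stored (as in the db.deployments example above), some of these checks can run automatically. The row shape and `validate` helper below are hypothetical sketches matching that example's fields.

```javascript
// Sketch: validate migration health from stored session metrics.
// Row shape mirrors the earlier db.deployments example (hypothetical schema).
function validate(sessions, maxTurns) {
  const missingCost = sessions.filter((s) => s.costUsd == null).length;
  const atTurnLimit = sessions.filter((s) => s.numTurns >= maxTurns).length;
  return {
    costTrackingOk: missingCost === 0, // cost data populates correctly
    turnLimitHits: atTurnLimit,        // sessions possibly truncated by --max-turns
  };
}

const sessions = [
  { costUsd: 0.42, numTurns: 18 },
  { costUsd: 0.31, numTurns: 50 },
];
console.log(validate(sessions, 50)); // { costTrackingOk: true, turnLimitHits: 1 }
```

A nonzero turnLimitHits count is a signal to inspect those sessions: either the limit is too low or the task legitimately failed.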
The goal is explicit security without operational friction. If the migration
blocks legitimate work or significantly slows execution, the allowlist is too restrictive.
If it allows operations that shouldn't be automated, it's too permissive. Iterate until
the harness operates transparently—Zuhandenheit achieved.