This section shows how to configure parallel peer review in your own autonomous
development workflows. The pattern works for any harness system that supports
checkpoints and issue creation.
Step-by-Step Process
Step 1: Configure Reviewers (Human)
Define three specialized reviewers in harness config:
- architecture: DRY violations, coupling, module boundaries
- security: OWASP Top 10, auth gaps, injection risks
- quality: Error handling, test coverage, maintainability
Ensure orthogonal perspectives (minimal overlap).
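A minimal sketch of what this could look like, assuming a TypeScript-based harness config; the file name reviewers.config.ts, the ReviewerConfig shape, and its field names are illustrative, not a real harness API:

```typescript
// reviewers.config.ts (hypothetical): one entry per reviewer, focus areas kept orthogonal.
export interface ReviewerConfig {
  name: "architecture" | "security" | "quality";
  focus: string[]; // the concerns this reviewer owns; avoid overlap between reviewers
}

export const reviewers: ReviewerConfig[] = [
  { name: "architecture", focus: ["DRY violations", "coupling", "module boundaries"] },
  { name: "security", focus: ["OWASP Top 10", "auth gaps", "injection risks"] },
  { name: "quality", focus: ["error handling", "test coverage", "maintainability"] },
];
```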
Step 2: Set Blocking Rules (Human)
Decide which reviewers can pause execution:
- architecture: blocking (structural issues compound)
- security: blocking (vulnerabilities are critical)
- quality: advisory (minor issues can queue)
Configure pauseOnCritical: true for blocking reviewers.
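A sketch of one way to encode these rules alongside the Step 1 config; blockingRules, its fields, and the module layout are assumptions, only the pauseOnCritical flag comes from this step:

```typescript
// reviewers.config.ts (continued, hypothetical): blocking behaviour keyed by reviewer name.
export interface BlockingRule {
  blocking: boolean;        // may this reviewer halt the run?
  pauseOnCritical: boolean; // pause the harness when a critical finding appears
}

export const blockingRules: Record<string, BlockingRule> = {
  architecture: { blocking: true, pauseOnCritical: true },   // structural issues compound
  security: { blocking: true, pauseOnCritical: true },       // vulnerabilities are critical
  quality: { blocking: false, pauseOnCritical: false },      // minor issues can queue
};
```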
Step 3: Enable Parallel Execution (Agent)
Run reviewers simultaneously, not sequentially:
- Each receives same git diff and file context
- Total review time equals the slowest single reviewer, not the sum of all three
- Results merge into unified findings list
Use Promise.all() or equivalent concurrency primitive.
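A minimal sketch of the fan-out, assuming a runReviewer function that wraps a single reviewer invocation (both runReviewer and the Finding shape are illustrative):

```typescript
// review.ts (hypothetical module)
export interface Finding {
  reviewer: string;
  severity: "critical" | "high" | "medium" | "low";
  message: string;
  files: string[];
}

// Hypothetical per-reviewer call; in practice this wraps your agent/model invocation.
declare function runReviewer(name: string, diff: string, files: string[]): Promise<Finding[]>;

export async function reviewCheckpoint(diff: string, files: string[]): Promise<Finding[]> {
  const names = ["architecture", "security", "quality"];
  // Every reviewer sees the same diff and runs concurrently, so wall-clock
  // time is bounded by the slowest reviewer rather than the sum of all three.
  const results = await Promise.all(names.map((name) => runReviewer(name, diff, files)));
  return results.flat(); // merge into a unified findings list
}
```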
Step 4: Create Finding Issues (Agent)
When a reviewer returns critical findings:
- Create Beads issue for each finding
- Label with reviewer name and severity
- Link to checkpoint for context
- Add dependencies if findings are related
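A sketch of filing findings as issues. The createIssue helper is a stand-in for your tracker's real CLI or API (the Beads interface is not reproduced here), and its option names are assumptions:

```typescript
import type { Finding } from "./review"; // Finding shape from the Step 3 sketch (hypothetical path)

// Hypothetical issue-creation helper; replace with your tracker's actual call.
declare function createIssue(opts: { title: string; body: string; labels: string[] }): Promise<string>;

export async function fileFindings(findings: Finding[], checkpointId: string): Promise<string[]> {
  const critical = findings.filter((f) => f.severity === "critical");
  const ids: string[] = [];
  for (const f of critical) {
    const id = await createIssue({
      title: `[${f.reviewer}] ${f.message}`,
      // Link back to the checkpoint so the finding carries its context.
      body: `Checkpoint: ${checkpointId}\nFiles:\n${f.files.join("\n")}`,
      labels: [f.reviewer, f.severity], // label with reviewer name and severity
    });
    ids.push(id);
    // If findings are related, add dependency links between the returned ids here.
  }
  return ids;
}
```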
Step 5: Pause and Surface (Harness)
When a blocking reviewer reports critical findings:
- Halt execution immediately
- Generate checkpoint summary
- Preserve agent context for resumption
- Alert human for review decision
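A sketch of the pause path; writeCheckpointSummary and notifyHuman are hypothetical helpers, and blockingRules comes from the Step 2 sketch:

```typescript
import type { Finding } from "./review";            // Step 3 sketch (hypothetical path)
import { blockingRules } from "./reviewers.config"; // Step 2 sketch (hypothetical path)

declare function writeCheckpointSummary(checkpointId: string, findings: Finding[]): Promise<void>;
declare function notifyHuman(message: string): Promise<void>;

export async function maybePause(checkpointId: string, findings: Finding[]): Promise<boolean> {
  const blockers = findings.filter(
    (f) => blockingRules[f.reviewer]?.pauseOnCritical && f.severity === "critical",
  );
  if (blockers.length === 0) return false; // nothing blocking, keep going

  // Halt immediately and persist enough context for the agent to resume where it left off.
  await writeCheckpointSummary(checkpointId, blockers);
  await notifyHuman(`Checkpoint ${checkpointId} paused with ${blockers.length} blocking finding(s)`);
  return true; // caller halts execution and waits for the human's decision
}
```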
Step 6: Resolve and Resume (Agent + Human)
Human reviews findings, agent implements fixes:
- Close findings with commit references
- Update harness context with resolution
- Re-run reviewers if needed
- Resume execution when clear
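A sketch of the resolution loop; closeIssue and resumeRun are hypothetical helpers, and the commit SHA is whatever the agent's fix landed as:

```typescript
declare function closeIssue(issueId: string, comment: string): Promise<void>;
declare function resumeRun(checkpointId: string): Promise<void>;

export async function resolveAndResume(
  checkpointId: string,
  resolved: Array<{ issueId: string; commitSha: string }>,
): Promise<void> {
  for (const { issueId, commitSha } of resolved) {
    // Close each finding with a commit reference so the audit trail survives the session.
    await closeIssue(issueId, `Fixed in ${commitSha}`);
  }
  // Optionally re-run the reviewers on the fix before resuming the paused run.
  await resumeRun(checkpointId);
}
```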
Real-World Example: API Endpoint Duplication
Let's say your agent builds three similar API routes across different packages:
```typescript
// packages/shop/src/routes/api/subscribe/+server.ts
export async function POST({ request }) {
  const { email } = await request.json();
  const token = generateToken(email);
  await db.insert(subscribers).values({ email, token });
  await sendConfirmationEmail(email, token);
  return json({ success: true });
}

// packages/blog/src/routes/api/subscribe/+server.ts
export async function POST({ request }) {
  const { email } = await request.json();
  const token = generateToken(email);
  await db.insert(blogSubscribers).values({ email, token });
  await sendConfirmationEmail(email, token);
  return json({ success: true });
}

// packages/newsletter/src/routes/api/subscribe/+server.ts
export async function POST({ request }) {
  const { email } = await request.json();
  const token = generateToken(email);
  await db.insert(newsletterSubscribers).values({ email, token });
  await sendConfirmationEmail(email, token);
  return json({ success: true });
}
```

The architecture reviewer detects:
```
[CRITICAL] Duplicated subscription logic across 3 packages
  → packages/shop/src/routes/api/subscribe/+server.ts
  → packages/blog/src/routes/api/subscribe/+server.ts
  → packages/newsletter/src/routes/api/subscribe/+server.ts
  → Only differs in table name
  → Recommend: Extract to @myapp/subscriptions package
```
Meanwhile, the security reviewer flags:
```
[HIGH] Missing rate limiting on subscription endpoints
  → All three endpoints accept unlimited POST requests
  → Vulnerable to subscription bombing
  → Recommend: Add rate limiting middleware
```
The harness creates issues for both findings, pauses execution, and alerts the human. The agent then:
- Creates the @myapp/subscriptions package
- Extracts the shared logic into processSubscription(table, email)
- Adds rate limiting middleware to all three endpoints
- Updates consumers to import the shared function
- Closes the findings with commit references
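A sketch of what the extracted helper could look like. The package path and loose types are assumptions, and the in-memory rate limiter is for illustration only, not a production choice:

```typescript
// packages/subscriptions/src/index.ts -- shared logic for @myapp/subscriptions (sketch)
import { json } from "@sveltejs/kit";

// Assumed app helpers, matching the calls in the duplicated routes.
declare const db: {
  insert: (table: unknown) => { values: (row: Record<string, unknown>) => Promise<void> };
};
declare function generateToken(email: string): string;
declare function sendConfirmationEmail(email: string, token: string): Promise<void>;

// Tiny per-process rate limiter, illustration only; use real middleware in production.
const hits = new Map<string, { count: number; windowStart: number }>();
export function allowRequest(ip: string, limit = 5, windowMs = 60_000): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > windowMs) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}

// One implementation, three thin callers: each route passes its own table.
export async function processSubscription(table: unknown, email: string) {
  const token = generateToken(email);
  await db.insert(table).values({ email, token });
  await sendConfirmationEmail(email, token);
  return json({ success: true });
}
```

Each route then shrinks to parsing the request, checking the rate limit, and calling processSubscription with its own table.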
When to Use Parallel Peer Review
Use this pattern when:
- Autonomous work: Agent-driven development with harness orchestration
- Multi-file changes: Checkpoints cover significant scope (3+ files)
- Quality gates matter: Structural or security issues can't accumulate silently
- Hermeneutic continuity: Work spans multiple sessions, understanding must persist
Don't use for:
- Single-file changes or trivial fixes
- Exploratory prototyping (no established patterns yet)
- Emergency hotfixes (review adds latency)
- Human-driven development (peer review happens via PR)
Calibrating Reviewer Sensitivity
Over time, tune your reviewers based on their false positive rates: if a reviewer's critical findings are routinely dismissed, raise its threshold or narrow its focus; if real issues keep slipping through, lower it.
The goal is a self-correcting system, not a gate-keeping system. Reviewers should
catch real issues while allowing good work to proceed. Calibrate continuously based
on outcomes.
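As a sketch of what calibration could track: record whether each finding was acted on or dismissed, then compute a per-reviewer false positive rate; the 30% threshold below is an arbitrary placeholder:

```typescript
interface ReviewOutcome {
  reviewer: string;
  accepted: boolean; // did the human or agent act on the finding?
}

export function falsePositiveRate(outcomes: ReviewOutcome[], reviewer: string): number {
  const mine = outcomes.filter((o) => o.reviewer === reviewer);
  if (mine.length === 0) return 0;
  return mine.filter((o) => !o.accepted).length / mine.length;
}

// Example policy (placeholder numbers): if a reviewer's false positive rate drifts
// above ~0.3, raise its severity threshold or narrow its focus; if it stays near 0,
// it may be too lax and worth tightening.
```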