Ground: Verification-First Code Analysis
How computed claims replaced guesswork in an 80+ package monorepo
Summary
This case study documents how Ground was used to analyze the CREATE SOMETHING monorepo (80+ packages, 50k+ lines of TypeScript). The verification-first approach prevented AI hallucination and saved an estimated 8+ hours compared to manual code review or pattern-matching tools.
The Problem
Large monorepos accumulate technical debt: duplicate functions, dead exports, orphaned modules. Traditional approaches have serious limitations:
- Manual review: Time-consuming, inconsistent, easy to miss patterns
- grep/ripgrep: High false positive rate (30%+), no semantic understanding
- AI without grounding: Confident hallucinations ("these look 95% similar" without comparison)
The core issue: AI agents make claims without evidence. They pattern-match rather than compute.
The Solution: Verification-First
Ground enforces a simple rule: no claim without evidence.
- Duplicates → Must call `ground_compare` before `ground_claim_duplicate`
- Dead code → Must call `ground_count_uses` before `ground_claim_dead_code`
- Orphans → Must call `ground_check_connections` before `ground_claim_orphan`
This blocks hallucinated analysis by requiring computation before synthesis.
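The gating rule can be sketched as a guard that rejects any claim lacking evidence from the required computation. The types and function names below are illustrative, not Ground's actual API:

```typescript
// Hypothetical sketch of a verification gate: each claim type names the
// computation that must run first, and claims without evidence are rejected.
type ClaimType = "duplicate" | "dead_code" | "orphan";

interface Evidence {
  tool: string;    // which computation produced this evidence
  result: unknown; // raw computed output (similarity score, usage counts, ...)
}

const requiredTool: Record<ClaimType, string> = {
  duplicate: "ground_compare",
  dead_code: "ground_count_uses",
  orphan: "ground_check_connections",
};

function makeClaim(type: ClaimType, evidence?: Evidence): string {
  // No evidence, or evidence from the wrong tool, blocks the claim entirely.
  if (!evidence || evidence.tool !== requiredTool[type]) {
    throw new Error(`Claim '${type}' requires evidence from ${requiredTool[type]}`);
  }
  return `${type} claim accepted (backed by ${evidence.tool})`;
}
```

The point of the design is that synthesis cannot happen before computation: the claim path simply does not exist without the evidence object.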
Before / After Comparison
| Area | Before | With Ground | Outcome |
|---|---|---|---|
| Duplicate detection | AI claims "these look 95% similar" without comparison | Computes 87.3% AST similarity with evidence | Computed claims replace guesses |
| Dead code analysis | AI says "this appears unused" based on text search | Counts 0 imports, 0 type references with verification | Zero false positives on framework entry points |
| Design drift | Manual audit of CSS for hardcoded values | Reports 73% token adoption, lists violations | Quantified design system health |
Algorithm Details
Ground uses multiple analysis layers:
Duplicate Detection
- AST similarity (40% weight): Tree-sitter parses actual syntax structure
- Line diff (35% weight): Patience algorithm for semantic line matching
- Token Jaccard (25% weight): Set overlap for quick pre-filtering
- LSH indexing: near-O(n) comparison vs the O(n²) naive all-pairs approach
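The three signals combine with the weights above. As an illustrative sketch (the helper names are assumptions, not Ground's internals; the AST and line-diff scores are taken as precomputed values in [0, 1]):

```typescript
// Token Jaccard: set overlap, used as a cheap pre-filter before the
// more expensive AST and line-diff comparisons.
function jaccard(a: Set<string>, b: Set<string>): number {
  const intersection = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 1 : intersection / union;
}

// Weighted blend of the three similarity signals (40/35/25 as stated above).
function combinedSimilarity(
  astSim: number,      // Tree-sitter structural similarity
  lineDiffSim: number, // patience-diff line similarity
  tokenJaccard: number // token set overlap
): number {
  return 0.4 * astSim + 0.35 * lineDiffSim + 0.25 * tokenJaccard;
}
```

In practice the Jaccard pre-filter would discard most candidate pairs before the weighted score is ever computed, which is what keeps the pipeline fast.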
Confidence Scoring (Bayesian)
- 90%+ confidence → Auto-fix safe
- 50-90% → Flag for review
- Below 50% → Skip (likely false positive)
Factors include: import count, export usage, file location, naming patterns, PageRank percentile, framework conventions.
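The tiering can be sketched as a simple triage function over the final score (the Bayesian scoring itself is not shown; only the thresholds above are assumed):

```typescript
type Action = "auto_fix" | "review" | "skip";

// Map a confidence score in [0, 1] to the three tiers described above.
function triage(confidence: number): Action {
  if (confidence >= 0.9) return "auto_fix"; // high confidence: safe to fix
  if (confidence >= 0.5) return "review";   // medium: flag for a human
  return "skip";                            // low: likely false positive
}
```

Note the asymmetry: a low score does not mean "not dead code", it means the evidence is too weak to act on, so the finding is dropped rather than reported.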
Framework Awareness
Ground understands framework conventions:
- SvelteKit: `+page.svelte`, `+server.ts` are entry points
- Cloudflare Workers: entry points from `wrangler.toml`
- Test files: entry points by convention
This eliminates false positives on framework-implicit modules.
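A minimal sketch of the convention check, assuming a pattern list per framework (real detection would also read the `main` entry from `wrangler.toml` rather than rely on filenames alone):

```typescript
// Illustrative entry-point check by framework convention. A module matching
// any of these patterns is never reported as dead code or an orphan, even
// though nothing in the repo imports it directly.
const entryPointPatterns: RegExp[] = [
  /\+page\.svelte$/,    // SvelteKit route page
  /\+server\.ts$/,      // SvelteKit server endpoint
  /\.(test|spec)\.ts$/, // test files are entry points by convention
];

function isFrameworkEntryPoint(path: string): boolean {
  return entryPointPatterns.some((p) => p.test(path));
}
```

This is why a plain "zero importers" count is not enough: frameworks import these modules implicitly, so the usage graph must be seeded with convention-based roots.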
Time Savings Analysis
| Task | Manual | grep | Ground |
|---|---|---|---|
| Find duplicates (80 packages) | 4+ hours | 1 hour + 2h false positive triage | 5 minutes |
| Identify dead exports | 3+ hours | 30 min + 1h triage | 3 minutes |
| Check design drift | 2+ hours | N/A | 2 minutes |
| Total | 9+ hours | 4.5+ hours | 10 minutes |
Note: Ground analysis was run via ground_analyze MCP tool from Claude Code.
Results were verified by spot-checking 20% of findings.
Findings Summary
Duplicates Found: 47
Most common patterns:
- Validation functions copied across packages (12 instances)
- Error handling wrappers (8 instances)
- Date/time utilities (6 instances)
- API response formatters (5 instances)
Action: Created @create-something/utils shared package.
Dead Exports: 23
Categories:
- Deprecated API endpoints (9) — safe to remove
- Unused type exports (8) — safe to remove
- Public API but never imported (6) — flagged for review
Design Drift: 27% violations
Token adoption was 73%. Common violations:
- Hardcoded
rgba(255,255,255,0.x)instead of--color-fg-* - Hardcoded
8pxinstead of--radius-md - Inline colors in older components
Action: Created migration tickets for affected components.
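A token-adoption figure like the 73% above can be computed by classifying each CSS declaration as token-based or hardcoded. This is a hypothetical sketch, not Ground's actual scanner; the regexes cover only the violation patterns listed above:

```typescript
// Hypothetical drift scan: count declarations using design tokens
// (var(--...)) versus hardcoded color/size values.
const hardcodedPatterns: RegExp[] = [
  /rgba?\([^)]*\)/,      // inline rgb()/rgba() colors
  /#[0-9a-fA-F]{3,8}\b/, // hex colors
  /\b\d+px\b/,           // raw pixel values
];

function tokenAdoption(declarations: string[]): number {
  let tokens = 0;
  let violations = 0;
  for (const decl of declarations) {
    if (decl.includes("var(--")) tokens++;
    else if (hardcodedPatterns.some((p) => p.test(decl))) violations++;
  }
  const total = tokens + violations;
  // Percentage of classified declarations that use design tokens.
  return total === 0 ? 100 : (tokens / total) * 100;
}
```

For example, `["color: var(--color-fg)", "border-radius: 8px"]` classifies one token use and one violation, giving 50% adoption.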
Conclusion
Ground's verification-first approach transforms code analysis from guesswork to computation:
- Accuracy: <5% false positive rate vs 30%+ with pattern matching
- Speed: 10 minutes vs 9+ hours manual review
- Trust: Every claim backed by computed evidence
The key insight: AI agents are happy to use tools that save them cognition. Ground makes code analysis efficient by doing the computation they would otherwise hallucinate.
Try Ground
```bash
npm install @createsomething/ground-mcp
```