A curious phenomenon has emerged in LLM agent development: models consistently perform better when they write code to accomplish tasks ("Code Mode") than when they invoke tools directly through structured calls. This observation, noted across multiple implementations, from Anthropic's computer use capability for Claude to its Model Context Protocol (MCP), has typically been attributed to training data distributions: models have seen far more code than tool-call schemas.
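To make the contrast concrete, here is a minimal sketch of the two interaction styles. The tool names (search_flights, get_price) and their schemas are hypothetical, invented purely for illustration; they do not correspond to any real MCP server.

```python
# Stub "tools" standing in for endpoints an agent might be given.
# These names and signatures are illustrative assumptions only.
def search_flights(origin: str, destination: str) -> list[dict]:
    return [{"id": "AA100"}, {"id": "UA200"}]

def get_price(flight_id: str) -> int:
    return {"AA100": 450, "UA200": 380}[flight_id]

# Direct tool calling: each step is a separate structured invocation,
# and every intermediate result passes back through the model's context.
call_1 = {"tool": "search_flights",
          "arguments": {"origin": "SFO", "destination": "JFK"}}
flights = search_flights(**call_1["arguments"])
call_2 = {"tool": "get_price",
          "arguments": {"flight_id": flights[0]["id"]}}
price = get_price(**call_2["arguments"])
# ...one round-trip per flight, with the model re-reading each result.

# Code Mode: the model writes one program; intermediate values stay in
# variables, and composition (min over prices) is ordinary code.
cheapest = min(search_flights("SFO", "JFK"),
               key=lambda f: get_price(f["id"]))
print(cheapest["id"])  # → UA200
```

In the direct-call style, the composition logic (find the cheapest flight) must be reconstructed call-by-call in the model's attention; in Code Mode it is stated once, in a notation the model has seen billions of times.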
This paper proposes an alternative explanation grounded in Heidegger's phenomenology (the philosophical study of the structures of experience: how things show themselves to us in lived practice rather than in abstract theory). We argue that Code Mode succeeds because it achieves what Heidegger calls Zuhandenheit, the ready-to-hand relationship in which tools recede from conscious attention into transparent use. Direct tool calling, by contrast, forces Vorhandenheit: tools encountered as present-at-hand objects that demand explicit, deliberate focus.
This distinction is not merely academic. It has practical implications for how we design LLM agent architectures, tool interfaces, and the boundary between natural language and code in AI systems.