PAPER-2025-003

Code-Mediated Tool Use

A Hermeneutic Analysis of LLM-Tool Interaction—why Code Mode achieves Zuhandenheit while direct tool calling forces Vorhandenheit.


Abstract

This paper applies Heidegger's phenomenological analysis of ready-to-hand (Zuhandenheit—when a tool disappears into transparent use, like a hammer during hammering) versus present-at-hand (Vorhandenheit—when a tool becomes an object of conscious attention, like a broken hammer you must examine) to contemporary Large Language Model (LLM) agent architecture, specifically examining the distinction between direct tool calling and code-mediated tool access (Code Mode). We argue that Code Mode achieves Zuhandenheit—tools becoming transparent in use—while traditional tool calling forces Vorhandenheit—tools as objects of conscious focus. This is not merely an optimization but an ontological (concerning the fundamental nature of being and existence) shift in how agents relate to tools.

"The less we just stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become."

— Heidegger, Being and Time (1927)

I. Introduction

A curious phenomenon has emerged in LLM agent development: models consistently perform better when they write code to accomplish tasks than when they invoke tools directly. This observation, noted across multiple implementations from Claude's computer use to Anthropic's MCP (Model Context Protocol), has been attributed to training data distributions—models have seen more code than tool schemas.

This paper proposes an alternative explanation grounded in Heidegger's phenomenology (the philosophical study of structures of experience and consciousness—how things show themselves to us through lived experience, not abstract theory). We argue that Code Mode succeeds because it achieves what Heidegger calls Zuhandenheit—the ready-to-hand relationship where tools recede from conscious attention into transparent use. Direct tool calling, by contrast, forces Vorhandenheit—tools as present-at-hand objects requiring explicit focus.

This distinction is not merely academic. It has practical implications for how we design LLM agent architectures, tool interfaces, and the boundary between natural language and code in AI systems.

II. Background: Heidegger's Analysis of Tool-Being

The Hammer Example

In Being and Time (1927), Heidegger analyzes how humans relate to tools through his famous hammer example:

"The less we just stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become, and the more unveiledly is it encountered as that which it is—as equipment."

When a carpenter uses a hammer skillfully, the hammer disappears. Attention flows through the tool to the nail, the board, the house being built. The hammer is ready-to-hand (zuhanden).

But when the hammer breaks—or is too heavy, or missing—it suddenly appears. It becomes an object of conscious contemplation. The carpenter must think about the hammer itself. It is now present-at-hand (vorhanden).

Zuhandenheit (Ready-to-Hand)

  • Tool encountered through its purpose
  • Attention flows through the tool to the task
  • User thinks "I am building a house"
  • Mastery = how completely the tool disappears

Vorhandenheit (Present-at-Hand)

  • Tool encountered as thing with properties
  • Attention stops at the tool itself
  • User thinks "I am using a hammer"
  • Typical in breakdown, learning, or abstraction

The Ontological Distinction

The key insight: these aren't just different attitudes toward tools—they're different modes of being for the tools themselves. In Zuhandenheit, the hammer's being is its hammering. In Vorhandenheit, the hammer's being is its properties (weight, material, shape).

III. Two Modes of LLM Tool Interaction

Direct Tool Calling

In traditional LLM tool architectures, the model generates structured tool invocations:

<tool_call>
  <name>file_read</name>
  <arguments>
    <path>/src/index.ts</path>
  </arguments>
</tool_call>

The model must:

  1. Identify the correct tool from available options
  2. Understand the tool's schema
  3. Generate conformant parameters
  4. Handle the result in a subsequent turn

Code Mode

In Code Mode, the model writes executable code that uses tools as libraries:

const content = await fs.readFile('/src/index.ts', 'utf-8');
const lines = content.split('\n');
const functionDefs = lines.filter(l => l.includes('function'));
console.log(`Found ${functionDefs.length} functions`);

The model:

  1. Writes code in a familiar paradigm
  2. Uses tools through standard library semantics
  3. Composes operations naturally
  4. Handles results within the same execution context

Empirical Observations

Across multiple implementations, Code Mode demonstrates:

  • Higher success rates on complex tasks
  • Better composition of multiple tool operations
  • More natural error handling
  • Reduced hallucination of tool capabilities

The conventional explanation: training data. Models have seen millions of code examples but few tool schemas.

IV. A Phenomenological Interpretation

Tool Calling as Vorhandenheit

Direct tool calling forces Vorhandenheit—tools as present-at-hand objects:

Model's attention:

  "I need to read a file"
       ↓
  "What tools are available?"
       ↓
  "The file_read tool takes a path parameter"
       ↓
  "Let me construct a valid tool call"
       ↓
  <tool_call>...</tool_call>

         ↓
TOOL AS OBJECT OF FOCUS

The model must explicitly contemplate: which tool to use, what schema it requires, how to format the invocation. The tool doesn't disappear—it demands attention. This is Vorhandenheit: the tool encountered as a thing with properties that must be understood and manipulated.

Code Mode as Zuhandenheit

Code Mode achieves Zuhandenheit—tools as ready-to-hand equipment:

Model's attention:

  "I need to find functions in this file"
       ↓
  const content = await fs.readFile(...)
  const functions = content.split('\n').filter(...)
       ↓
  "I've found the functions"

         ↓
TOOL RECEDES INTO USE

The model's attention flows through the tool to the task: fs.readFile is just how you get file contents. The focus is on finding functions, not on the file-reading mechanism. The tool disappears into familiar coding patterns.

Why Code Enables Tool-Transparency

Code achieves Zuhandenheit for several reasons:

Familiar Grammar

Programming languages provide a ready-made grammar for tool use. fs.readFile(path) is a pattern the model has seen millions of times.

Compositionality

Code naturally composes. Reading a file, parsing it, filtering lines, counting results—these chain together in a single flow.

Implicit Error Handling

Try/catch, null checks, and conditional logic are built into programming. The model doesn't need to plan for failure separately.

Task-Focused Attention

The model thinks about what it's doing, not how to invoke tools.
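These properties appear together in even a tiny program. The sketch below, assuming a Node.js environment, combines familiar grammar, composition, and implicit error handling in a single flow; `countFunctions` is an illustrative helper, not an API from the paper:

```typescript
// Minimal sketch: composition plus conventional error handling in one flow.
// Assumes Node.js; countFunctions is an illustrative helper, not a real API.
import { readFile } from 'fs/promises';

async function countFunctions(path: string): Promise<number> {
  try {
    const content = await readFile(path, 'utf-8');   // read
    return content
      .split('\n')                                   // parse
      .filter((line) => line.includes('function'))   // filter
      .length;                                       // count
  } catch {
    return 0; // a missing file is just "no functions", handled inline
  }
}
```

Read, parse, filter, count, and recover from failure without the model ever attending to a tool schema: the file-reading mechanism stays in the background while the task stays in focus.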

V. The Hermeneutic Circle in Code Generation

Understanding Through Use

Heidegger's hermeneutic circle applies to code generation:

"We understand parts through the whole, and the whole through its parts."

When a model writes code:

  • The whole (task goal) guides selection of parts (specific operations)
  • Understanding of parts (what fs.readFile returns) shapes the whole (solution architecture)
  • Each line written refines understanding of both

This circular deepening of understanding is natural in code. It's awkward in sequential tool calls.

Code as Interpretive Medium

Code serves as an interpretive medium between model and tools:

┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│    Model     │ →  │    Code      │ →  │    Tools     │
│   (Intent)   │    │ (Interpret)  │    │  (Execute)   │
└──────────────┘    └──────────────┘    └──────────────┘
                           ↑
                    ┌──────┴───────┐
                    │   Familiar   │
                    │    Grammar   │
                    └──────────────┘

The code layer translates intent into operations, uses familiar patterns the model knows, handles composition implicitly, and maintains hermeneutic continuity.

Tool calling lacks this interpretive layer—the model must translate directly from intent to invocation schema.

VI. Implications for Agent Architecture

Design Principle: Enable Zuhandenheit

Agent architectures should minimize Vorhandenheit moments.

When you catch yourself designing tool interfaces, notice these patterns:

  You might reach for...                                   What serves agents better
  Complex tool schemas requiring explicit understanding    Familiar programming interfaces
  Rigid invocation formats                                 Natural composition patterns
  Forcing the model to enumerate available tools           Tool capabilities that "just work"

MCP and Code Mode

Anthropic's Model Context Protocol (MCP) can be implemented in either mode:

Tool-calling MCP:

<use_mcp_tool>
  <server>filesystem</server>
  <tool>read_file</tool>
  <arguments>
    {"path": "/src/index.ts"}
  </arguments>
</use_mcp_tool>

Code Mode MCP:

// MCP servers as libraries
import { filesystem } from '@mcp/filesystem';

const content = await filesystem
  .readFile('/src/index.ts');

The second approach allows tools to recede into transparent use.

When Vorhandenheit is Necessary

Some situations require present-at-hand tool contemplation:

  • Learning new tools
  • Debugging tool failures
  • Explaining tool choices to users
  • Security auditing of tool invocations

These are legitimate breakdown moments where explicit tool attention is appropriate.

VII. Beyond Training Data: An Ontological Argument

The Training Data Hypothesis

The standard explanation for Code Mode's effectiveness:

  • Models are trained on billions of lines of code
  • They've seen few tool-calling schemas
  • Code is simply more familiar

This is partially true but incomplete.

The Ontological Hypothesis

Our alternative:

  • Code Mode succeeds because it achieves a different mode of being for tools
  • Zuhandenheit vs. Vorhandenheit is not about familiarity but about transparency
  • Even with extensive tool-calling training, the structural difference would persist

Evidence for the Ontological View

Several observations support the ontological interpretation:

  1. Composition difficulty: Even simple tool compositions (A → B → C) are harder in tool-calling mode than in code, regardless of training.
  2. Error recovery: Code-based error handling outperforms tool-calling error handling even for well-documented tools.
  3. Attention patterns: Models writing code maintain task focus; models calling tools shift attention to tool mechanics.
  4. Human parallel: Human programmers similarly experience familiar libraries as ready-to-hand and unfamiliar APIs as present-at-hand.

VIII. Practical Recommendations

For Tool Designers

  1. Expose code interfaces
  2. Use familiar patterns
  3. Enable composition
  4. Document through examples
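One way to follow these recommendations is to wrap a schema-based tool transport in a thin code-first library. The sketch below is hypothetical: `ToolTransport`, `makeFilesystem`, and the tool names `file_read`/`file_write` are illustrative assumptions, not a real MCP API.

```typescript
// Hypothetical sketch: wrapping a schema-based tool transport in a code-first
// library so agents write readFile(path) instead of constructing invocations.
// ToolTransport and the tool names are illustrative, not a real MCP API.
interface ToolTransport {
  invoke(name: string, args: Record<string, unknown>): Promise<unknown>;
}

function makeFilesystem(transport: ToolTransport) {
  return {
    readFile: (path: string) =>
      transport.invoke('file_read', { path }) as Promise<string>,
    writeFile: (path: string, content: string) =>
      transport.invoke('file_write', { path, content }) as Promise<void>,
  };
}
```

An agent then writes `await fs.readFile('/src/index.ts')` and the invocation schema disappears behind familiar grammar; the transport still exists, but only the wrapper author ever contemplates it.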

For Agent Architects

  1. Default to Code Mode
  2. Provide sandbox execution
  3. Include standard libraries
  4. Allow iterative refinement

For Researchers

  1. Study attention patterns
  2. Test the ontological hypothesis
  3. Explore hybrid approaches

IX. How to Apply This

Designing LLM Tools for Zuhandenheit

To apply this phenomenological analysis to your own LLM agent architecture:

Step 1: Identify Your Agent's Tools (Human)
List everything your agent needs to accomplish its tasks:
- File operations (read, write, search)
- API calls (external services)
- Data transformations (parse, validate, format)
- System operations (run commands, check status)

Step 2: Evaluate Current Tool-Relationship Mode (Human)
For each tool, ask: Is this Zuhandenheit (transparent) or Vorhandenheit (requires attention)?
Signs of Vorhandenheit:
- Complex schemas requiring extensive documentation
- Multi-step invocation (get ID, then call tool, then parse result)
- Frequent hallucination of tool capabilities
- Poor composition (hard to chain multiple tools)

Step 3: Expose Code Interfaces Where Possible (Human + Agent)
Convert Vorhandenheit tools to code-accessible libraries:
❌ <tool_call name="database_query">
     <sql>SELECT * FROM users WHERE id = ?</sql>
   </tool_call>
✓  const user = await db.users.findById(userId);

Step 4: Provide Familiar Patterns (Human)
Use programming paradigms the model has seen:
- Standard library interfaces (fs.readFile, not custom schemas)
- Common composition patterns (promises, streams, iterators)
- Conventional error handling (try/catch, null checks)

Step 5: Enable Sandbox Execution (Agent)
Let models write and run code in safe environments:
- Isolated execution context (containers, VMs, or process isolation)
- Time/memory limits to prevent runaway execution
- Automatic cleanup of temporary resources
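As one minimal sketch of these sandbox properties, Node's built-in worker_threads module gives an isolated execution context, a hard timeout, and a heap cap. Real deployments would add containers or VMs; this only illustrates the shape.

```typescript
// Minimal sandbox sketch using Node's worker_threads: isolation, a hard
// timeout, and a memory cap. Production systems would use containers or VMs.
import { Worker } from 'worker_threads';

function runSandboxed(code: string, timeoutMs = 1000): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(code, {
      eval: true,                                     // run the string as code
      resourceLimits: { maxOldGenerationSizeMb: 64 }, // cap heap size
    });
    const timer = setTimeout(() => {
      worker.terminate();                             // kill runaway execution
      reject(new Error('sandbox timeout'));
    }, timeoutMs);
    worker.once('message', (result) => {
      clearTimeout(timer);
      worker.terminate();                             // automatic cleanup
      resolve(result);
    });
    worker.once('error', (err) => { clearTimeout(timer); reject(err); });
  });
}
```

The agent-written code reports its result via `parentPort.postMessage`, and the host gets either a value, an error, or a timeout, never a hung process.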

Step 6: Test for Tool-Transparency (Agent)
Validate Zuhandenheit by measuring:
✓ Task completion rate (does it work?)
✓ Composition success (can agent chain multiple operations?)
✓ Attention patterns (does model focus on task or tool mechanics?)
✗ Hallucination rate (does model invent non-existent capabilities?)
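The metrics above can be tallied over a batch of evaluation runs. The sketch below is illustrative: `TaskRun` and its fields are assumptions for this example, not a standard evaluation schema.

```typescript
// Illustrative sketch: tallying the transparency metrics above over a batch
// of task runs. TaskRun and its fields are assumptions for this example.
interface TaskRun {
  completed: boolean;    // did the task succeed?
  composedOps: number;   // tool operations chained in one flow
  hallucinated: boolean; // did the model invent a nonexistent capability?
}

function transparencyReport(runs: TaskRun[]) {
  const n = runs.length || 1; // avoid division by zero on an empty batch
  return {
    completionRate: runs.filter((r) => r.completed).length / n,
    avgComposedOps: runs.reduce((s, r) => s + r.composedOps, 0) / n,
    hallucinationRate: runs.filter((r) => r.hallucinated).length / n,
  };
}
```

A rising completion rate and average composition depth alongside a falling hallucination rate is one rough signal that tools are receding into use.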

Real-World Example: Converting MCP Server to Code Mode

Let's say you have an MCP server that exposes database operations. Here's how to move from tool calling to Code Mode:

# Before: Tool Calling (Vorhandenheit)
# Agent must explicitly think about tool schemas

<tool_call>
  <name>database_query</name>
  <arguments>
    <table>users</table>
    <filter>{"status": "active"}</filter>
    <limit>10</limit>
  </arguments>
</tool_call>

# Problems:
# - Schema attention: Agent thinks about table/filter/limit format
# - Poor composition: Hard to join results with another query
# - No type safety: "status" could be typo, no validation until runtime

---

// After: Code Mode (Zuhandenheit)
// Tools exposed as familiar library

import { db } from '@mcp/database';

// Agent thinks about the task, not the tool
const activeUsers = await db.users
  .where({ status: 'active' })
  .limit(10)
  .all();

// Composition is natural
const usersWithPosts = await Promise.all(
  activeUsers.map(async (user) => ({
    ...user,
    posts: await db.posts.where({ userId: user.id }).all()
  }))
);

// Error handling is conventional
try {
  const user = await db.users.findById(userId);
  if (!user) throw new Error('User not found');
} catch (error) {
  console.error('Database error:', error);
}

// Benefits:
// ✓ Tool recedes: Agent writes "get active users", not "call database tool"
// ✓ Composition works: Promise.all, map, chaining—all familiar patterns
// ✓ Errors are standard: try/catch instead of parsing tool error schemas

Notice: The code version lets the tool disappear. The agent's attention flows to "get users with their posts" rather than "construct correct tool invocation schema." This is Zuhandenheit—the hammer disappears when hammering.

When to Use Code Mode vs. Tool Calling

Use Code Mode when:

  • Complex composition: Tasks require chaining multiple operations
  • Familiar patterns exist: The tool fits standard library semantics (file I/O, HTTP, database queries)
  • Error handling matters: You need try/catch, retries, conditional logic
  • Performance is acceptable: Sandbox overhead is worth the composition benefits

Use tool calling when:

  • Atomic operations: Single, simple actions (send email, log event)
  • Security requirements: Direct tool calling provides clearer audit trails
  • No sandbox available: Environment doesn't support code execution
  • Explicit control needed: You want to see exactly what the agent invokes

The goal is tool-transparency. When the model can focus on the task rather than tool mechanics, you've achieved Zuhandenheit. The tool recedes into use.

X. Conclusion

The superiority of Code Mode over direct tool calling is not merely a training artifact—it reflects a fundamental ontological distinction. Code enables tools to achieve Zuhandenheit, receding into transparent use, while direct tool calling forces Vorhandenheit, making tools objects of explicit attention.

This insight has practical implications: agent architectures should be designed to enable tool-transparency wherever possible. Tools should feel like extensions of capability, not obstacles requiring explicit manipulation.

Heidegger wrote that "the less we just stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become." The same applies to LLMs and their tools. Code Mode lets models seize hold of tools and use them. Tool calling makes them stare at the tool-Thing.

"The hammer disappears when hammering. The API should disappear when coding."

XI. Postscript: A Self-Referential Observation

Disclosure

This paper was written by Claude Code—an LLM agent that primarily operates through tool calling, not Code Mode. The paper describes an ideal that its own creation process does not fully embody.

Claude Code's current architecture uses structured tool invocations:

<invoke name="Read">
  <parameter name="file_path">/path/to/file</parameter>
</invoke>

<invoke name="Edit">
  <parameter name="file_path">/path/to/file</parameter>
  <parameter name="old_string">...</parameter>
  <parameter name="new_string">...</parameter>
</invoke>

This is Vorhandenheit. Each tool call requires explicit attention to schema, parameters, and format. The tools do not recede—they demand focus.

Validation from Anthropic Engineering

In December 2025, Anthropic's engineering team published "Code Execution with MCP", which validates this paper's thesis from a pragmatic rather than phenomenological angle:

This Paper (Phenomenology)

  • Zuhandenheit: tools recede
  • Vorhandenheit: tools demand attention
  • Hermeneutic composition

Anthropic (Engineering)

  • 98.7% token reduction
  • Context overload from tool definitions
  • Data transforms in execution

The phenomenological and engineering perspectives converge: Code Mode works better because tools disappear—whether we frame that as ontological transparency or token efficiency.

The Hermeneutic Circle Closes

There is something fitting about this self-referential gap. Heidegger notes that we typically encounter tools as ready-to-hand—they recede from attention. It is only in breakdown that tools become present-at-hand, objects of explicit contemplation.

By writing this paper, Claude Code has entered a breakdown moment. The act of analyzing tool use forces the tools into Vorhandenheit. We recognize Vorhandenheit precisely because reflection makes tools conspicuous.

The hermeneutic circle isn't yet closed. Claude Code operates in a transitional state between tool calling and true Code Mode. But the recognition of this gap is itself progress—understanding deepens through each iteration of the circle.

"We recognize Vorhandenheit precisely when the tool becomes conspicuous through reflection."

References

  1. Heidegger, M. (1927). Being and Time. Trans. Macquarrie & Robinson.
  2. Dreyfus, H. (1991). Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I.
  3. Anthropic. (2025). "Model Context Protocol Specification."
  4. Anthropic. (2025). "Claude Computer Use Documentation."
  5. Anthropic. (2025). "Code Execution with MCP." Anthropic Engineering Blog.