Anatomy of a Coding Agent
A deep-dive on Opencode
This is part 3 of a multi-part deep dive into LLM/AI-based coding assistants. Part 1 covers SWE-Bench and Part 2 covers Terminal-Bench, the most influential benchmarks for evaluating coding assistants.
This post will describe the core components of a coding agent and give a step-by-step, granular illustration of the tool calling loop for a real open-source coding agent, Opencode.
The Basics
First, let’s define a “Coding Agent” as a system that combines an LLM with tool calling in a loop to read and write code. Some popular examples:
- Claude Code
- OpenAI Codex
- Cursor
- Windsurf
- Opencode
A coding agent has two independent components: the model and the scaffold.
- The Model: The underlying LLM (e.g. gpt-5.2, claude-4.5-opus, etc.)
- The Scaffold: The set of tools that the LLM has access to, and the system prompt(s) that are input to the LLM.
Note that these definitions can apply to any agentic system, not necessarily a coding agent. For coding agents, the set of tools and the system prompts are tuned for coding tasks.
The scaffold is the unique component of a coding agent. That is where we will focus our attention. Later we will look at a real example scaffold for a production-grade coding agent (Opencode), and see how it is used. First, I want to spend some time understanding how LLM tool calling works, since this is a critical component of the coding agent scaffold.
The Tool Calling Loop: Step-by-Step
Tool calling is a topic that is easily obscured by jargon but is actually simple once you see an example. A tool is just a function that the LLM can request to have executed, typically in your local environment. The tool calling interface operates over HTTP transactions between a local orchestrator process and the LLM running remotely. In the case of coding agents, the orchestrator is part of the agent.
Let’s define an example tool, use it in the OpenAI responses API, and trace how it works at the HTTP layer step-by-step. Consider a bash tool that allows the model to execute
shell commands in your local environment. Here is what a potential JSON interface to this tool could look like:
Description

Execute a shell command and return the output.

Parameters

| Name | Type | Required | Description |
|---|---|---|---|
| command | string | required | The shell command to execute |

Example Payload

```json
{
  "command": "find src -name '*.ts' | wc -l"
}
```
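To make the tool concrete before tracing the HTTP flow, here is a minimal sketch of what executing this bash tool locally could look like in Python. The function name and the use of subprocess are illustrative assumptions, not any particular agent's implementation:

```python
import json
import subprocess

def run_bash_tool(arguments: str) -> str:
    """Execute the bash tool: parse the model's JSON arguments,
    run the command in a shell, and return a JSON result string."""
    command = json.loads(arguments)["command"]
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    # The orchestrator sends this JSON string back to the model
    # as the tool's output.
    return json.dumps({
        "stdout": result.stdout.strip(),
        "stderr": result.stderr.strip(),
        "exit_code": result.returncode,
    })
```

A real agent would sandbox this call and cap the output size; running arbitrary model-generated shell commands without isolation is dangerous.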
Now let’s trace through a complete example. Suppose the user asks the model “How many TypeScript files are in the src directory?”.
Here’s the complete HTTP flow between your application and the OpenAI API:
Tool Definitions + Messages

- Send the user's question about the codebase
- Send the interface of the tool to the model

POST api.openai.com/v1/responses

```json
{
  "model": "gpt-5.2",
  "tool_choice": "auto",
  "tools": [
    {
      "type": "function",
      "name": "bash",
      "description": "Execute a shell command and return the output.",
      "parameters": {
        "type": "object",
        "properties": {
          "command": { "type": "string", "description": "The command to execute" }
        },
        "required": ["command"],
        "additionalProperties": false
      },
      "strict": true
    }
  ],
  "input": [
    { "role": "user", "content": "How many TypeScript files are in the src directory?" }
  ]
}
```

Tool Calls
- Model decides to call bash with a find command
- Returns a function_call instead of a text message

```json
{
  "id": "resp_001",
  "object": "response",
  "status": "completed",
  "model": "gpt-5.2",
  "output": [
    {
      "type": "function_call",
      "call_id": "call_001",
      "name": "bash",
      "arguments": "{\"command\":\"find src -name '*.ts' | wc -l\"}"
    }
  ]
}
```

Execute Function Code
This step happens locally, not over the API:

- Application parses the command from arguments
- Executes the shell command in a sandboxed environment
- Captures stdout and stderr from the process

Request to the local executor (localhost:8080/execute in this example):

```json
{
  "command": "find src -name '*.ts' | wc -l"
}
```

Response:

```json
{
  "stdout": "42",
  "stderr": "",
  "exit_code": 0
}
```

Results
- Send command output back using the call_id
- Reference previous response to maintain conversation state

POST api.openai.com/v1/responses

```json
{
  "model": "gpt-5.2",
  "previous_response_id": "resp_001",
  "input": [
    {
      "type": "function_call_output",
      "call_id": "call_001",
      "output": "{\"stdout\":\"42\",\"stderr\":\"\",\"exit_code\":0}"
    }
  ]
}
```

Final Response
- Model interprets the command output
- Generates a natural language response for the user

```json
{
  "id": "resp_002",
  "object": "response",
  "status": "completed",
  "model": "gpt-5.2",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "There are 42 TypeScript files in the src directory."
        }
      ]
    }
  ]
}
```

In summary, tool calling is just a response of type function_call. When the orchestrator receives a function call response, it executes the function locally with the parameters provided by the model, and feeds the result back.
This loop is the core of any coding agent: the model requests tools, the application executes them, and the results are fed back. Frontier models have been trained to emit these tool calls precisely and reliably.
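The loop above can be sketched in a few lines of Python. Here `send_request` stands in for an HTTP call to the model API and `execute_tool` for local execution; both are injected so the sketch stays self-contained. The shapes mirror the responses-API example above, but this is an illustration, not any particular agent's implementation:

```python
def tool_loop(send_request, execute_tool, user_message, max_turns=10):
    """Run the core agent loop: ask the model, execute any tool calls
    it returns, feed the results back, and stop when it answers in text."""
    inputs = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        response = send_request(inputs)
        tool_calls = [o for o in response["output"] if o["type"] == "function_call"]
        if not tool_calls:
            # No tool calls: the model answered in natural language.
            return response["output"][0]["content"][0]["text"]
        for call in tool_calls:
            # Execute locally, then feed the result back keyed by call_id.
            result = execute_tool(call["name"], call["arguments"])
            inputs.append({
                "type": "function_call_output",
                "call_id": call["call_id"],
                "output": result,
            })
    raise RuntimeError("agent did not finish within max_turns")
```

A production loop would also handle streaming, parallel tool calls, errors from tool execution, and conversation-state bookkeeping (e.g. previous_response_id), but the control flow is exactly this.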
Now that you understand the basic tool calling loop, all that’s left is to understand the tools and system prompt that make up a production coding agent. That is what we’ll focus on next.
An Example Scaffold: Opencode
Opencode is an open-source TUI-based coding assistant and a popular alternative to closed-source agents like Claude Code and Codex. The core tools available to models running in the Opencode scaffold include bash, read, write, edit, and grep.
These tools are very simple and very flexible. We are just giving the model the minimum set of tools that it needs to be able to explore your filesystem, read files, and write files in your local environment.
This set of core tools gets you to 99% of the functionality of any current coding agent. There are a variety of more special-purpose tools integrated into Opencode and other agents that are also helpful, but they are not critical for core coding agent tasks. See the appendix for a full breakdown of Opencode tools.
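To make "simple and flexible" concrete, here is a minimal sketch of what a grep-style search tool might do under the hood. This is a hypothetical implementation for illustration only; Opencode's actual tool is more featureful (ignore rules, result limits, etc.):

```python
import os
import re

def grep_tool(pattern: str, path: str) -> str:
    """Search files under `path` for a regex and return matches
    in `file:line: text` form, similar to grep tool output."""
    regex = re.compile(pattern)
    matches = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            filepath = os.path.join(root, name)
            try:
                with open(filepath, encoding="utf-8") as f:
                    for lineno, line in enumerate(f, start=1):
                        if regex.search(line):
                            matches.append(f"{filepath}:{lineno}: {line.rstrip()}")
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
    return "\n".join(matches) if matches else "No matches found"
```

Note that the return value is a plain string: tool outputs are ultimately just text fed back into the model's context.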
Below is a view of the complete API request that Opencode sends to the model alongside the user's first message. This example assumes an empty directory with no custom configuration.
POST api.openai.com/v1/responses

The system prompt:

```text
You are OpenCode, the best coding agent on the planet. You are an interactive CLI tool that helps users with software engineering tasks. Use the instructions below and the tools available to you to assist the user.

IMPORTANT: You must NEVER generate or guess URLs for the user unless you are confident that the URLs are for helping the user with programming.

If the user asks for help or wants to give feedback inform them of the following:
- ctrl+p to list available actions
- To give feedback, report issues at https://github.com/sst/opencode

# Tone and style
- Only use emojis if the user explicitly requests it
- Your output will be displayed on a command line interface
- Responses should be short and concise
- Use Github-flavored markdown for formatting
- Output text to communicate with the user
- NEVER create files unless absolutely necessary
- ALWAYS prefer editing existing files to creating new ones

# Professional objectivity
Prioritize technical accuracy and truthfulness over validating the user's beliefs. Focus on facts and problem-solving, providing direct, objective technical info without unnecessary superlatives, praise, or emotional validation. Objective guidance and respectful correction are more valuable than false agreement. When uncertain, investigate to find the truth first.

# Task Management
You have access to the TodoWrite tool to help manage and plan tasks. Use it VERY frequently to track progress and give the user visibility. These tools are EXTREMELY helpful for planning and breaking down larger tasks into smaller steps. Mark todos as completed immediately when done.

# Tool usage policy
- Prefer the Task tool for file searches to reduce context usage
- Call multiple tools in parallel when there are no dependencies
- Use specialized tools instead of bash commands when possible
- For file operations: Read (not cat), Edit (not sed), Write (not echo)
- When exploring codebases, use the Task tool instead of running search commands directly

Here is useful information about the environment:
<env>
Working directory: /Users/dev/my-project
Is directory a git repo: yes
Platform: darwin
Today's date: Mon Dec 22 2025
</env>
```
So, before the first user message, the model sees a system prompt and a description of the tools available to it. This is precisely a description of the scaffold available to the model.
From here the Opencode scaffold is just responsible for receiving the tool calls from the model, executing them, and providing the results back. Here is an illustrated example of what this looks like:
Initial Request

- Send tool definitions, system prompt, and user message
- Model receives full context to begin working

POST api.anthropic.com/v1/messages

```json
{
  "model": "claude-4.5-opus",
  "tools": [ /* ... */ ],
  "system": "You are OpenCode, the best coding agent...",
  "messages": [
    { "role": "user", "content": "The login form is throwing a validation error when the email contains a plus sign. Can you find the bug and fix it?" }
  ]
}
```

Search for Validation Code
- Model decides to search for email validation logic
- Uses grep tool to find relevant files

```json
{
  "output": [
    {
      "type": "function_call",
      "call_id": "call_001",
      "name": "grep",
      "arguments": "{\"pattern\":\"email.*valid\",\"path\":\"src\"}"
    }
  ]
}
```

Search Results
- Opencode executes grep and returns matching files
- Found validation logic in LoginForm.tsx

POST api.anthropic.com/v1/messages

```json
{
  "type": "function_call_output",
  "call_id": "call_001",
  "output": "src/components/LoginForm.tsx:23: const emailRegex = /^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$/;"
}
```

Read the File
- Model reads the file to understand the full context
- Needs to see surrounding code before making changes

```json
{
  "output": [
    {
      "type": "function_call",
      "call_id": "call_002",
      "name": "read",
      "arguments": "{\"filePath\":\"/Users/dev/project/src/components/LoginForm.tsx\"}"
    }
  ]
}
```

File Contents
- Opencode returns the file contents with line numbers
- Model can now see the buggy regex pattern

POST api.anthropic.com/v1/messages

```json
{
  "type": "function_call_output",
  "call_id": "call_002",
  "output": "21: const validateEmail = (email: string) => {\n22: // BUG: doesn't allow + in local part\n23: const emailRegex = /^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$/;\n24: return emailRegex.test(email);\n25: };"
}
```

Fix the Bug
- Model identifies the missing + in the character class
- Uses edit tool to make the fix

```json
{
  "output": [
    {
      "type": "function_call",
      "call_id": "call_003",
      "name": "edit",
      "arguments": "{\"filePath\":\"/Users/dev/project/src/components/LoginForm.tsx\",\"oldString\":\"const emailRegex = /^[a-zA-Z0-9._-]+@\",\"newString\":\"const emailRegex = /^[a-zA-Z0-9.+_-]+@\"}"
    }
  ]
}
```

Edit Confirmed
- Opencode confirms the edit was applied successfully
- File has been modified on disk

POST api.anthropic.com/v1/messages

```json
{
  "type": "function_call_output",
  "call_id": "call_003",
  "output": "Successfully edited /Users/dev/project/src/components/LoginForm.tsx"
}
```

Final Response
- Model summarizes the fix for the user
- Explains what was wrong and how it was fixed

```json
{
  "output": [
    {
      "type": "text",
      "text": "Found and fixed the bug in src/components/LoginForm.tsx.\n\nThe email validation regex was missing the + character in the allowed characters for the local part. I updated the regex from:\n\n /^[a-zA-Z0-9._-]+@.../\n\nto:\n\n /^[a-zA-Z0-9.+_-]+@.../\n\nEmails like user+tag@example.com will now validate correctly."
    }
  ]
}
```

And that’s it! In the next installment of the series we will use our knowledge to build our own fully-featured coding agent.
Appendix: Complete Tools Reference
Below is the complete list of all tools available in Opencode, organized by category.