OpenCode
The open-source AI coding agent — free models included, or connect Claude, GPT, Gemini, and 75+ other providers.
An open-source AI coding agent, honestly reviewed. Built for developers who want model freedom, a real TUI, and no vendor lock-in.
TL;DR
- What it is: Open-source (MIT) AI coding agent — think Claude Code, but the source code lives on your machine and Anthropic can’t raise your bill [3][5].
- Who it’s for: Developers who want terminal-native AI coding without being locked into a single model provider. Especially useful if you’re already paying for Claude, GPT, or Gemini through existing subscriptions [4][5].
- Cost savings: Claude Code requires an Anthropic subscription and charges per token. OpenCode is free software you run yourself, and it supports existing ChatGPT Plus/Pro and GitHub Copilot subscriptions — no extra per-license fees [4][homepage].
- Key strength: 75+ LLM providers through Models.dev, a proper TUI built on a custom rendering engine, and a serve command that lets it run as a persistent headless server [2][5][homepage].
- Key weakness: Still young (it crossed 120K stars fast, but the project moves quickly and APIs can shift), local LLM performance is aspirational on most consumer hardware, and the model-agnosticism that’s its biggest advantage is also its biggest complexity tax for non-technical users [1][5].
What is OpenCode
OpenCode is a terminal-first AI coding agent. You run it in your terminal alongside your editor, describe what you want built or changed, and it reads your files, writes code, runs commands, and iterates. The GitHub description is plain: “The open source coding agent.” [README][homepage]
The project was built by Dax Raad and the team behind SST (Serverless Stack), who had a specific frustration: switching between an editor and a browser to have AI conversations with copy-pasted code felt wrong. When Claude Code shipped with direct filesystem access, that clicked — but they wanted something with deeper terminal investment and model flexibility [3]. The result is a project that has reached 124,066 GitHub stars with 800 contributors and claims 5 million monthly active developers [homepage].
Three things distinguish it from the field. First, MIT license — you can run it, fork it, embed it in your own tooling, use it in CI/CD, or build commercial products on top of it without a lawyer on speed-dial [4][5]. Second, provider freedom — 75+ LLM providers through Models.dev, including every major hosted model plus local models via Ollama, and support for using your existing ChatGPT Plus or GitHub Copilot subscription instead of paying again [4][homepage]. Third, a real TUI — not a REPL that streams tokens to stdout, but a proper terminal UI application with its own rendering engine (TypeScript API layer over a native Zig backend) that handles resizing, scrolling, and syntax-highlighted diffs without falling apart [5].
The project ships two built-in agents: build (default, full filesystem and shell access for development work) and plan (read-only for analysis and exploration — denies file edits by default, asks before running bash) [README]. You switch between them with Tab.
Why people choose it over Claude Code, Cursor, and Aider
The reviews we synthesized land consistently: OpenCode wins on model freedom, terminal UX, and cost structure, and loses on maturity and local LLM viability today.
Versus Claude Code. This is the comparison that matters. Thomas Wiegold [5] ran Claude Code as his daily driver from early 2025, tried every alternative, and eventually switched. His conclusion: the feature gap has closed. Both tools support multi-file edits, shell execution, MCP integration, subagents, LSP integration, and slash commands. The core capability is nearly identical now. Where they split: Claude Code locks you to Anthropic’s models and charges accordingly. OpenCode supports 75+ providers and lets you swap per task. Claude Code is a polished REPL that streams to stdout. OpenCode is a proper TUI application that handles resize and scroll without breaking [5]. Claude Code has automatic workspace snapshots via /rewind. OpenCode has git-based /undo that reverts the last message and any file changes made [5].
Wiegold’s honest take on the local LLM angle: “My M1 Mac Mini and M5 MacBook Air don’t have the RAM for serious local coding models. But the architecture is ready for when hardware catches up, and that matters.” [5] This is the right framing — it’s aspirational today, but the optionality is real.
Versus Cursor and VS Code AI extensions. Martin Alderson [4] makes the clearest argument here: if you need AI code review on infrastructure that isn’t GitHub or GitLab — Bitbucket, self-hosted Gitea, anything else — you’re locked out of most AI code review tools that require repo access via OAuth. OpenCode running in CI/CD with a plain YAML pipeline and a Git diff prompt works everywhere. The security angle matters too: “I don’t want to give another SaaS product access to my repositories.” [4]
On the subscription economics. Alderson flags something that OpenCode’s positioning makes possible and Claude Code’s doesn’t: using your existing ChatGPT Plus, Pro, or Business subscription with OpenCode at no extra cost. “There are no additional per-license, per-user, per-developer, or per-CI fees.” [4] If you’re already paying $20/mo for ChatGPT Plus and you want to use an AI coding agent in CI/CD, OpenCode is the only path that doesn’t stack another bill on top.
On real-world build reliability. Ethan Cooper [1] ran five builds using OpenCode “the way I really build: fast, parallel, and slightly aggressive.” Four were effectively one-shot successes. One failed due to a stubborn dependency issue. His summary: “4 wins, 1 loss is the most honest summary I can give.” The wins were real developer work, not toy demos. The loss exposed where current agent workflows still hit walls — complex multi-step dependency resolution where the agent would loop without escaping. This matches what you’d expect from any coding agent at this maturity level [1].
Features
Core agent capabilities:
- Terminal-based interface (TUI), desktop app (beta on macOS/Windows/Linux), and IDE extensions for VS Code, Cursor, Zed, and Windsurf [homepage][5]
- build agent (default, full access) and plan agent (read-only, for exploration), switchable with Tab [README]
- LSP integration — automatically loads the right language server for the LLM [homepage]
- Multi-session: start multiple agents in parallel on the same project [homepage]
- Git-based /undo and /redo for rollback [5]
- AGENTS.md project initialization — analyzes your project and generates a config you commit to Git [docs]
Model flexibility:
- 75+ LLM providers through Models.dev [homepage]
- OpenCode Zen — curated, benchmarked models the team has tested specifically for coding agent work [homepage][3]
- GitHub Copilot login (use your existing subscription) [homepage]
- ChatGPT Plus/Pro login (use your existing subscription) [homepage][4]
- Local models via Ollama [5]
Headless/server mode:
- opencode serve starts a web UI accessible from any browser [2]
- Can run as a systemd service on a home server or VPS, accessible over VPN [2]
- Supports scheduled overnight jobs: automated test writing, documentation updates, convention enforcement — wakes up to PRs ready for review [2]
- Works in CI/CD pipelines via opencode run with a prompt and model flag [4]
Sharing and collaboration:
- Share links: generate a URL for any session for reference or debugging [homepage]
Installation:
- curl install script, npm/bun/pnpm/yarn, Homebrew, Scoop, Chocolatey, pacman/AUR (Arch), Docker, Nix, Mise [README]
- Desktop app: .dmg (macOS), .exe (Windows), .deb/.rpm/AppImage (Linux) [README]
Pricing: SaaS vs self-hosted math
OpenCode itself: $0. MIT license. The software is free [README].
What you pay for: model API access. This is the part that matters, and where OpenCode’s design is clever.
If you have existing subscriptions:
- ChatGPT Plus ($20/mo): works directly with OpenCode. No extra AI subscription needed [4][homepage].
- GitHub Copilot (Individual $10/mo, Business $19/seat): works directly [homepage].
- In both cases, OpenCode in CI/CD adds $0 in additional per-run or per-developer costs [4].
OpenCode Zen: The team’s own model hosting — a curated set of models they’ve benchmarked for coding agent work. Available at opencode.ai/auth. Pricing details not publicly listed; requires account signup [homepage][3].
Direct API keys: any provider supported by Models.dev at that provider’s standard API rates. Anthropic, OpenAI, Google, Deepseek, etc.
Claude Code for comparison:
- Requires Anthropic API access
- Max plan: $100/mo per user
- Pro plan: $20/mo with usage limits
- Enterprise pricing varies, but it’s per-seat
Cursor for comparison:
- Hobby: $0 (limited), Pro: $20/mo per user, Business: $40/seat/mo
Concrete scenario for a solo developer: A developer already paying $20/mo for ChatGPT Plus runs OpenCode in their terminal and in CI/CD. Their effective AI coding agent cost: $0 additional. The same developer on Claude Code would pay $20–100/mo in Anthropic subscriptions. Savings: $240–$1,200/year [4][homepage].
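The arithmetic behind that savings range is just the monthly delta times twelve:

```shell
# Yearly savings if OpenCode adds $0 on top of an existing ChatGPT Plus plan,
# versus Claude Code at $20/mo (Pro) to $100/mo (Max).
low=$((20 * 12))
high=$((100 * 12))
echo "\$${low}-\$${high}/year"   # prints "$240-$1200/year"
```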
The math shifts if you’re using heavy API usage without an existing subscription, but for anyone who already pays for one of the supported consumer plans, the model is unusually attractive.
Deployment reality check
Terminal use (simplest path):
curl -fsSL https://opencode.ai/install | bash
Navigate to your project, run opencode, run /connect to configure a provider. Working in under 10 minutes for a developer who already has API keys or an existing subscription [docs].
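Provider and model selection can also be pinned in a project-level config file. A minimal sketch, assuming the opencode.json format and schema URL described in the docs (the model identifier here is illustrative, not a recommendation):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514"
}
```

Committing a file like this alongside the generated AGENTS.md keeps a whole team on the same model and conventions.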
Desktop app: Download the .dmg/.exe from the releases page. Beta, but functional on macOS, Windows, and Linux [README].
As a persistent server (the power use case): Roger Garmendia [2] documents running OpenCode as a systemd user service on a home server (Ryzen 9, 64GB RAM) with Nginx Proxy Manager and WireGuard VPN. The opencode serve command starts a web UI at a configurable port. With loginctl enable-linger, the service survives logout and runs permanently. Systemd timers handle overnight automated jobs. The result: PRs waiting for review every morning, created by OpenCode running while he slept [2].
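Garmendia’s setup can be sketched as a systemd user unit. Everything below — the binary path, port flag, and unit name — is an illustrative assumption, not his published config:

```ini
# ~/.config/systemd/user/opencode.service (illustrative sketch)
[Unit]
Description=OpenCode headless server
After=network-online.target

[Service]
# Binary path and port are assumptions; adjust to your install location.
ExecStart=%h/.opencode/bin/opencode serve --port 4096
WorkingDirectory=%h/projects
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now opencode.service, then loginctl enable-linger $USER so it keeps running after logout, as described above.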
In CI/CD: opencode run -m <model> "<prompt>" is the key command. Pipe in a Git diff, output to a file, post to your Git provider’s API. Works with any YAML CI system — GitHub Actions, GitLab CI, Bitbucket Pipelines, anything [4].
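A minimal job might look like the sketch below (GitLab CI syntax for illustration; the npm package name, pinned version, and model identifier are assumptions — substitute your own):

```yaml
# Illustrative CI job: AI review of the current branch's diff.
ai-review:
  script:
    - npm install -g opencode-ai@0.6.0          # pin a known-good version
    - git fetch origin main
    - git diff origin/main...HEAD > changes.diff
    - opencode run -m anthropic/claude-sonnet-4 "Review this diff for bugs and risky changes: $(cat changes.diff)" > review.md
    # post review.md as a comment via your Git host's API
```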
What can go sideways:
- The auth JSON file for OpenAI Codex expires every 14 days in the CI/CD use case — requires rotation handling [4].
- Local LLMs require separate Ollama setup; OpenCode doesn’t ship inference [5].
- Windows support recommends WSL for best compatibility; native Windows has some limitations [docs].
- The project moves fast. APIs and configuration format can shift between releases. Pin your version in CI/CD [1].
- Ethan Cooper [1] hit a failure case involving complex dependency resolution where the agent looped without resolution — not a deal-breaker, but worth knowing that “4 out of 5” is the honest current reliability rate on ambitious one-shot builds.
Pros and Cons
Pros
- MIT license, genuinely free software. Use it in CI/CD, embed it, fork it, build commercial tooling on top — no restrictions [4][5]. This matters more than it sounds in an AI tools space full of “free for personal use” licenses.
- 75+ model providers. Not locked to Anthropic or any single vendor. Swap models per task. Use local models when hardware supports it [5][homepage].
- Reuse existing AI subscriptions. ChatGPT Plus and GitHub Copilot work directly — no additional per-seat or per-run fees [4][homepage].
- Real TUI, not a REPL. Proper terminal UI with custom rendering — handles resize, scroll, syntax-highlighted diffs without breaking [5]. Matters when you live in the terminal.
- Headless server mode. opencode serve turns it into a persistent agent you can access from anywhere and schedule for overnight work [2]. This use case is underrated.
- Works everywhere in CI/CD. Doesn’t require GitHub or GitLab OAuth. Works on Bitbucket, self-hosted Gitea, any Git host, any YAML CI system [4].
- 120K+ GitHub stars, 800 contributors, active community. This isn’t a one-person side project [homepage][3].
- Multilingual documentation. README translated into 20+ languages — signals genuine global adoption [README].
Cons
- Young and fast-moving. The codebase ships frequently. Config formats shift, APIs change, behavior can be inconsistent between versions. Pin versions in production use [1].
- Local LLMs are aspirational, not practical, for most hardware. The architecture is ready but most developers don’t have the RAM for serious local coding models today [5].
- Agent reliability is “4 out of 5” on ambitious tasks. Complex multi-step builds can loop [1]. This is an honest constraint of current LLM coding agents broadly, but worth naming.
- Non-technical users need to configure model providers. The tool asks you to bring your own API keys or connect a subscription. For non-technical founders, that’s a setup hurdle Claude Code partially abstracts away.
- No automatic workspace snapshots. Claude Code’s silent, always-on snapshot system is more forgiving than OpenCode’s git-based /undo — the latter requires git history to exist and is more explicit [5].
- OpenCode Zen pricing opaque. The curated model service the team recommends for new users doesn’t have public pricing — requires signing up to find out [homepage].
- Desktop app is still beta. Works, but expect rough edges on macOS, Windows, and Linux [README].
Who should use this / who shouldn’t
Use OpenCode if:
- You’re a developer who lives in the terminal and wants a proper TUI, not a streaming REPL.
- You’re already paying for ChatGPT Plus or GitHub Copilot and don’t want to pay again for an AI coding agent.
- You need AI code review or agent automation in CI/CD pipelines that aren’t GitHub or GitLab.
- You want to self-host a persistent coding agent server accessible from multiple devices.
- You have a philosophical or practical preference for MIT-licensed tools over closed-source ones.
- You want to experiment with multiple models and swap based on task or cost.
Consider staying on Claude Code if:
- You want automatic workspace snapshots without thinking about git history.
- You prefer a polished, opinionated experience over maximum configurability.
- You’re on a team that has standardized on Anthropic’s model stack and that consistency matters.
- You want the model maker’s first-party implementation of their own prompting optimizations.
Skip both and use Cursor if:
- You want AI coding in a full GUI IDE with inline diffs, visual context management, and you don’t want to touch the terminal.
Skip both and use Aider if:
- You want a simpler, more battle-tested open-source CLI agent with a longer track record and less UI complexity.
Alternatives worth considering
- Claude Code — the obvious comparison. First-party Anthropic agent, automatic snapshots, locked to Claude models, subscription required. Cleaner onboarding for non-technical users, no model flexibility [5].
- Aider — older, more battle-tested open-source CLI coding agent. Less TUI investment, more focused on the core coding loop. MIT licensed. Good if you want less surface area.
- Cursor — GUI IDE with AI built in. Different category (IDE vs terminal agent), but competes for the “AI coding tool” budget. Pro: $20/mo, Business: $40/seat [5].
- GitHub Copilot CLI / Workspace — if you’re already paying for Copilot, worth evaluating before adding another tool. Less agent-y, more completion-y.
- Codex CLI — OpenAI’s open-source terminal agent. Narrower, less TUI investment, but MIT licensed and directly supported by OpenAI [4].
- Continue — open-source IDE extension (VS Code, JetBrains). If you want model flexibility but prefer staying in an IDE rather than a terminal.
- Warp — the listed SaaS competitor in the profile. Terminal emulator with AI features built in — different angle (smart terminal vs pure coding agent).
For a developer who currently pays for Claude Code or Cursor and wants to evaluate an open-source alternative, the realistic comparison is OpenCode vs Aider. OpenCode wins on TUI quality, feature breadth, and the server/CI use cases. Aider wins on maturity and simplicity.
Bottom line
OpenCode is the most honest open-source answer to Claude Code that currently exists — not a toy, not a clone, but a project with genuine design opinions (real TUI, provider freedom, serve mode) built by people who actually use terminals. The 124K GitHub stars and 5M monthly active developer claims aren’t just hype numbers; the CI/CD use case alone, which no hosted AI code review tool handles cleanly for non-GitHub/GitLab repositories, represents real developer pain solved cheaply. The limitations are real too: it’s fast-moving software, local LLMs aren’t practical on most hardware yet, and the one-shot reliability ceiling on complex builds is honestly about 80%. But for a developer already paying for ChatGPT Plus who wants a capable terminal coding agent at $0 additional cost, or a team that needs headless AI agents running in CI without handing over repository OAuth to a third party, the math is clear. The software is free, the model flexibility is genuine, and the project has enough velocity and community that “too early to bet on” stopped being a valid objection sometime around the 100K star mark.
Sources
- [1] Ethan Cooper, Medium — “I Tried OpenCode Like I Actually Use It: Setup, Five ‘One-Shot’ Builds, and the One That Broke Me” (Jan 14, 2026). https://medium.com/@EthanCooperwrtier/i-tried-opencode-like-i-actually-use-it-setup-five-one-shot-builds-and-the-one-that-broke-me-f7584acb29e1
- [2] Roger Garmendia (rogs.me) — “OpenCode as a server: AI agents that work while I sleep” (Apr 2, 2026). https://rogs.me/2026/04/opencode-as-a-server-ai-agents-that-work-while-i-sleep/
- [3] Madison Kanna’s Substack — “Building AI Agents, Open Code And Open Source: A Conversation with Dax Raad” (Jan 2, 2026). https://madisonkanna.substack.com/p/building-ai-agents-open-code-and
- [4] Martin Alderson — “Using OpenCode in CI/CD for AI pull request reviews”. https://martinalderson.com/posts/using-opencode-in-cicd-for-ai-pull-request-reviews/
- [5] Thomas Wiegold — “I Switched From Claude Code to OpenCode — Here’s Why”. https://thomas-wiegold.com/blog/i-switched-from-claude-code-to-opencode/
Primary sources:
- GitHub repository and README: https://github.com/sst/opencode (124,066 stars, MIT license, 800+ contributors)
- Official website: https://opencode.ai
- Documentation: https://opencode.ai/docs
- Desktop download: https://opencode.ai/download
- OpenCode Zen: https://opencode.ai/zen