unsubbed.co

Code

Code is a fast, extensible CLI coding agent that runs as a self-hosted tool.

A fork of OpenAI’s Codex CLI, honestly reviewed. Focused on what you actually get: browser integration, background code review, and multi-agent commands running locally.

TL;DR

  • What it is: Apache-2.0 licensed fork of OpenAI’s codex CLI — a terminal-based coding agent that orchestrates multiple AI models from OpenAI, Anthropic, Google, and others [README].
  • Who it’s for: Developers who want a Cursor/GitHub Copilot alternative they fully control, can run with any provider’s API key, and extend via MCP. Not for non-technical founders — this is a terminal tool [README].
  • Cost model: Free software, but you pay for AI API usage. No per-seat licensing, no subscription to the tool itself. Cost is entirely driven by how many tokens your agents burn [README].
  • Key strength: Auto Drive (multi-agent orchestration) and Auto Review (background ghost-commit watcher) are meaningfully differentiated from vanilla Codex. The MCP integration plugs directly into Claude Desktop, Cursor, and Windsurf [README].
  • Key weakness: No third-party long-form reviews exist yet — the project is young (3,623 GitHub stars) and the community is still forming. You’re betting on a fork maintained by a small team [merged profile].

What is Code

Every Code (installed as code or coder if VS Code already owns the code command) is a local terminal coding agent. It started as a fork of openai/codex — OpenAI’s own CLI agent — but the just-every team has pushed well beyond the upstream in the ~9,147 commits since the fork [README].

The core pitch is in the GitHub description: “push frontier AI to its limits.” In practice that means taking a single-session, single-model chat agent and turning it into something closer to a local orchestration layer: multiple sub-agents handling different parts of a task, a background reviewer watching every code change, a browser integration that can screenshot running apps, and a theming system so the terminal UI doesn’t look like it was designed by a compiler.

What makes it self-hostable in the meaningful sense is the license and the API key model. Every Code is Apache-2.0, meaning you can fork it, embed it, or ship it in your own tooling without restriction [merged profile]. You bring your own API keys — OpenAI, Anthropic, or Google — so your code and prompts never route through a shared proxy. The tool runs entirely local; the only external calls are the model inference requests you initiate.

As of this review: 3,623 GitHub stars, 229 forks, Apache-2.0 [merged profile].


Why people choose it over vanilla Codex, Claude Code, and GitHub Copilot CLI

No dedicated third-party reviews of Every Code have been published yet — the project is recent enough that the search surface is thin. What follows is drawn from the README and the upstream Codex community’s documented pain points.

Versus openai/codex (the upstream). The upstream Codex CLI is competent for single-model, single-session tasks but has no orchestration story and no background processes. Every Code adds Auto Drive, Auto Review, Code Bridge, multi-agent commands, and MCP support on top. If you’re already using codex, this fork gives you all of that without switching tools [README].

Versus Claude Code. Claude Code is Anthropic-native, meaning it’s locked to Anthropic models. Every Code is provider-agnostic — you can wire it to OpenAI’s gpt-5.3-codex, Anthropic’s Claude, or any provider with a compatible endpoint. If you want to model-shop or hedge against a single provider’s pricing, Every Code’s design accommodates that. The trade-off is that Claude Code has Anthropic’s full support and documentation behind it.

Versus GitHub Copilot CLI. Copilot CLI is a GitHub product, meaning Microsoft cloud, Microsoft telemetry, and a $10–$19/month per-user subscription. Every Code is free software you run on your own machine with whatever model you want. The functionality is narrower (Copilot CLI is not trying to do multi-agent orchestration), but for teams who care about data residency, the comparison matters.

Versus Cursor and similar IDE-integrated tools. Cursor is an IDE fork, not a terminal agent. The two aren’t direct replacements — Cursor integrates into a GUI editing workflow, Every Code is terminal-native. The overlap is in agentic tasks: “fix this bug”, “implement this feature”. Some developers use both; others prefer to keep everything in the terminal.

The community that reaches for Every Code tends to be developers who are already comfortable with openai/codex, want multi-agent orchestration without leaving the terminal, and don’t want a per-seat license for infrastructure they use in CI or scripted workflows.


Features

Based on the README and project documentation:

Auto Drive — multi-agent orchestration:

  • Assigns planning, coding, and validation to separate agents in a coordinated session [README]
  • Self-heals: if a sub-agent fails, the coordinator recovers and retries
  • Model support: gpt-5.3-codex for planning, gpt-5.3-codex-spark for fast coding loops [README]
  • Reasoning control: medium | high | xhigh — you dial how hard the model thinks [README]
  • Bounded queues and history caps prevent long sessions from degrading [README]
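The "self-heals" behavior above is, at its core, a coordinator/retry pattern. Here is a minimal sketch of that pattern, not the project's actual implementation; `run_agent` is a hypothetical stand-in for dispatching one sub-agent step, rigged to fail once so the recovery path is visible:

```shell
# Sketch of a self-healing coordinator loop with a bounded retry budget.
# run_agent is a hypothetical stand-in for one sub-agent step; it simulates
# one failure followed by success to demonstrate recovery.
ATTEMPT=0
MAX_ATTEMPTS=3

run_agent() {
  ATTEMPT=$((ATTEMPT + 1))
  [ "$ATTEMPT" -ge 2 ]  # simulated: first attempt fails, second succeeds
}

until run_agent; do
  if [ "$ATTEMPT" -ge "$MAX_ATTEMPTS" ]; then
    echo "coordinator: giving up after $ATTEMPT attempts"
    exit 1
  fi
  echo "coordinator: sub-agent failed, retrying"
done
echo "coordinator: step complete after $ATTEMPT attempts"
```

The bounded retry budget mirrors the README's "bounded queues and history caps" theme: recovery is attempted, but never unboundedly.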

Auto Review — background code watcher:

  • Ghost-commit watcher: runs in a separate git worktree whenever a turn changes code [README]
  • Uses codex-5.1-mini-high to review changes and surface ready-to-apply fixes [README]
  • Non-blocking: reports as history-visible notes, not foreground task injections [README]
  • Decoupled from Auto Drive: Esc returns control immediately while review finalizes in background [README]
  • Preserves branch and worktree context across Auto Drive sessions [README]
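The separate-worktree trick is plain git: a second checkout of the same repository gives a reviewer a frozen snapshot to read while the main working tree keeps changing. A hand-rolled sketch of the idea (paths are illustrative, and this is not the tool's internal code):

```shell
# Check out the current commit into a second worktree so a reviewer
# (human or model) reads a stable snapshot without blocking edits in
# the main checkout. The ../review-snapshot path is illustrative.
git worktree add ../review-snapshot HEAD

# ... review runs against ../review-snapshot while you keep working ...

git worktree remove ../review-snapshot
```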

Browser Integration:

  • CDP (Chrome DevTools Protocol) support for headless browsing [README]
  • Screenshots captured inline inside the terminal [README]
  • Useful for visual regression testing or scraping tasks from within an agent session

Code Bridge:

  • Sentry-style local bridge that streams errors, console output, and screenshots from running apps back into the agent [README]
  • Ships its own MCP server [README]
  • Install by asking the agent: pull https://github.com/just-every/code-bridge [README]

Multi-agent commands:

  • /plan — breaks a task into sub-tasks and assigns agents [README]
  • /code — coding-focused multi-agent run [README]
  • /solve — problem-solving coordination across multiple agents [README]

Unified settings and theming:

  • /settings overlay for limits, approvals, provider wiring, and theming [README]
  • /themes command: switch between presets, customize accents, preview live [README]
  • Accessibility-focused preset options [README]

MCP support:

  • Extend with filesystem, databases, APIs, or custom tools [README]
  • Plugs into Claude Desktop, Cursor, and Windsurf as an MCP provider [README]

Safety modes:

  • Read-only mode, approval gates, workspace sandboxing [README]
  • Memory and project docs management for persistent context [README]

Non-interactive / CI mode:

  • Can run headless in CI pipelines [README]
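The README excerpt reviewed here doesn't spell out the CI invocation. The upstream codex CLI exposes a non-interactive `exec` subcommand, and if the fork keeps it, a pipeline step might look like the following; treat the subcommand and flags as unverified assumptions and check the project docs before relying on this:

```shell
# Hypothetical CI step, assuming the fork retains upstream codex's
# non-interactive exec subcommand and reads OPENAI_API_KEY from the
# environment. Verify against the README before use.
export OPENAI_API_KEY="${OPENAI_API_KEY:?set via CI secrets}"
npx -y @just-every/code exec "run the test suite and summarize any failures"
```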

Pricing: SaaS vs self-hosted math

Every Code is not a SaaS — it’s a CLI tool. There’s no subscription to the tool itself. The cost equation is entirely about AI model usage.

The tool: $0. Apache-2.0, install via npm [README].

Your actual costs:

  • Every Code CLI: free
  • OpenAI API (gpt-5.3-codex): usage-based; pricing data not published
  • Anthropic API (Claude models): $3–$15 per million tokens, depending on model
  • Google Gemini API: usage-based

The meaningful comparison is not Every Code vs. a self-hosted Zapier — it’s Every Code vs. paying $10–$19/month per seat for GitHub Copilot, or $20/month for Claude Pro to use Claude Code interactively.

Where cost compounds: Auto Drive orchestration spins up multiple agents per task. A single complex engineering task can consume a lot of tokens across planning, coding, and review agents running in parallel. If you’re using expensive models (gpt-5.3-codex, Claude Opus 4) for every step, the per-task cost adds up. The xhigh reasoning mode intentionally burns more tokens for harder problems [README].

Where cost stays manageable: Auto Review uses codex-5.1-mini-high — a cheaper model — for background review tasks [README]. The medium reasoning mode is available for simpler coding loops. You can mix models per task type to optimize cost.
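To make the model-mixing math concrete, here is a back-of-envelope sketch in shell arithmetic. Every number is an illustrative assumption, not a benchmark: the token counts are invented, and $15 and $1 per million tokens stand in for an expensive planning model and a cheap review model.

```shell
# Back-of-envelope per-task cost, in integer cents so POSIX shell
# arithmetic stays exact. All figures are illustrative assumptions.
PLAN_TOKENS=150000      # assumed: planning agent on an expensive model
REVIEW_TOKENS=300000    # assumed: background review on a cheap model
PLAN_PRICE=15           # assumed: USD per million tokens
REVIEW_PRICE=1          # assumed: USD per million tokens

PLAN_CENTS=$((PLAN_TOKENS * PLAN_PRICE * 100 / 1000000))
REVIEW_CENTS=$((REVIEW_TOKENS * REVIEW_PRICE * 100 / 1000000))
echo "plan: ${PLAN_CENTS}c  review: ${REVIEW_CENTS}c  total: $((PLAN_CENTS + REVIEW_CENTS))c"
# → plan: 225c  review: 30c  total: 255c
```

Swap the unit prices and the point becomes obvious: the expensive planning step dominates the bill, which is why routing background review to a cheaper model matters.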

No data is available on average token consumption per session or monthly costs for typical development workflows. The project doesn’t publish benchmarks. You’d need to run it for a week and check your provider dashboard to know what it’ll cost your team.


Deployment reality check

Every Code installs in under a minute for any developer:

npx -y @just-every/code
# or
npm install -g @just-every/code

If code is already claimed by VS Code, the CLI also registers as coder [README].

Authentication options:

  • Sign in with ChatGPT (Plus/Pro/Team) — uses models available to your plan
  • Set OPENAI_API_KEY as environment variable for API key auth
  • Other providers via their respective API keys and OPENAI_BASE_URL override
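The API-key path works through standard environment variables. A sketch of the wiring, where the endpoint URL is a placeholder rather than a real provider:

```shell
# Direct OpenAI auth:
export OPENAI_API_KEY="sk-..."   # your real key

# Or point the CLI at any OpenAI-compatible endpoint
# (placeholder URL; substitute your provider's base URL):
export OPENAI_BASE_URL="https://api.example-provider.com/v1"

npx -y @just-every/code
```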

What you need:

  • Node.js and npm
  • An API key for whichever model provider you’re using
  • A terminal

There is no Docker, no Postgres, no VPS required. This is a local CLI, not a web application.

What can go sideways:

  • The fork moves fast. 9,147 commits in and the changelog includes phrases like “long-session stability sweep” and “bounded drop/trim behavior” — which implies long sessions had instability problems that are actively being patched [README]. If you need stability guarantees, this is a project to watch, not immediately adopt for production workflows.
  • No dedicated managed cloud option. Unlike Activepieces or n8n, there’s no “just pay us and we host it” path. You run this locally or in CI; there’s no SaaS tier.
  • Provider dependency. Every Code abstracts providers, but you still need a working API key and billing relationship with at least one AI provider. If OpenAI or Anthropic has an outage, your tooling stops.
  • The code command conflict. VS Code’s CLI also registers as code. The project handles this gracefully with coder as a fallback, but it’s a friction point on developer machines where VS Code is already installed [README].
  • No third-party production stories yet. There are no public case studies, no Reddit threads saying “we replaced Copilot with Every Code and here’s what happened.” The project is young. The risk of adopting it is higher than for a more mature tool.

Realistic setup time for a developer: 5 minutes to first run. Understanding Auto Drive and configuring it for a real workflow: a few hours of experimentation.


Pros and Cons

Pros

  • Apache-2.0 licensed. No usage restrictions, no commercial clauses. You can fork it, embed it in your own product, ship it in CI pipelines without a legal conversation [merged profile].
  • Provider-agnostic. OpenAI, Anthropic, Google, or any compatible endpoint. You’re not locked to one company’s model releases or pricing changes [README].
  • Auto Review is a genuine differentiator. Background code review in a separate worktree that doesn’t block your main session is not a feature most terminal coding agents offer [README].
  • MCP-native. Extends well into the Claude Desktop / Cursor ecosystem as an MCP provider [README].
  • No per-seat fee for the tool. In a team of five, you pay for API tokens, not $10–$19/user/month for a software license.
  • Active development. 9,147 commits, recent stability hardening, new model support added regularly [README].

Cons

  • Young project with stability work still in progress. Changelog entries like “long-session stability sweep” and bounded queue fixes suggest the core is still being hardened [README]. Using it for critical workflows today means tolerating rough edges.
  • No third-party reviews or community track record. At 3,623 stars, this hasn’t had the adoption to generate public production war stories. You’re early [merged profile].
  • Token costs are unpredictable. Multi-agent orchestration burns tokens across multiple sub-agents. Without published benchmarks, you can’t estimate monthly costs until you’ve run it for a while.
  • Terminal-only. If your team works primarily in GUI editors and expects code suggestions inline as they type, this is the wrong tool. This is a task-runner, not an autocomplete engine.
  • Tied to OpenAI’s Codex lineage. The “gpt-5.3-codex” and “codex-5.1-mini-high” model references suggest primary testing is against OpenAI’s Codex model family. Anthropic/Google provider support may be less polished [README].
  • code command conflict with VS Code is minor but annoying on developer machines [README].
  • No offline mode. Every meaningful operation requires an outbound API call. Local/offline inference (Ollama etc.) is not documented as a supported path.

Who should use this / who shouldn’t

Use Every Code if:

  • You’re a developer already using openai/codex CLI and want multi-agent orchestration and Auto Review without switching tools.
  • You want a provider-agnostic coding agent that lets you swap between OpenAI, Anthropic, and Google models depending on task type and cost.
  • You need to run AI coding agents in CI pipelines without paying per-seat software licenses.
  • You’re building tooling that embeds a coding agent and need an Apache-2.0 base to build on.
  • You’re comfortable with “active development” and can tolerate occasional rough edges for early access to features like Auto Drive.

Skip it (use Claude Code instead) if:

  • You’re on Anthropic’s Max plan and want the deepest integration with Claude models. Claude Code is the first-party tool for that workflow.
  • You want official support and guaranteed stability for a production-critical workflow.
  • You prefer a managed service over managing your own CLI and API keys.

Skip it (use GitHub Copilot CLI instead) if:

  • Your team is already on GitHub Enterprise and Copilot is included in the license.
  • You want tight GitHub PR integration and inline IDE suggestions alongside CLI usage.

Skip it entirely if:

  • You don’t write code. This is a developer tool. Non-technical founders have no use case here.
  • You’re looking for a self-hosted app with a web UI. There isn’t one.

Alternatives worth considering

  • openai/codex — the upstream project. Simpler, stable, well-documented, but lacks Auto Drive, Auto Review, multi-agent commands, and theme system. Start here if you want to understand what Every Code is building on.
  • Claude Code — Anthropic’s first-party CLI agent. Tight Claude integration, well-maintained, similar terminal-native UX. Locked to Anthropic models.
  • Aider — the most mature open-source terminal coding agent. More established community, more third-party reviews, strong git integration. Less orchestration depth than Every Code’s Auto Drive.
  • GitHub Copilot CLI — per-seat subscription, Microsoft cloud, IDE-plus-terminal workflow. Better for teams already inside the GitHub ecosystem.
  • Continue — VS Code / JetBrains plugin, provider-agnostic, strong local model (Ollama) support. Better for developers who want inline IDE suggestions, not task-based terminal agents.
  • Cursor — IDE fork, not a terminal agent. Different use case but often compared because both do “agentic” coding tasks.

The direct comparison to evaluate carefully is Every Code vs. Aider. Both are open-source terminal agents, both are provider-flexible. Aider has a much larger community and more third-party documentation. Every Code has Auto Drive and Auto Review, which Aider doesn’t match. The choice comes down to whether you need orchestration depth or community maturity.


Bottom line

Every Code is a technically interesting fork of openai/codex that has gone further than the upstream on multi-agent orchestration, background review, and MCP integration. The Auto Review feature in particular — a background watcher that reviews every code change in a separate worktree without blocking your session — is not something you find in most competing tools. The Apache-2.0 license and provider-agnostic design are genuine advantages for developers who don’t want to be locked into a single AI vendor.

The honest caveat is that this is an early-stage project with 3,623 stars and no published third-party production reviews. The changelog still reads like active stabilization work. If you need a battle-tested coding agent today, Aider or Claude Code are safer bets. If you’re a developer who wants to be early to multi-agent terminal tooling and can tolerate some rough edges, Every Code is worth an afternoon of evaluation.


Sources


Primary sources: the project README and GitHub repository metadata, cited inline throughout as [README] and [merged profile].