Kodus
Kodus is a TypeScript-based application for AI-assisted code review, built to improve code quality.
AI-powered code review, honestly reviewed. What you actually get when you swap CodeRabbit for something you control.
TL;DR
- What it is: Open-source (AGPL v3) AI code review tool — think CodeRabbit, but you bring your own LLM key and the vendor can’t mark up your inference costs [README][2].
- Who it’s for: Engineering teams paying $24–$30/user/month for CodeRabbit or $17–$25/user/month for Claude Code subscriptions, who want the same automated PR reviews without per-seat LLM markups [2][3].
- Cost savings: CodeRabbit runs $24–$30/user. Kodus Teams is $10/user/month, and you pay your LLM provider directly at cost with no middleman markup. Self-hosted Community edition is free with your own API keys [2][README].
- Key strength: Model-agnostic BYOK architecture — run your reviews on GPT, Claude, Gemini, Llama, or any OpenAI-compatible endpoint, and pay the model provider directly. No markups, no lock-in [README].
- Key weakness: Only 1,010 GitHub stars as of this review — significantly younger and less battle-tested than CodeRabbit. The Community edition caps you at 10 rules and 3 plugins, which is tight for any team with real standards. And there’s no native IDE plugin [README][2].
What is Kodus
Kodus is an AI code review platform that plugs into your pull request workflow and leaves automated comments on every PR. The reviewer inside is called Kody — and the pitch the company leads with is “The Open Source Alternative to CodeRabbit” [homepage]. That’s a cleaner and more useful framing than most tools in this space, because it tells you exactly what comparison to make.
The thing that actually distinguishes Kodus from CodeRabbit and every other AI review tool is the model-agnostic BYOK architecture. You connect your own API keys for whatever LLM you want — Claude, GPT-5, Gemini, Llama, Kimi, or any endpoint that speaks OpenAI’s protocol — and Kodus routes your review requests there directly. No hidden multiplier on token costs, no vendor lock-in on the model [README]. If Anthropic raises Claude prices, you switch to Gemini in your config and nothing else changes.
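What "any endpoint that speaks OpenAI's protocol" means in practice: the review model is just a chat-completions URL plus a key, so switching providers is a one-line config change. An illustrative request shape (not Kodus internals — this is the generic OpenAI-compatible API that providers like Groq, Together AI, and Fireworks AI also expose):

```shell
# Any server accepting this request shape can back the reviews: OpenAI,
# Together, Fireworks, Groq, or a local Ollama at http://localhost:11434/v1.
BASE_URL="https://api.openai.com/v1"   # swap this line to switch providers
curl -s "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini",
       "messages": [{"role": "user", "content": "Review this diff for bugs: ..."}]}'
```

Because every listed provider implements this same surface, "switch to Gemini in your config" really is the whole migration.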
The second differentiator is rule granularity. Where most tools let you set review guidelines at the repository level, Kodus lets you scope rules to specific repos, folders, files, or individual PR types — all version-controlled alongside your code [2]. You can also import rules you’ve already written for Cursor, Copilot, or Claude without rewriting them [4].
The third is context injection via MCP. Kodus can pull in Jira tickets, Notion docs, Linear issues, CI results, or Playwright test outputs and surface that context inside the same review comment [homepage][2]. The reviewer isn’t just looking at the diff — it’s looking at the diff against what the ticket said the code was supposed to do.
The project currently sits at 1,010 GitHub stars. The codebase is a monorepo with three backend services (api, webhooks, worker) and a Next.js frontend [README]. The license is AGPL v3 [README badge] — which means you can self-host and use it freely, but if you build a commercial product around it you need to open-source your changes. That’s more restrictive than MIT but more permissive than a commercial-only license.
Why people choose it over CodeRabbit, Claude Code, and GitHub Copilot
The comparison Kodus picks for itself is CodeRabbit, and it’s the most honest version of the pitch. The second-most-relevant comparison is Claude Code. Here’s how the trade-offs actually fall out.
Versus CodeRabbit. Kodus published a head-to-head comparison [2] testing both tools against real PRs from Sentry, Cal.com, Grafana, Discourse, and Keycloak — 38 bugs across 13 critical, 16 high, and 9 medium severity cases. Kodus caught 30 out of 38 (79%). CodeRabbit caught 15 out of 38 (39%). Take vendor-produced benchmarks with appropriate skepticism, but the methodology is public and the PRs tested are real open-source repositories you can inspect yourself.
The structural differences are harder to dismiss. CodeRabbit locks you to one LLM provider with no model choice or cost control. Kodus routes to whatever key you bring. CodeRabbit’s review rules apply to the whole repository; Kodus scopes them per file, folder, or PR type. CodeRabbit has no metrics dashboard or tech debt backlog; Kodus has both. The one area where CodeRabbit wins: it has a native VS Code and Cursor plugin that surfaces feedback before you push. Kodus doesn’t have that [2].
On pricing: CodeRabbit is $24–$30/user/month. Kodus Teams is $10/user/month, and the LLM inference, which you pay for directly at cost, typically runs well under $5/user/month at moderate PR volume. For a team of 10, that’s roughly $240–$300/month on CodeRabbit versus $100/month on Kodus (subscription) plus direct LLM costs [2][README].
Versus Claude Code. This comparison is slightly unusual because Claude is primarily an IDE coding assistant, not a dedicated PR reviewer. But Kodus published it [3], and it reveals something useful: teams already paying $17–$25/user/month for Claude subscriptions are the exact audience Kodus wants. The pitch is that Kodus gives you structured, rule-driven PR governance — per-file rules, metrics dashboards, tech debt tracking — that Claude Code’s single CLAUDE.md approach doesn’t offer. Claude wins on IDE integration; Kodus wins on PR workflow structure and multi-model flexibility [3].
On cost transparency. Multiple customer quotes on the Kodus site emphasize the same point: the ability to bring your own LLM key and see exactly what you’re spending. One team at Ikatec notes “the best part is that we can tailor how it works for each project” [1] — the model choice and rule configuration are the specific things they’re calling out. Pedro Maia at Notificações Inteligentes frames it differently: “Kodus stepped in as our senior reviewer that never forgets anything. It doesn’t replace human review, but it’s now a required step” [1]. That’s a different value proposition than “save money on reviews” — it’s about consistency and institutional memory.
On real-world time savings. The numbers cited across customer testimonials are consistently meaningful. Brendi reported going from 125 hours per week on reviews to 40 — a 70% reduction [1]. Conta Voltz reported 40% less review time and half as many production bugs [homepage]. Pilar reported review time dropping “from hours to minutes” [homepage]. These are customer-sourced numbers, not third-party measurements, so read them accordingly. But the direction and magnitude are consistent enough to be credible.
Features
Core review engine:
- Automated PR reviews on GitHub, GitLab, Bitbucket, and Azure Repos [README]
- Inline comments with specific code suggestions [2]
- PR summary / walkthrough generation [2]
- Chat with the PR bot to ask follow-up questions [2]
- Noise filters — limit the number of suggestions per review and filter by severity threshold [2]
- Auto-pause behavior when review volume spikes unexpectedly [4]
Rules and customization:
- Version-controlled review rules scoped to repo, folder, file, or PR type [2]
- Natural language rule authoring — define standards in plain English [homepage]
- One-click community rule library for common patterns [2]
- Rule sync — auto-detects and imports rules from Cursor, Copilot, Claude, and Windsurf config files [homepage][4]
- “Kody Memories” — Kody learns team-specific coding conventions through conversation [4]
- Community edition: up to 10 rules. Teams: unlimited [README]
Context and integrations:
- MCP plugin support — Jira, Notion, Linear, CI results, Playwright tests, custom scripts [homepage][2]
- Community edition: up to 3 active plugins. Teams: unlimited [README]
- Business logic validation — compare PR diffs against Jira acceptance criteria, Linear issues, Google Docs specs [4]
Engineering metrics:
- Cockpit dashboard with deploy frequency, cycle time, bug ratio, PR sizes [homepage]
- Kody Issues — automatically creates a backlog from unresolved review suggestions, turning review comments into trackable technical debt [homepage][2]
- Teams tier only; not available in Community [README]
CLI:
- `kodus review` — reviews working tree changes from the terminal [5]
- `--staged`, `--branch`, `--fix` flags for scoped or auto-fix mode [5]
- `--prompt-only` flag for feeding structured output to Claude Code, Cursor, or Windsurf [5]
- Pre-push hooks to block pushes above a severity threshold [5]
- Install as a review skill for AI coding agents: `curl -fsSL https://review-skill.com/install | bash` [5]
- Trial limits (no account): 5 reviews/day, 10 files/review. Authenticated: per-plan limits [5]
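Put together, the CLI surface looks like this. Flag names come from the CLI docs [5]; exact argument shapes are assumptions:

```shell
# Review uncommitted working-tree changes from the terminal
kodus review

# Scope the review: staged files only, or the diff against a branch
kodus review --staged
kodus review --branch main        # branch argument shape is an assumption

# Let Kody apply its own suggestions
kodus review --fix

# Emit structured output for Claude Code, Cursor, or Windsurf to act on
kodus review --prompt-only

# One-line install as a review skill for AI coding agents
curl -fsSL https://review-skill.com/install | bash
```

The `--prompt-only` path is the interesting one for agent workflows: the review becomes input to a fix loop rather than a comment thread.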
Security and privacy:
- Source code never stored, never used to train models [README]
- Data encrypted in transit and at rest [README]
- Self-hosted runners supported for air-gapped environments [README]
- SOC 2 compliance: in progress for Enterprise tier [README]
Pricing: SaaS vs self-hosted math
Kodus Community (free, self-hosted or cloud):
- $0 [README]
- Bring your own API key (BYOK) required
- Unlimited PRs using your own key
- Up to 10 Kody Rules, up to 3 active plugins
- Kody Learnings and Memory included
- Quality Radar: unlimited issues
- No engineering metrics dashboard, no priority queue
Kodus Teams:
- $10/developer/month ($8/dev/month billed annually) [README]
- Plus LLM token costs you pay directly to your model provider
- Unlimited rules, unlimited plugins
- Engineering Metrics / Cockpit included
- Priority queue for Kody Agents
- No SSO (that’s Enterprise-only)
- Hosted by Kodus; BYOK required
Kodus Enterprise:
- Custom pricing [README]
- Self-hosted or hosted by Kodus
- Kodus AI Tokens API key (no BYOK) — they manage the model
- SSO, RBAC, audit logs, analytics
- SOC 2 compliance (in progress)
- Private Discord + dedicated onboarding support
CodeRabbit for comparison:
- Pro tier: $19/month per user (billed annually) or $24/month monthly
- Teams / Enterprise: $30/user/month and up
Claude Code for comparison:
- Max subscription: $100/month (individual) — covers Claude Max usage across coding tools [3]
- Teams pricing: $17–$25/user/month as described in the Kodus comparison [3]
Concrete math for a 10-person engineering team:
On CodeRabbit at $24/user: $240/month, LLM costs covered.
On Kodus Teams: $100/month subscription + LLM costs. At moderate PR volume (say 5 PRs/developer/week, ~200 PRs/month), Claude Haiku or Gemini Flash at ~$0.50–$1.00 per full review runs another $100–$200/month in direct API costs. Total: $200–$300/month, depending on PR volume and model choice.
On Kodus Community (self-hosted): $0/month subscription + LLM costs + server. A $6 Hetzner VPS covers the infrastructure. LLM costs are the same as above. For a team that can handle Docker deployment, this brings the entire review infrastructure to ~$110–$210/month — comfortably under CodeRabbit’s $240, with no markup, full model choice, and no data leaving your infrastructure.
The math favors Kodus most aggressively if you’re already paying for LLM API access for other things (coding assistants, internal tools), making the marginal token cost of PR reviews very low.
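The arithmetic above can be sketched as a quick script. The big variable is per-review LLM cost — assumed here at $0.75, the midpoint of the article's $0.50–$1.00 estimate for a cheap model:

```shell
#!/bin/sh
# Rough monthly cost comparison for a 10-dev team. All figures in US cents.
DEVS=10
CODERABBIT_SEAT=2400                  # $24/user/month (Pro, monthly billing)
KODUS_SEAT=1000                       # $10/user/month (Teams tier)
PRS_PER_MONTH=$((DEVS * 5 * 4))       # ~5 PRs/dev/week -> 200 PRs/month
LLM_PER_REVIEW=75                     # assumed ~$0.75 per full review

coderabbit=$((CODERABBIT_SEAT * DEVS))
kodus=$((KODUS_SEAT * DEVS + PRS_PER_MONTH * LLM_PER_REVIEW))

echo "CodeRabbit: \$$((coderabbit / 100))/month"
echo "Kodus Teams + BYOK: \$$((kodus / 100))/month"
```

At these assumptions the totals land at $240 versus $250 — close to parity on raw dollars, which is why the marginal-cost argument (keys you already pay for) is what actually tips the math.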
Deployment reality check
The self-hosted path is Docker-based. The README points to a generic VM guide and a Railway one-click template — and the Railway deploy is genuinely one click if you already have a Railway account [README]. For a traditional VPS:
What you actually need:
- Linux VPS with at least 2GB RAM
- Docker and docker-compose
- PostgreSQL and Redis (bundled in default docker-compose or external)
- HTTPS domain and reverse proxy for webhook callbacks from GitHub/GitLab — GitHub won’t send webhooks to plain HTTP
- API keys for at least one LLM provider
- A GitHub App or GitLab OAuth app configured to enable PR webhooks
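The checklist above maps to a short bootstrap sequence. This is a sketch under assumptions — the `.env.example` filename and the exposed port are illustrative, not confirmed; follow the Kodus docs [4] for the real values:

```shell
# Rough self-host flow on a fresh VPS (names here are illustrative)
git clone https://github.com/kodustech/kodus-ai && cd kodus-ai
cp .env.example .env     # add LLM keys, GitHub App credentials, your domain
docker compose up -d     # brings up api, webhooks, worker, Postgres, Redis

# Put HTTPS in front — GitHub won't deliver webhooks to plain HTTP.
# One option is Caddy, which handles certificates automatically:
caddy reverse-proxy --from kodus.example.com --to localhost:3000
```

The reverse-proxy step is the one that trips people up, because it has to be in place before the GitHub App's webhook URL will validate.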
What can go sideways:
- The webhook setup is the friction point. Connecting GitHub to a self-hosted instance requires creating a GitHub App, configuring callback URLs, and handling SSL correctly [4]. The docs cover this, but it’s not a five-minute job.
- The CLI trial limits are real: 5 reviews/day and 10 files per review without an account [5]. You’ll hit this fast if you’re evaluating it on an active codebase.
- MCP plugin integrations (Jira, Linear, Notion) require separate configuration steps. The docs exist [4] but there are several of them.
- SSO is Enterprise-only. If your team requires SAML or centralized auth, you’re looking at custom pricing [README].
The Kodus docs [4] include a full knowledge base covering GitHub, GitLab, Azure DevOps setup, webhook troubleshooting, and LLM provider configuration (including Novita, Groq, Together AI, Fireworks AI — not just the big three). That’s a good sign for the breadth of provider support.
Realistic time estimate for a technical user: 1–2 hours to a working self-hosted instance with GitHub webhooks. For configuring custom rules, MCP plugins, and getting your team onboarded: budget another half-day.
Pros and Cons
Pros
- Model agnostic, zero markup. You bring any API key — Claude, GPT, Gemini, Llama, or any OpenAI-compatible endpoint — and pay the model provider directly. No hidden multiplier, no lock-in [README]. This is the single most differentiating feature.
- Granular rule scoping. Rules apply at repo, folder, file, or PR level — version-controlled alongside code [2]. Not a single YAML for the whole repo.
- MCP context injection. Pull Jira tickets, Notion docs, CI results, or Playwright test output directly into the PR review comment [homepage][2]. Reviewers see implementation against requirements, not just diffs.
- Automatic tech debt backlog. Unresolved review suggestions become trackable issues automatically [homepage]. Suggestions don’t die in the PR thread.
- Rule import from existing tools. Auto-detects Cursor, Copilot, Claude, and Windsurf rule files — no rewriting standards you’ve already codified [homepage].
- CLI with AI agent integration. The `--prompt-only` flag feeds structured review output to Claude Code, Cursor, or Windsurf for automated fix loops [5]. The install-as-skill curl command is a genuinely useful shortcut for teams already using coding agents.
- Bug catch rate in vendor benchmark. 79% vs CodeRabbit’s 39% across 38 real bugs from open-source projects [2]. Methodology is public and inspectable.
- All four major platforms. GitHub, GitLab, Bitbucket, Azure Repos supported [README].
Cons
- Only 1,010 GitHub stars. This is a young project. Less community knowledge, fewer third-party guides, less certainty about long-term maintenance [README]. CodeRabbit and GitHub Copilot have years of production use behind them.
- Community edition is genuinely limited. 10 rules and 3 plugins is restrictive for any real engineering team. Engineering metrics and priority queue are paywalled behind Teams [README].
- No native IDE plugin. CodeRabbit has VS Code and Cursor extensions that catch issues before you push. Kodus has a CLI, which covers the same ground technically but requires deliberate invocation [2].
- SSO and RBAC are Enterprise-only. No SSO on Teams tier. If your compliance or IT requirements mandate centralized auth, you’re on custom pricing [README].
- SOC 2 not complete. Listed as “in progress” on the Enterprise tier [README]. If you’re evaluating this for a compliance-sensitive environment, that’s not there yet.
- BYOK required for Community and Teams. This is a feature for teams that already have LLM API access. It’s friction for teams that don’t — you need to set up API keys, manage rate limits, and monitor your own spend [README].
- The bug-catch comparison is vendor-produced. The Kodus vs. CodeRabbit benchmark [2] was published by Kodus. The methodology looks sound and the PRs are real, but treat it as supporting evidence rather than independent proof.
Who should use this / who shouldn’t
Use Kodus if:
- Your team is paying $20–$30/user/month for CodeRabbit and the model lock-in bothers you — you want to be able to switch to a cheaper or better model as the LLM market moves.
- You already have LLM API access (Claude, OpenAI, Gemini) for other internal tools and want to route PR reviews through the same keys without markup.
- You need PR review rules that apply to specific directories or file types, not the whole repo.
- You want to validate PRs against Jira tickets or spec documents, not just check code quality in isolation.
- Your team is using Claude Code, Cursor, or Windsurf and you want CLI-level review integrated into those flows.
Skip it (use CodeRabbit) if:
- You want a plug-and-forget setup with zero key management — CodeRabbit handles the LLM side entirely.
- You want IDE-native feedback before you even open a PR.
- You need a large community of guides, blog posts, and StackOverflow answers for troubleshooting.
- Your team is non-technical and the BYOK requirement feels like added complexity with no visible benefit.
Skip it (use GitHub Copilot’s review features) if:
- Your company is already standardized on Microsoft/GitHub infrastructure and consolidating vendors matters.
- You want code review integrated with Copilot Chat in the IDE rather than as a separate service.
Skip it (wait six months) if:
- You’re evaluating this for a SOC 2-required environment — that certification is in progress but not done [README].
- You need SAML SSO without paying Enterprise prices.
Alternatives worth considering
The Kodus profile lists CodeRabbit as the primary alternative, but the realistic comparison set is wider:
- CodeRabbit — the direct competitor. More mature, larger community, native IDE plugin, single-provider LLM, $24–$30/user. Choose CodeRabbit if you want a proven tool with less configuration overhead [2].
- GitHub Copilot (review features) — built into GitHub, no separate setup, progressively better PR summarization. Locked to GitHub infra and Microsoft’s model choices.
- Claude Code — excellent IDE coding assistant, PR workflow governance is secondary. Best if your team primarily wants in-IDE help rather than PR review governance [3].
- Cursor / Windsurf — IDE-first, not PR-flow tools. Kodus CLI integrates with both as a review skill [5].
- Sourcery — Python-focused code review automation, narrower scope.
- Reviewpad — workflow automation and code review rules via YAML, less AI-native.
- SonarQube — static analysis focused on security and code smells, not AI-synthesized review feedback. Self-hosted, mature, large enterprise install base.
For a team specifically looking to escape CodeRabbit pricing: the realistic shortlist is Kodus vs. staying on CodeRabbit. Pick Kodus if BYOK control and rule granularity matter. Stay on CodeRabbit if setup simplicity and IDE integration matter more.
Bottom line
Kodus is the most technically differentiated pitch in the AI code review space right now, and that differentiation is specific: you pick the model, you pay the model provider at cost, and you write rules that actually match your project structure. The trade-offs are real — it’s a young project with 1,010 stars, the Community tier is limited enough to push real teams toward paid plans, and there’s no IDE plugin or completed SOC 2. But for engineering teams that are already running LLM infrastructure and paying CodeRabbit markup on top of that, the math is hard to ignore. $10/user plus direct API costs versus $24–$30/user with no model choice isn’t a close call once the configuration cost is one afternoon.
If that afternoon of Docker and webhook setup is the blocker, that’s exactly what upready.dev deploys for clients. One-time fee, done, you own the stack.
Sources
- [1] Kodus — Customer Stories (case studies and testimonials). https://kodus.io/customers/
- [2] Kodus — Kodus vs. CodeRabbit (feature matrix, bug-catch benchmark, pricing comparison). https://kodus.io/kodus-vs-coderabbit/
- [3] Kodus — Kodus vs. Claude (feature matrix, pricing comparison, positioning). https://kodus.io/kodus-vs-claude/
- [4] Kodus Docs — Knowledge Base (setup guides, integration docs, troubleshooting). https://docs.kodus.io/knowledge_base/en/introduction
- [5] Kodus Docs — CLI Overview (CLI usage, limits, environment variables, AI agent integration). https://docs.kodus.io/how_to_use/en/cli/overview
Primary sources:
- GitHub repository and README: https://github.com/kodustech/kodus-ai (1,010 stars, AGPL v3 license)
- Official website: https://kodus.io/en
- Pricing page: https://kodus.io/pricing
- Documentation: https://docs.kodus.io