unsubbed.co

GrowthBook

GrowthBook is a TypeScript-based application that provides a powerful, developer-friendly experimentation toolkit for data-driven product decisions.

Open-source feature flagging and A/B testing, honestly reviewed. No marketing fluff, just what you get when you self-host it.

TL;DR

  • What it is: Open-core feature flagging and A/B testing platform — the self-hosted alternative to LaunchDarkly and Optimizely, where your experiment data stays in your own warehouse [README][4].
  • Who it’s for: Product and engineering teams paying for LaunchDarkly, Optimizely, or similar per-seat/per-traffic SaaS tools who want to stop the bill from growing with every new experiment. Also data teams who want SQL-level transparency into how experiment results are calculated [2][3].
  • Cost savings: Optimizely uses traffic-based pricing that scales against you; GrowthBook claims you can “run 5x more experiments at 1/5th the cost” [3]. Self-hosted runs on a $5–10/mo VPS with unlimited flags and unlimited traffic.
  • Key strength: Warehouse-native architecture — GrowthBook doesn’t capture events itself, it queries your existing data warehouse (BigQuery, Snowflake, Databricks, and 8 others). Zero data duplication. Full SQL transparency [README][2].
  • Key weakness: Open-core licensing means the most interesting enterprise features (prerequisite flags at the rule level, code references, versioned metrics) are Pro or Enterprise-only, not MIT [6]. For a non-technical founder just wanting to ship safely, this matters less than it sounds.

What is GrowthBook

GrowthBook is a feature flagging and A/B testing platform. You define feature flags in the UI, ship them to your app via one of 24 SDKs, and then run experiments by routing users into variant groups and measuring the impact against metrics you define in SQL. The company’s own GitHub description puts it plainly: “Open Source Feature Flags, Experimentation, and Product Analytics” [README].

The platform’s central architectural bet is warehouse-native experimentation. Most A/B testing tools (Optimizely, PostHog, LaunchDarkly) require you to funnel event data through their own infrastructure. GrowthBook takes the opposite approach: it connects directly to your existing data warehouse and runs analysis SQL against the data you already have there. This means you don’t duplicate data, you don’t send customer behavior to a third-party server, and you get full visibility into every query the platform runs [README][2].
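The analysis a warehouse-native tool generates is ordinary SQL: join experiment exposure rows (who saw which variant) to conversion events, then aggregate per variant. Here is a minimal in-memory analog of that join-and-aggregate step — a conceptual sketch with illustrative field names, not GrowthBook's actual schema or generated query:

```typescript
// In-memory analog of the join-and-aggregate a warehouse-native
// analysis query performs: join exposures to conversion events,
// then compute a conversion rate per variant.
// Field names are illustrative, not GrowthBook's schema.

type Exposure = { userId: string; variant: string };
type Conversion = { userId: string };

function conversionRates(
  exposures: Exposure[],
  conversions: Conversion[],
): Record<string, number> {
  const converted = new Set(conversions.map((c) => c.userId));
  const seen: Record<string, number> = {};
  const hits: Record<string, number> = {};
  for (const e of exposures) {
    seen[e.variant] = (seen[e.variant] ?? 0) + 1;
    if (converted.has(e.userId)) hits[e.variant] = (hits[e.variant] ?? 0) + 1;
  }
  const rates: Record<string, number> = {};
  for (const variant of Object.keys(seen)) {
    rates[variant] = (hits[variant] ?? 0) / seen[variant];
  }
  return rates;
}
```

The point of the architecture is that this join runs inside your warehouse against tables you already own, and every generated query is inspectable.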

The project is positioned explicitly as the open-source alternative to LaunchDarkly — Product Hunt’s tagline for the listing is “The open-source LaunchDarkly alternative” [5]. It’s backed by a real company, is SOC 2 Type II certified, and GDPR compliant. As of this review it has 7,600+ GitHub stars [4].

Licensing is open-core, not pure MIT. The bulk of the code is MIT-licensed, but several directories governing enterprise features operate under the GrowthBook Enterprise License. The README is explicit about this: “GrowthBook is an Open Core product. The bulk of the code is under the permissive MIT license. There are several directories that are governed under a separate commercial license, the GrowthBook Enterprise License.” [README]. If you need to know exactly which features are MIT and which aren’t, read the LICENSE file before committing to a self-hosted deployment.


Why people choose it over LaunchDarkly, Optimizely, and PostHog

The comparison pages and Product Hunt reviews point to three consistent reasons teams choose GrowthBook.

The warehouse-native argument. PostHog requires you to send product events into PostHog’s infrastructure to measure experiments — teams often end up duplicating data between PostHog and their own warehouse, and both the infrastructure load and the bill grow with event volume [2]. GrowthBook flips this. You write SQL metric definitions against data you already own in BigQuery, Snowflake, or Databricks. One Product Hunt reviewer put it plainly: “It’s also very transparent about how the statistics work & it’s easy to troubleshoot because the queries they run to fetch & transform data are readily available.” [5]. For data teams that have spent months cleaning warehouse data, this is a significant practical advantage.

Versus Optimizely. This is the strongest cost-reduction case. Optimizely is marketed at CRO teams and uses traffic-based pricing, meaning the more traffic you run experiments against, the more you pay — which creates a perverse incentive to run fewer experiments. GrowthBook uses per-seat pricing with unlimited experiments and unlimited traffic, so the bill doesn’t scale against your experimentation volume [3]. Optimizely also requires weeks to months of setup and a dedicated team to operate; GrowthBook targets hours of setup [3]. The practical translation: if your Optimizely bill feels disproportionate to how often you actually ship experiments, GrowthBook is worth pricing out.

Versus PostHog. PostHog is an analytics platform with A/B testing bolted on. GrowthBook is an experimentation platform with product analytics bolted on — the product philosophies are different. GrowthBook supports Bayesian, frequentist, and sequential statistics with CUPED and post-stratification; PostHog’s stats are simpler and lack built-in SRM (Sample Ratio Mismatch) checks [2]. If you’re running rigorous experiments that your data science team needs to trust, the stats engine difference matters. If you just want to know whether the red button converts better, PostHog’s simpler setup wins.

Versus LaunchDarkly. LaunchDarkly is the enterprise incumbent for feature flagging. It’s polished, has a broad SDK library, and every operations runbook for flags-in-production references it. GrowthBook’s competitive pitch is that you shouldn’t need to pay LaunchDarkly per-seat rates to get a capable feature flagging system, and that you shouldn’t have to send your flag evaluation data to their servers [5][README]. The trade-off is operational maturity: LaunchDarkly has more enterprise integrations and a longer track record in high-stakes production environments.


Features

Based on the README, website, and blog posts:

Feature flags:

  • Targeting rules with any attribute (user ID, location, postal code, URL path, custom properties) [README][2]
  • Gradual percentage rollouts [README]
  • Linked experiments — run an experiment directly from a feature flag [README]
  • Stale flag detection (introduced in v2.6); Code References (v2.8) show exactly where each flag is used in your codebase via a CI pipeline job [6]
  • Prerequisite flags — flag A only evaluates when flag B is true, with complex dependency trees [6]
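Gradual percentage rollouts generally work by hashing a stable user identifier together with the flag key into a fixed bucket, then including the user when that bucket falls below the rollout fraction. The sketch below illustrates the general technique — it is not GrowthBook's exact hashing algorithm, and the function names are hypothetical:

```typescript
// Deterministic percentage rollout sketch: hash userId + flag key to a
// stable bucket in [0, 1), include the user if bucket < rollout fraction.
// Conceptual illustration only, not GrowthBook's actual algorithm.

function fnv1a32(input: string): number {
  // FNV-1a 32-bit hash over the UTF-16 code units of the input
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

function inRollout(userId: string, flagKey: string, fraction: number): boolean {
  // Same user + flag always lands in the same bucket, so results are sticky
  const bucket = fnv1a32(`${flagKey}:${userId}`) / 0x100000000;
  return bucket < fraction;
}
```

Because the bucket is fixed per user and flag, raising the fraction from 10% to 50% only adds users to the rollout — nobody who already has the feature loses it.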

Experimentation:

  • A/B, multivariate, redirect, and visual editor experiments [2][3]
  • Full-stack coverage: server-side, client-side, mobile, and edge [3]
  • Stats engines: Bayesian, frequentist, sequential — your choice. CUPED for faster experiments, post-stratification, Bonferroni/Benjamini-Hochberg corrections for multiple metrics, SRM checks [README][2]
  • Holdout experiments [3]
  • Bandits (multi-armed) [README]
  • Results documentation with screenshots and Markdown [README]
  • Automated email alerts when tests reach significance [4]
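An SRM check is conceptually simple: with an intended 50/50 split, test the observed arm counts with a one-degree-of-freedom chi-square test and flag the experiment if the imbalance is statistically implausible. A sketch of that check — function names and the p < 0.05 threshold are my illustration, not necessarily what GrowthBook's built-in check uses:

```typescript
// Sample Ratio Mismatch (SRM) check sketch for a two-arm test with an
// intended 50/50 split: chi-square test on observed arm counts.
// Conceptual illustration; GrowthBook's check may differ in details.

function srmChiSquare(countA: number, countB: number): number {
  const expected = (countA + countB) / 2; // expected per arm under 50/50
  const dA = countA - expected;
  const dB = countB - expected;
  return (dA * dA) / expected + (dB * dB) / expected;
}

function hasSrm(countA: number, countB: number): boolean {
  // 3.841 is the chi-square critical value for p < 0.05 with df = 1
  return srmChiSquare(countA, countB) > 3.841;
}
```

A failing SRM check means the assignment itself is broken (bot traffic, a redirect bug, a caching layer), so the experiment's results can't be trusted no matter what the metrics say — which is why its absence in simpler tools matters.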

Data and metrics:

  • Warehouse-native: connects to BigQuery, Snowflake, Databricks, Redshift, Mixpanel, Google Analytics, and others (11 total) [README][4]
  • SQL-backed metric definitions — conversion rates, ratios, quantiles [README]
  • Official Metrics workflow (v2.8): store metric definitions in version control (GitHub), sync to GrowthBook, mark as verified with a badge that prevents in-UI edits [6]
  • Built-in product analytics suite for dashboards [README]

SDK performance:

  • 24 SDKs: JavaScript, React, Node.js, Python, Ruby, Go, PHP, Java, Swift, Kotlin, and more [README][2][3]
  • JS SDK is 9kb — stated as “half the size of our closest competitors” [website]
  • Local evaluation (no network requests required at runtime) — flags are evaluated client-side against a downloaded ruleset, not via an API call per flag check [website]
  • The homepage reports 100 billion+ feature flag lookups per day [website]
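Local evaluation means the SDK downloads the ruleset once and every flag check afterward is a pure in-memory lookup. The sketch below shows the shape of that pattern — the rule format and class are hypothetical illustrations, not GrowthBook's actual SDK API or payload format:

```typescript
// Local flag evaluation sketch: fetch the ruleset once at startup,
// then every isOn() call is an in-memory lookup with no network
// round trip. Rule shape is hypothetical, not GrowthBook's payload.

type Rule = { attribute: string; equals: string; serve: boolean };
type Ruleset = Record<string, { defaultValue: boolean; rules: Rule[] }>;

class LocalFlags {
  constructor(
    private ruleset: Ruleset, // downloaded once, e.g. at app startup
    private attributes: Record<string, string>, // e.g. { country: "DE" }
  ) {}

  isOn(flagKey: string): boolean {
    const flag = this.ruleset[flagKey];
    if (!flag) return false; // unknown flags default to off
    for (const rule of flag.rules) {
      // First matching targeting rule wins
      if (this.attributes[rule.attribute] === rule.equals) return rule.serve;
    }
    return flag.defaultValue;
  }
}
```

The performance consequence is that a page with fifty flag checks costs one ruleset fetch, not fifty API calls — which is what makes the zero-runtime-network-request claim plausible.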

Developer tooling:

  • Chrome debugger extension [2][3]
  • Visual Editor for no-code website A/B tests [website][3]
  • Full REST API and webhooks [README]
  • MCP server — create features, start experiments, clean up stale flags from Claude Desktop or Cursor [README]

Security and compliance:

  • SOC 2 Type II certified [1][website]
  • GDPR compliant [1][website]
  • No customer data ever transmitted to GrowthBook’s servers in self-hosted mode [1]
  • Data at rest and in transit encrypted on cloud [1]
  • Bug bounty program [1]

Pricing: SaaS vs self-hosted math

GrowthBook Cloud:

  • Free: unlimited feature flags, unlimited traffic, unlimited experiments — no credit card required [website]
  • Pro: approximately $20/month per seat with advanced features [4]
  • Enterprise: custom pricing, contact sales

The meaningful structural difference from competitors: GrowthBook charges per seat, not per traffic or per event. Optimizely’s traffic-based model means more users in your experiments = bigger bill. GrowthBook’s bill stays flat as you scale experimentation volume [3].

Self-hosted:

  • Software license (MIT core): $0
  • VPS to run it: $5–10/month (Hetzner, Contabo, DigitalOcean)
  • Docker Compose install, 30 minutes to a running instance

Comparison with Optimizely: GrowthBook’s own positioning claims 1/5th the cost of Optimizely [3]. Optimizely doesn’t publish pricing publicly (contact sales), which is itself a signal about who their target buyer is. For any team running experiments at meaningful traffic scale, the delta is substantial.

What’s gated: The free tier is genuinely useful — unlimited flags and experiments with basic targeting. What you lose on free: some advanced targeting rules, prerequisite flags at the rule/experiment level, Code References, the Official Metrics workflow, advanced RBAC, and anything listed as Pro or Enterprise in the docs [6]. A solo founder or small product team will likely hit these limits as they grow.


Deployment reality check

The README install path is two commands: git clone and docker compose up -d. Visit http://localhost:3000. The docs page for self-hosting is at docs.growthbook.io/self-host. For a technical user, this is genuinely fast.
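For reference, the quick-start as the README describes it — clone the repository and bring the stack up with Docker Compose (commands reproduced from the project's documentation as of this review):

```shell
# Clone the GrowthBook repository and start the bundled stack
# (app + MongoDB) via Docker Compose
git clone https://github.com/growthbook/growthbook.git
cd growthbook
docker compose up -d
# Then open http://localhost:3000 to finish setup in the browser
```

This gets you feature flags immediately; experiment analysis still needs the warehouse connection described below.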

What you actually need:

  • A Linux VPS (2–4GB RAM for basic usage)
  • Docker and docker-compose
  • A reverse proxy with HTTPS (Caddy or nginx) if you’re not running locally
  • A data warehouse connection for experimentation analysis (BigQuery, Snowflake, etc.) — GrowthBook won’t capture events for you
  • MongoDB (bundled in the default docker-compose) for GrowthBook’s own metadata

The warehouse dependency is the real setup cost. Feature flags work out of the box — you can ship flags and do gradual rollouts without any warehouse. But A/B test analysis requires that you have a warehouse with event data in it. If you’re currently piping events nowhere, you’ll need to set up something like Segment → BigQuery or self-hosted Rudderstack before GrowthBook’s experiment analysis becomes useful. This isn’t a knock against GrowthBook — it’s the honest trade-off of the warehouse-native architecture — but non-technical founders should budget for it.

What can go sideways:

  • The Visual Editor for no-code website tests requires a browser extension and works best on simple static sites. Complex single-page apps with heavy JavaScript frameworks need the SDK approach instead [website].
  • Prerequisite flags with non-deterministic values (e.g., when the prerequisite depends on an experiment result) currently only work in the latest JavaScript and React SDKs — other SDK versions require workarounds [6].
  • Product Hunt reviews are positive but the sample is small (6 reviews) [5]. AlternativeTo has minimal user reviews. There’s no large Trustpilot corpus here, which means independent third-party sentiment data is thin.

Realistic time for a technical user with an existing warehouse: 1–2 hours to flags working in production, another day or two to get experiment analysis connected and running. For a non-technical founder without an existing warehouse: this is not a weekend project. Either hire a technical person for the setup or use GrowthBook Cloud’s free tier.


Pros and Cons

Pros

  • Warehouse-native architecture. Zero data duplication. Full SQL transparency into every metric calculation. If your data team already trusts your warehouse, they’ll trust GrowthBook’s results [README][2].
  • Genuinely unlimited on the free tier. Unlimited flags, unlimited traffic, unlimited experiments — not an “up to 10 flags” free tier. For early-stage teams, this is real value [website].
  • Serious stats engine. Bayesian, frequentist, sequential, CUPED, SRM checks, multiple-correction methods. This is the feature set of tools that cost tens of thousands of dollars per year, not $20/seat [README][2].
  • 9kb JS SDK with local evaluation. No network requests at flag evaluation time. This matters for performance-sensitive applications [website].
  • 24 SDKs. React, Python, Android, iOS, Go, Ruby, PHP, Java — the breadth is comparable to LaunchDarkly [README].
  • SOC 2 Type II + GDPR. Compliance coverage you’d otherwise pay for separately [1][website].
  • MCP server. Create and manage flags from Claude Desktop or Cursor — niche, but useful for developer-heavy teams already in AI-assisted workflows [README].
  • Trusted by real companies. Dropbox, Sony, Wikipedia, Pepsi, Character.ai are on the homepage [website]. These aren’t startups making a risky bet.

Cons

  • Open-core, not pure MIT. The license is tiered. Enterprise features — including some you’ll want as you scale (advanced RBAC, code references, versioned metrics) — require Pro or Enterprise licensing [6][README]. Read the LICENSE file carefully before building internal tooling on top of it.
  • Warehouse-native means warehouse required. Feature flags work without it. Experiment analysis doesn’t. If you don’t have a data warehouse, you’re buying into a dependency [README][2].
  • Thin independent review coverage. The third-party review corpus is sparse compared to tools like n8n or Zapier — mostly official comparison pages and a handful of Product Hunt reviews. It’s harder to calibrate real-world pain points from the outside.
  • Enterprise features gated. Prerequisite flags at the rule/experiment level, Code References, Official Metrics, and fine-grained RBAC are Pro/Enterprise-only [6]. The free/MIT tier is real but you’ll eventually hit the ceiling.
  • Visual Editor limitations. No-code website A/B tests work best on simpler sites. Complex SPAs need engineering involvement [website].
  • SDK parity gaps. Non-deterministic prerequisite flag scenarios (where the prerequisite depends on an experiment) only work in JavaScript/React SDKs currently — other SDKs need workarounds [6].

Who should use this / who shouldn’t

Use GrowthBook if:

  • You’re a product or engineering team currently paying for LaunchDarkly or Optimizely and want per-seat pricing that doesn’t scale against your experimentation volume.
  • You already have a data warehouse (BigQuery, Snowflake, Redshift) and want your experiment analysis to run against that data instead of duplicating it to a third-party platform.
  • Your data science team wants SQL-level visibility and control over metric definitions.
  • You need a serious stats engine (CUPED, Bayesian/frequentist/sequential) and don’t want to pay Optimizely rates to get it.
  • You want feature flags with zero runtime network requests — performance-sensitive apps, mobile, edge.

Skip it (stay on PostHog) if:

  • You don’t have a data warehouse and don’t want one. PostHog captures events itself; GrowthBook doesn’t.
  • You want one tool for both analytics and basic A/B testing, and you’re fine with simpler stats.
  • You’re a small team that values a unified product analytics + experimentation surface more than statistical depth.

Skip it (use LaunchDarkly) if:

  • You’re at a larger enterprise where every vendor needs an established support SLA and audit trail.
  • Your DevOps team has already built runbooks and integrations around LaunchDarkly.
  • You need the broadest possible enterprise integration surface.

Skip it entirely if:

  • You’re a non-technical founder with no existing data warehouse and no technical person available. The warehouse dependency and Docker setup will block you before you get value from the product.
  • You’re only interested in simple on/off feature toggles with no analytics. Something like Unleash or a simpler open-source flag system will be less overhead.

Alternatives worth considering

  • LaunchDarkly — the enterprise incumbent GrowthBook explicitly targets. More mature enterprise integrations, larger ecosystem, no warehouse dependency, fully closed SaaS [5][README].
  • PostHog — open-source product analytics with A/B testing. Captures its own events, simpler stats engine, better fit if you want analytics + testing in one tool without a warehouse [2].
  • Optimizely — strong for marketing/CRO teams doing UI and content testing. Traffic-based pricing scales badly, cloud-only, weeks-to-months setup [3].
  • DevCycle — developer-focused feature flagging, OpenFeature-native, listed as an alternative on AlternativeTo [4].
  • Unleash — pure open-source feature toggle platform, simpler and more focused than GrowthBook, no built-in A/B analysis.
  • Flagsmith — another open-source feature flag option, more focused on flags than experimentation.
  • Split.io — mid-market feature flagging and experimentation, SaaS-only.

The realistic shortlist for a product team considering GrowthBook: GrowthBook vs PostHog if you don’t have a warehouse; GrowthBook vs LaunchDarkly if you do and you’re trying to reduce per-seat costs.


Bottom line

GrowthBook is the most technically credible open-source answer to the question “why am I paying LaunchDarkly or Optimizely this much?” The warehouse-native architecture is a genuine differentiator — not a marketing angle. It means your data scientists trust the results, your compliance team isn’t arguing about data residency, and you’re not paying event-volume fees to a third party for analysis you could run in SQL. The stats engine is legitimately advanced for the price tier, and the SDK performance story (9kb JS, zero runtime network requests, 100B+ evaluations per day) holds up under scrutiny. The honest limitations are the open-core licensing model and the warehouse dependency — you’re not getting the full platform for free, and you’re not getting experiment analysis without infrastructure investment. But for product teams already running a warehouse and tired of watching their LaunchDarkly or Optimizely bill grow, the math is straightforward: self-host on a $10 VPS, keep your data where it already lives, and stop paying per seat for features you’ve outgrown on the SaaS tier.

If the warehouse setup or Docker deployment is the blocker, that’s what upready.dev deploys for clients. One engagement, done, you own the stack.


Sources

  1. GrowthBook Security Page — growthbook.io. https://www.growthbook.io/security
  2. GrowthBook vs PostHog Comparison — growthbook.io. https://www.growthbook.io/compare/growthbook-vs-posthog
  3. GrowthBook vs Optimizely Comparison — growthbook.io. https://www.growthbook.io/compare/growthbook-vs-optimizely
  4. GrowthBook — AlternativeTo listing (7,666 stars, 24 alternatives) — alternativeto.net. https://alternativeto.net/software/growth-book/about/
  5. GrowthBook: The open-source LaunchDarkly alternative — Product Hunt (6 reviews, 5.0/5.0) — producthunt.com. https://www.producthunt.com/products/growthbook
  6. GrowthBook Version 2.8 Release Notes (Prerequisite flags, Code References, versioned metrics) — blog.growthbook.io. https://blog.growthbook.io/growthbook-version-2-8/
