
Trench

Self-hosted event tracking infrastructure built on ClickHouse and Kafka.

Event tracking at scale, honestly reviewed. No marketing fluff — just what you get when you replace your bloated events table with ClickHouse and Kafka.

TL;DR

  • What it is: Open-source (MIT) event tracking infrastructure built on ClickHouse and Kafka — a production-ready replacement for the events table in your relational database that stops scaling around 1M users [2].
  • Who it’s for: Engineering teams and technical founders who are feeling the pain of querying a giant events table in Postgres, and want a self-hostable backend layer they can query with SQL or REST — not a full-featured product analytics UI.
  • Cost savings: Segment costs $120+/month for 10K MTUs. PostHog’s paid tiers start at $0.000225/event. Trench Cloud is $0.00003/event after 1M free — and self-hosted is just your VPS cost [website].
  • Key strength: Genuinely production-tested — the Frigade team used Trench to cut their own Postgres costs 42% and eliminate lag spikes as they scaled to millions of users, then open-sourced it [2].
  • Key weakness: This is infrastructure, not a product. There’s no built-in analytics dashboard, no funnel builder, no session replays. You query raw data via SQL or REST and build the views yourself.

What is Trench

Trench is an event ingestion and querying layer. You send events to it via a REST API (compatible with the Segment Track/Identify/Group spec), it buffers them through Kafka, stores them in ClickHouse, and lets you query them via SQL or REST in real time. Everything ships as a single Docker image that bundles ClickHouse, Kafka, and a Node.js API server.
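
To make the ingestion flow concrete, here is a minimal TypeScript sketch that sends one Segment-style Track event to a local Trench instance. The endpoint path, auth header, and payload envelope are assumptions based on the description above; confirm them against the Trench API reference before using this.

    // Minimal sketch: send one Segment-style Track event to a local Trench
    // dev server. Endpoint path, auth scheme, and payload envelope are
    // assumptions; check the Trench API reference for the real shapes.
    const TRENCH_URL = "http://localhost:4000";  // dev server from the quickstart
    const PUBLIC_API_KEY = "pk-your-public-key"; // public key is send-only

    async function track(
      userId: string,
      event: string,
      properties: Record<string, unknown>,
    ): Promise<void> {
      const res = await fetch(`${TRENCH_URL}/events`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${PUBLIC_API_KEY}`,
        },
        body: JSON.stringify({
          events: [{ type: "track", userId, event, properties, timestamp: new Date().toISOString() }],
        }),
      });
      if (!res.ok) throw new Error(`Trench ingest failed: HTTP ${res.status}`);
    }

    await track("user-123", "Signed Up", { plan: "free" });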

The origin story is worth knowing because it explains exactly what problem Trench solves and what it doesn’t. Christian Mathiesen and Eric Brownrout built Frigade — a product onboarding platform — and hit the classic wall: their Postgres events table became their biggest table, their slowest query, and their most painful backup. They switched to ClickHouse and Kafka internally, cut infrastructure costs by 42%, eliminated autoscaling lag spikes, and then decided to open-source the resulting system as Trench [2].

That framing matters. Trench is not a Mixpanel replacement. It’s not a PostHog replacement. It’s the data pipeline that sits underneath a product analytics tool — the part that makes ingesting 10,000 events per second feasible without OLTP pain. What you build on top of it (dashboards, funnels, user timelines) is your problem. The demo the team published shows how to connect Trench to Grafana to build a basic Google Analytics clone — and that Grafana setup is entirely on you [README].

The project hit 1,000 GitHub stars in its first week. As of this review it sits at 1,621 stars, is MIT-licensed, and is backed by a YC-funded team [2].


Why people choose it

There’s not a large library of third-party reviews to synthesize yet — Trench is still relatively young and occupies a niche (infrastructure layer, not end-user analytics product). What the available sources tell us is that the appeal concentrates around three things.

The Postgres pain is real and well-documented. The problem Trench solves is not exotic. Stripe, Heroku, and countless other companies hit the same wall: events tables in relational databases grow without bound, slow down writes, and make range queries painful [2]. Engineers know they need a columnar store eventually; Trench is a pre-packaged path to ClickHouse that skips the six-week infrastructure project.

Segment-compatible API means low switching cost. If you’re already sending events in Segment’s format (Track, Identify, Group), you can point your SDK at Trench with a URL change. If you’re building a new product, the spec is well-known and has client libraries for every major language [README]. You don’t have to bet on a proprietary event schema.
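
For reference, this is roughly the shape of a Track call under Segment's public spec, expressed as a TypeScript type. Trench's README claims compatibility with this format, so an existing payload like this should need only an endpoint change; treat exact field support as something to verify.

    // Shape of a Segment-spec Track call, per Segment's public spec.
    // Trench advertises compatibility with Track/Identify/Group.
    interface TrackEvent {
      type: "track";
      event: string;                         // e.g. "Checkout Completed"
      userId?: string;                       // one of userId or anonymousId is required
      anonymousId?: string;
      properties?: Record<string, unknown>;  // free-form event attributes
      timestamp?: string;                    // ISO 8601; typically defaults to receive time
      context?: Record<string, unknown>;     // library, page, device metadata
    }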

42% cost reduction is a concrete number. Most self-hosted tools make vague claims about saving money. The Frigade team published a specific outcome from their own migration: 42% reduction in cost to serve on their primary Postgres cluster, and zero lag spikes after switching [2]. That’s a credible benchmark because it comes from the people who built it and have the most incentive to know if it’s real.

What Trench doesn’t offer is community-verified testimonials from external teams. It’s a younger project, and the third-party review corpus is thin. If you want war stories from five unrelated teams who shipped Trench to production, you’ll need to dig into their Slack community.


Features

Based on the README and website:

Ingest:

  • REST API compatible with Segment Track, Identify, and Group calls [README]
  • Handles thousands of events per second on a single node via Kafka buffering [README]
  • Public API key for sending events; private API key for querying [README]
  • Throttled, batched webhooks to forward events to other destinations [README]
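
A sketch of what consuming those forwarded webhooks might look like follows. The batched payload shape shown here (a JSON array of events) is a guess, not documented behavior; inspect a real delivery before building on it.

    // Hypothetical receiver for Trench's batched webhooks. The payload
    // shape (a JSON array of events) is an assumption, not documented
    // behavior: inspect a real delivery before relying on it.
    import { createServer } from "node:http";

    createServer((req, res) => {
      if (req.method !== "POST" || req.url !== "/trench-webhook") {
        res.writeHead(404).end();
        return;
      }
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", () => {
        const events = JSON.parse(body) as Array<{ event: string; userId?: string }>;
        for (const e of events) {
          console.log(`forwarding ${e.event} for ${e.userId}`); // fan out to your destination here
        }
        res.writeHead(200).end("ok"); // a 2xx tells the sender the batch was accepted
      });
    }).listen(8080);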

Query:

  • SQL access to raw event data via ClickHouse (see the query sketch after this list) [website]
  • REST API for querying with filtering by event type, user ID, timestamps [README]
  • Read-after-write guarantees — events are queryable in real time [2]
  • No predefined dashboard or UI: you bring your own visualization layer (Grafana, Metabase, a custom frontend) [README]
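
As an illustration of the SQL path, here is a minimal query sketch using the official @clickhouse/client package. The table and column names ("events", "event", "timestamp") are assumptions; check the schema Trench actually creates.

    // Minimal sketch: query Trench data with raw ClickHouse SQL via the
    // official @clickhouse/client package. Table and column names are
    // assumptions; check the schema Trench creates.
    import { createClient } from "@clickhouse/client";

    const client = createClient({ url: "http://localhost:8123" }); // default ClickHouse HTTP port

    const result = await client.query({
      query: `
        SELECT event, count() AS total
        FROM events
        WHERE timestamp >= now() - INTERVAL 7 DAY
        GROUP BY event
        ORDER BY total DESC
        LIMIT 10
      `,
      format: "JSONEachRow",
    });
    console.table(await result.json()); // top events over the last 7 days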

Compliance:

  • No cookies [README]
  • GDPR and PECR compliant: users can access, rectify, or delete their data [README]
  • Data is on your infrastructure, not routed through a third party [README]

Deployment:

  • Single production-ready Docker image (ClickHouse + Kafka + Node.js API) [README]
  • Docker Compose for both local dev and production deployment via docker-compose.yml [README]
  • Works with cloud-hosted ClickHouse or Kafka if you want to decouple those [2]
  • Admin dashboard, SQL editor, and webhook builder included in the Cloud offering (unclear if these ship in the self-hosted image) [website]

What’s not there:

  • No built-in funnel analysis, cohort builder, or session replays
  • No user-facing analytics UI out of the box
  • No A/B test statistical engine
  • No autocapture (you send events; it doesn’t instrument your frontend automatically)

Pricing: SaaS vs self-hosted math

Trench Cloud:

  • Free tier: 1M events per month, admin dashboard, SQL editor, webhook builder, 99.99% SLA, autoscaling [website]
  • Paid: $0.00003 per event (after the 1M free). That’s $30 per additional million events.
  • Enterprise: volume-based, dedicated infrastructure, dedicated support, SAML 2.0 SSO, on-premises deployment option [website]

To put the per-event price in context: at 10M events/month, you’d pay for 9M events at $0.00003 each — $270/month. At 100M events/month, you’re at roughly $2,970/month before enterprise negotiations.

Self-hosted (MIT):

  • License: $0
  • VPS: $10–30/month on Hetzner or DigitalOcean for a node with 4GB RAM / 4 CPU (the README minimum) [README]
  • Storage grows with event volume: plan for additional disk or object storage costs at high volumes
  • No usage limits on events

Comparison points:

Segment: The managed Segment pipeline starts at $120/month for 10K monthly tracked users (MTUs) and scales sharply. Segment is a routing layer — it doesn’t store events itself — so you pay Segment plus downstream storage. Trench eliminates the routing middleman and handles storage.

PostHog self-hosted: PostHog is the more direct comparison — it’s also open-source, also ClickHouse-based, and its core product analytics features are also MIT-licensed (some enterprise features are commercial). PostHog ships a dashboard, funnels, session recordings, and feature flags. It is substantially heavier than Trench and requires more infrastructure to operate at scale.

PostHog Cloud: Free up to 1M events/month, then $0.000225/event for product analytics — 7.5× the per-event price of Trench Cloud [calculated from published per-event prices]. If all you need is the data infrastructure without PostHog’s UI, Trench Cloud is cheaper.

Concrete self-hosted math: A team sending 50M events per month could run Trench on a $20/month VPS (with external ClickHouse on ClickHouse Cloud or equivalent), versus $1,470/month on Trench Cloud (49M paid events × $0.00003), or $11,025/month on PostHog Cloud. The infrastructure skill requirement is real, but the cost differential at volume is dramatic.
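
The arithmetic in this section is simple enough to verify directly. A quick sketch, assuming flat published per-event rates and 1M free events/month on both clouds (real PostHog pricing is tiered by volume, so treat its number as an upper bound):

    // Reproduces the per-event arithmetic in this section. Both clouds
    // include 1M free events/month; rates are the published figures above.
    const FREE_EVENTS = 1_000_000;

    function monthlyCost(events: number, perEvent: number): number {
      return Math.max(0, events - FREE_EVENTS) * perEvent;
    }

    console.log(monthlyCost(10_000_000, 0.00003));   // Trench Cloud, 10M events  -> $270
    console.log(monthlyCost(100_000_000, 0.00003));  // Trench Cloud, 100M events -> $2,970
    console.log(monthlyCost(50_000_000, 0.00003));   // Trench Cloud, 50M events  -> $1,470
    console.log(monthlyCost(50_000_000, 0.000225));  // PostHog Cloud, 50M events -> $11,025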


Deployment reality check

The Docker Compose quickstart is three commands: clone, copy the env file, run docker-compose [README]. The dev server starts ClickHouse and Kafka locally and binds to http://localhost:4000. Sending a first test event is a one-line curl command.

What you actually need:

  • Docker and Docker Compose installed
  • 4GB RAM, 4 CPU cores recommended for production [README]
  • A domain and reverse proxy for HTTPS (Caddy or nginx — not bundled)
  • Persistent volumes for ClickHouse data (events are durable; you don’t want to lose them on container restart)
  • Kafka is bundled in the default image, but at very high event volumes you may want to decouple it to a managed Kafka service

What can go sideways:

ClickHouse is not a lightweight dependency. The single Docker image convenience hides real resource requirements — a 1GB RAM VPS will not run this comfortably. The README recommendation of 4GB RAM / 4 CPU is honest for a production load; under-provision it and you’ll see performance problems that are hard to debug.

Kafka inside Docker works for modest event volumes. If you’re pushing tens of millions of events per day, you should be running Kafka on dedicated infrastructure or a managed service. The bundled setup is a convenience for getting started, not a recommendation for 100M events/month.

There’s no built-in monitoring or alerting for the Trench process itself. You’ll know if the API goes down, but silent failures in the Kafka→ClickHouse pipeline (backpressure, disk full, Kafka lag) require you to instrument your own observability. This is infrastructure work.
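
A minimal freshness probe is one cheap way to start, as sketched below: if the newest event in ClickHouse is older than some threshold, the pipeline has probably stalled. Table and column names are assumptions again, and the alerting hook is left to you.

    // Minimal freshness probe for the Kafka -> ClickHouse pipeline: if the
    // newest stored event is older than a threshold, ingestion has likely
    // stalled. Table and column names are assumptions.
    import { createClient } from "@clickhouse/client";

    const client = createClient({ url: "http://localhost:8123" });
    const MAX_LAG_SECONDS = 300;

    async function checkPipeline(): Promise<void> {
      const result = await client.query({
        query: `SELECT max(timestamp) AS latest FROM events`,
        format: "JSONEachRow",
      });
      const rows = (await result.json()) as Array<{ latest: string }>;
      const lagSeconds = (Date.now() - new Date(rows[0].latest).getTime()) / 1000;
      if (lagSeconds > MAX_LAG_SECONDS) {
        console.error(`Pipeline stalled: newest event is ${Math.round(lagSeconds)}s old`);
        // alerting hook goes here (PagerDuty, Slack webhook, etc.)
      }
    }

    setInterval(checkPipeline, 60_000); // probe once a minute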

The self-hosted image doesn’t appear to include the admin dashboard and SQL editor advertised on the pricing page for the Cloud offering. Those appear to be Cloud-tier features — the self-hosted path expects you to connect Grafana, Metabase, or your own query interface directly to ClickHouse. Confirm this before committing to the self-hosted path if you want the visual SQL editor.

Realistic time estimate for a technical founder: 2–4 hours to a working instance on a fresh VPS with HTTPS, including the reverse proxy setup. For a team that’s never touched Kafka or ClickHouse: budget a full day and expect to read ClickHouse documentation at least once.


Pros and Cons

Pros

  • Battle-tested at source. The Frigade team ran this in production for their own product at millions of users before open-sourcing it. It’s not a side project — it’s extracted infrastructure [2].
  • MIT license, no commercial gotchas. You can embed Trench in your SaaS, use it in a client project, modify it, or resell it without a commercial agreement. Genuinely permissive [README].
  • Segment-compatible API. Swap the endpoint URL; everything else stays the same. Existing Segment client libraries work as-is [README].
  • Real-time queries. ClickHouse’s columnar format means aggregation queries over millions of events complete in milliseconds. No waiting hours for materialized views [2].
  • Single Docker image. ClickHouse, Kafka, and the API layer bundled. You don’t manage three separate infrastructure components on day one [README].
  • GDPR / PECR compliant, no cookies. Straightforward story for EU founders [README].
  • 1M free events/month on Cloud. A real free tier that covers early-stage products without credit card anxiety [website].

Cons

  • No analytics UI included. Trench is plumbing, not a finished product. You build dashboards, funnels, and user timelines yourself. If you expected PostHog with a different brand, you’ll be disappointed.
  • Admin dashboard appears Cloud-only. The SQL editor and webhook builder advertised on the pricing page aren’t clearly documented for the self-hosted version [website].
  • Thin third-party review ecosystem. The project is young. There’s no G2 profile, no Trustpilot reviews, no independent third-party benchmarks to cross-reference. You’re betting on what the builders say about their own tool.
  • Kafka inside Docker is not a production architecture at scale. The convenience image is a starting point. High-volume production deployments will require decoupling Kafka and ClickHouse to dedicated infrastructure.
  • No autocapture. You instrument events manually. If your use case is “just drop a JS snippet and see what users click,” Trench is the wrong tool — look at PostHog or Plausible.
  • Heavy resource footprint. ClickHouse + Kafka require meaningful RAM and CPU. The minimum recommendation (4GB RAM / 4 CPU) is not a $5 VPS. Budget accordingly.
  • Limited query-side API. An API reference exists, but the query API is simpler than what you’d get from a purpose-built analytics product; complex analytical queries will go directly to ClickHouse SQL [README].

Who should use this / who shouldn’t

Use Trench if:

  • Your Postgres events table is already your biggest table and your aggregation queries are getting slower every month.
  • You want to self-host analytics infrastructure with a clean MIT license and no usage caps.
  • You’re comfortable connecting Grafana, Metabase, or a custom frontend to ClickHouse, and you want to own the query layer.
  • You’re building a product that needs to ingest events at high volume (thousands/second) and query them in real time.
  • You want Segment-compatible event routing without paying Segment’s MTU-based pricing.
  • You care about GDPR compliance and want event data on your own infrastructure.

Skip it (use PostHog self-hosted) if:

  • You want a full product analytics suite with session recordings, funnels, feature flags, and heatmaps — and you want the UI included.
  • Your team is non-technical and needs a polished dashboard out of the box.
  • You want autocapture with minimal SDK integration.

Skip it (use Plausible or Umami) if:

  • You just want website traffic analytics. Trench is not a web analytics tool — it’s an event infrastructure layer.

Skip it (stay on managed Segment + warehouse) if:

  • You’re a larger team that needs data routing, identity resolution, and a curated connector catalog, and you’d rather pay Segment than manage infrastructure.

Skip it (build on raw ClickHouse Cloud + Kafka) if:

  • Your team has Kafka and ClickHouse expertise and prefers to operate the components separately from day one. Trench adds a convenience layer; if you don’t need the convenience, you don’t need Trench.

Alternatives worth considering

  • PostHog — the most feature-complete open-source product analytics platform. Also MIT-licensed (core), also ClickHouse-based, ships a full dashboard, session recordings, feature flags, A/B tests. Significantly heavier infrastructure footprint. The right choice if you want the whole analytics product, not just the infrastructure.
  • Plausible Analytics — lightweight, cookieless website analytics. Not an event infrastructure layer; doesn’t handle custom event schemas. The right choice if you need simple pageview analytics with a clean UI.
  • Umami — similar to Plausible, web-centric, easy to self-host. Not suitable for high-volume custom event tracking.
  • Segment (managed) — the incumbent event routing platform. Expensive at scale, closed-source, but has the widest destination connector catalog and the most mature SDK ecosystem. Makes sense if you need to route events to 15 different downstream tools.
  • Mixpanel / Amplitude — fully managed product analytics. No self-hosting. Pay per MTU or event. Simpler for non-technical teams; expensive at scale; data not on your infrastructure.
  • Apache Kafka + ClickHouse (DIY) — what Trench is, minus the packaging. More control, more operational complexity, more time to set up. Reasonable if your team has the expertise.
  • Rudderstack — open-source Segment alternative with self-hosting. Covers the routing and transformation layer; you still need a data warehouse. More complex than Trench, more capable as a full CDP.

Bottom line

Trench is a narrow tool that does one thing well: give you a production-grade event ingestion and querying backend without building it yourself. It’s the thing you reach for when your Postgres events table hits the wall and you don’t want to spend six weeks architecting a ClickHouse + Kafka pipeline from scratch. The 42% cost reduction and lag spike elimination the Frigade team reported from their own migration are credible outcomes — this is extracted production infrastructure, not a side project [2].

The honest caveat is that Trench is infrastructure, not a product. There’s no funnel UI, no session replays, no autocapture. If you came expecting “self-hosted Mixpanel,” reset expectations before you start. If you came expecting “the plumbing layer for a custom analytics stack I’m going to build on top of,” Trench is a solid foundation. The MIT license, the Segment-compatible API, and the single Docker image all lower the barrier to a working setup. Just make sure you’re ready to connect Grafana yourself.


Sources

  1. Fondo, “Frigade Launches Trench: Open Source Analytics Infrastructure” (February 9, 2026). https://fondo.com/blog/frigade-launches-trench
  2. Trench GitHub README — Open-Source Analytics Infrastructure, Frigade team. https://github.com/frigadehq/trench
  3. Trench official website and pricing. https://trench.dev
