unsubbed.co

TimescaleDB

TimescaleDB extends PostgreSQL for time-series data as a self-hosted solution.

Time-series data storage, honestly reviewed. No marketing fluff — just what you get when you run a PostgreSQL extension at scale.

TL;DR

  • What it is: A PostgreSQL extension that turns vanilla Postgres into a high-performance time-series database — hypertables, automatic compression, continuous aggregates, and 200+ time-series SQL functions, all layered on top of standard Postgres [5].
  • Who it’s for: Engineering teams storing sensor data, metrics, events, or logs at volume who want Postgres ergonomics without Postgres’s scaling pain at billions of rows. Not a point-and-click tool — requires SQL comfort [1][3].
  • Cost savings: WaterBridge replaced a $12,000/month SQL Server setup with TimescaleDB, compressing 14TB to 700GB and handling 10,000 data points per second [4]. Self-hosted TimescaleDB runs on a VPS you already have or a ~$10/mo Hetzner box.
  • Key strength: 90%+ compression rates confirmed independently, not just in vendor benchmarks — real-world tests show 93–97% reduction [1][4]. Continuous aggregates automate rollup logic that you’d otherwise hand-write and maintain [1].
  • Key weakness: The license situation is genuinely confusing and has bitten people. The advanced features (compression, hyperfunctions, continuous aggregates) are under the Timescale License (TSL), not Apache 2.0 — and most third-party managed services only offer the Apache build, which strips out exactly the features you want [2]. Tiger Cloud (their own managed service) has the full TSL features but costs €39/month for 0.5 vCPU and 1GB RAM [2].

What is TimescaleDB

TimescaleDB is a PostgreSQL extension. You install it on top of an existing Postgres instance, run CREATE EXTENSION timescaledb, and suddenly your ordinary tables can be converted into hypertables — time-partitioned structures that Postgres handles under the hood but that you query with exactly the same SQL you already know [5].
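
The whole flow fits in a few statements. A minimal sketch — the `readings` table, its columns, and the sample row are illustrative, not from any of the cited sources:

```sql
-- Enable the extension on an existing Postgres database
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- An ordinary table holding sensor readings (illustrative schema)
CREATE TABLE readings (
  time      TIMESTAMPTZ NOT NULL,
  sensor_id INTEGER     NOT NULL,
  value     DOUBLE PRECISION
);

-- Convert it into a hypertable, partitioned by the time column
SELECT create_hypertable('readings', 'time');

-- From here on it's plain SQL — inserts land in the current chunk
INSERT INTO readings VALUES (now(), 1, 23.5);
```

Nothing else about the table changes from the application's point of view; the chunking is invisible at query time.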

The core pitch: time-series data has patterns that plain PostgreSQL handles badly at scale. Inserting billions of rows into a vanilla Postgres table degrades over time as indexes grow. Deleting old data with DELETE WHERE time < now() - interval '30 days' is catastrophically slow. TimescaleDB solves both by partitioning data into time-based chunks internally, so inserts stay fast (the write only touches the current chunk) and deletes become instant chunk drops rather than row-by-row operations [5].
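
The chunk-drop mechanism is exposed directly. A sketch, assuming the hypertable is named `readings`:

```sql
-- One-off: drop whole chunks older than 30 days (no row scanning)
SELECT drop_chunks('readings', older_than => INTERVAL '30 days');

-- Or schedule it as a background retention policy
SELECT add_retention_policy('readings', drop_after => INTERVAL '30 days');
```

Either way, expiry is a metadata operation on whole chunks rather than a `DELETE` that touches every row.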

The project is built by Timescale, a company that recently rebranded the managed cloud product to “Tiger Cloud” under “TigerData” branding. The underlying extension retains the TimescaleDB name in its GitHub repository (22,121 stars), and the core open-source codebase continues under active development.

What makes it different from specialized time-series databases like InfluxDB or QuestDB is the deliberate choice to stay in the Postgres ecosystem. You get standard SQL, all your existing Postgres tooling, the ability to JOIN time-series data against relational tables, and zero new query language to learn. Cloudflare’s engineering team chose it precisely for this reason: “Focusing on launching initial versions of products with just a few essential parts, maybe two or three components, gives us something to ship, test, and learn from quickly” — and TimescaleDB let them avoid introducing ClickHouse as a second specialized database [3].


Why people choose it

The consistent theme across real-world accounts is that TimescaleDB wins when you’re already on Postgres and can’t justify a second database system.

Against plain PostgreSQL. This is the strongest case. At 100 million rows, TimescaleDB delivers 20x higher insert rates than unmodified Postgres, with performance staying constant even at one billion rows (where vanilla Postgres degrades sharply). Query improvements range from 1.2x for simple lookups to over 14,000x for time-range aggregations. Delete performance is 2,000x faster due to chunk drops instead of row scans [5]. These aren’t vendor numbers pulled from a sales deck — they come from a public benchmark on Azure DS4 hardware with reproducible methodology [5].

Against ClickHouse. Cloudflare built Digital Experience Monitoring on TimescaleDB specifically to avoid ClickHouse, despite ClickHouse being their standard analytical database. Their reasoning: ClickHouse is fast but introduces operational complexity — another box in the system diagram, another query language, another failure domain. TimescaleDB let the three-person DEX team ship without needing to learn OLAP-specific tooling or maintain a multi-system pipeline [3]. The trade-off they accepted: TimescaleDB won’t match ClickHouse throughput at extreme analytical scale, but for their product limits and customer volume, it didn’t need to.

Against dedicated time-series databases (InfluxDB, QuestDB). The appeal is avoiding ecosystem lock-in. InfluxDB uses its own query languages (InfluxQL, plus Flux in v2), stores data in a proprietary format, and requires managing a separate system. WaterBridge’s three-person engineering team chose TimescaleDB because it meant communicating with Postgres and nothing else: “The appeal of Timescale was that we only needed to communicate with PostgreSQL” [4]. Their previous SQL Server setup cost $12,000/month for a 4TB tier that was running out of space; TimescaleDB compressed 14TB to 700GB and ultimately held 72TB of raw data in 3.9TB [4].

The independent compression validation. The wkrp.xyz author [1] went in skeptical of Timescale’s 80–95% compression claims and came out a believer: the xp_drops table went from 495MB to 15MB (97% reduction), the locations table from 3,714MB to 246MB (93% reduction). These aren’t enterprise benchmarks — this is a solo developer running a hobby project tracking RuneScape game events.


Features

Core hypertable engine:

  • Automatic time-based partitioning into chunks — queries only scan relevant time ranges, not the full table [5]
  • Columnstore (Hypercore) storage: row format for writes, columnar for analytics — same table, automatically managed [README]
  • time_bucket() and 200+ time-series SQL functions for bucketing, gap-filling, interpolation [README]
  • Standard Postgres JOINs work across hypertables and regular tables
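
A sketch of what the time_bucket-plus-JOIN combination looks like in practice — the `readings` hypertable and `sensors` table are illustrative names, not from the sources:

```sql
-- Hourly averages per sensor, joined against a regular relational table
SELECT
  time_bucket('1 hour', r.time) AS bucket,
  s.name,
  avg(r.value)                  AS avg_value
FROM readings r
JOIN sensors  s ON s.id = r.sensor_id
WHERE r.time > now() - INTERVAL '7 days'
GROUP BY bucket, s.name
ORDER BY bucket;
```

This is the kind of query that forces a second system (or an ETL step) in a dedicated time-series database; here it is one statement.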

Compression:

  • Columnstore compression with 80–97% reduction on real workloads [1][4]
  • After compression, updates and deletes still work (this was a limitation in earlier versions, now resolved) [1]
  • Data retention policies automatically drop old chunks — instant, no row scanning [1][5]
  • Direct insert to columnstore for high-throughput analytical ingestion [README]
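
Enabling compression is a table setting plus a policy. A sketch under the same illustrative `readings` schema; the segment/order settings shown are typical choices, not universal defaults:

```sql
-- Mark the hypertable for columnstore compression
ALTER TABLE readings SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'sensor_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Compress chunks once they are older than 7 days
SELECT add_compression_policy('readings', compress_after => INTERVAL '7 days');
```

`compress_segmentby` should generally be the column you filter on most; it determines how compressed batches are grouped and therefore how much a compressed query can skip.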

Continuous aggregates:

  • Materialized views that refresh automatically on a configurable policy [1]
  • Queries against the aggregate automatically include fresh data from the base hypertable not yet rolled up [1]
  • Enables aggressive data retention (drop raw data after 30 days, keep hourly rollups forever) [1]
  • Significant pain point: altering the schema of a table with an active continuous aggregate requires disabling compression and refresh policies, exporting data, modifying, and reimporting [1]
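
The rollup pattern described above reduces to one view definition and one policy. A sketch, again assuming the illustrative `readings` hypertable; the offsets are example values:

```sql
-- Hourly rollup maintained automatically from the raw hypertable
CREATE MATERIALIZED VIEW readings_hourly
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 hour', time) AS bucket,
  sensor_id,
  avg(value) AS avg_value
FROM readings
GROUP BY bucket, sensor_id;

-- Materialize buckets between 2 days and 1 hour old, every hour;
-- the most recent hour is served by real-time aggregation
SELECT add_continuous_aggregate_policy('readings_hourly',
  start_offset      => INTERVAL '2 days',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```

This is the hand-written cron-plus-materialized-view logic [1] collapsed into two statements.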

Tiered storage and lakehouse:

  • Hot data on SSD, colder data on object storage (S3-compatible) [website]
  • Kafka and S3 ingestion, Iceberg replication [website]

What requires TSL vs Apache license:

  • Compression, hyperfunctions, continuous aggregates, tiered storage — TSL only, not Apache 2.0 [2]
  • This is a hard wall: third-party managed Postgres providers running TimescaleDB typically ship the Apache build, giving you the hypertable partitioning but none of the features that make TimescaleDB worth using for time-series [2]

Pricing: SaaS vs self-hosted math

This section deserves more honesty than the average tool review provides.

Self-hosted (TSL community edition):

  • Software: $0
  • You can use compression, continuous aggregates, and hyperfunctions — the full TSL feature set — on your own hardware without a commercial agreement [1]
  • VPS cost: $10–20/mo on Hetzner or Contabo for a node with enough RAM for real workloads (8GB RAM recommended per the README)

Tiger Cloud (managed, TSL features):

  • Entry tier: 0.5 vCPU, 1GB RAM — €39/month [2]
  • This is steep for what you get. A €39/month Hetzner VPS gives you 2 vCPU and 8GB RAM running self-hosted TimescaleDB with full features
  • The managed pricing becomes defensible only if you need managed backups, HA, and don’t have ops capacity

Third-party managed PostgreSQL (the trap):

  • Providers like Supabase, Neon, Railway, Render all offer Postgres — but their TimescaleDB extension, if available, ships the Apache build
  • Apache build = no compression, no continuous aggregates, no hyperfunctions [2]
  • One developer summed this up on r/Database: “What is the point of having timescale for timeseries without compression? Timeseries data is typically high volume.” [2]
  • If you’re planning to run TimescaleDB on a managed Postgres provider that isn’t Tiger Cloud, verify explicitly which license build they ship before committing
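
Verification takes one query. TimescaleDB exposes the build through a server setting (values are reported as timescale for the TSL build and apache for the Apache build):

```sql
-- Which build is this server running?
SHOW timescaledb.license;

-- Extension version, for good measure
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
```

Run this against the provider's instance before designing a schema around compression or continuous aggregates.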

Concrete WaterBridge math [4]:

  • Before: SQL Server at $12,000/month, approaching 4TB limit, performance degrading
  • After: TimescaleDB self-hosted, 72TB of raw data in 3.9TB, zero-delay monitoring for 1,200+ miles of pipeline
  • Three-person team, no custom drivers or APIs needed

Deployment reality check

TimescaleDB is a Postgres extension, which means deployment complexity largely mirrors your existing Postgres deployment, plus the extension.

Straightforward path:

# Docker (development)
docker run -d --name timescaledb \
  -p 6543:5432 \
  -e POSTGRES_PASSWORD=password \
  timescale/timescaledb-ha:pg18

For production, it’s the same as deploying Postgres plus CREATE EXTENSION timescaledb and running timescaledb-tune to optimize settings for your hardware. If you already operate Postgres in production, adding TimescaleDB is a light lift.

What requires care:

  • Schema alterations on compressed hypertables with active continuous aggregates require a multi-step process: disable compression, disable the refresh policy, make the change, re-enable. This isn’t dangerous but it’s tedious and the documentation for it is not obvious [1]
  • The RAM recommendation is real — columnstore compression and continuous aggregate refreshes are memory-intensive operations. 1GB RAM (Tiger Cloud entry tier) is insufficient for any meaningful workload [README][2]
  • The port 6543 default in development Docker images exists specifically to avoid conflicts with local Postgres on 5432 — a small thing, but catch it before it confuses your connection strings [README]
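
The schema-alteration dance from the first bullet looks roughly like this — a sketch under the illustrative `readings` hypertable, with function names per the TimescaleDB API docs; the exact sequence varies by version and by which policies are active:

```sql
-- 1. Pause the hypertable's background jobs (compression + refresh)
SELECT alter_job(job_id, scheduled => false)
FROM timescaledb_information.jobs
WHERE hypertable_name = 'readings';

-- 2. Decompress compressed chunks so the table can be altered
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('readings') c;

-- 3. Make the schema change
ALTER TABLE readings ADD COLUMN quality SMALLINT;

-- 4. Re-enable the jobs
SELECT alter_job(job_id, scheduled => true)
FROM timescaledb_information.jobs
WHERE hypertable_name = 'readings';
```

Tedious rather than dangerous, as [1] notes — but budget for the decompress step, which rewrites data and takes time proportional to compressed volume.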

What to watch for:

  • Verify that your target deployment environment offers TSL features before building your schema around compression or continuous aggregates
  • Continuous aggregate refresh policies can lag under heavy write load — monitor the refresh job logs
  • Cloudflare’s team notes that TimescaleDB simplicity has a ceiling: if your query patterns grow toward full OLAP — wide scans, complex aggregations across petabytes — ClickHouse will eventually outrun it [3]
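
For the refresh-lag monitoring mentioned above, TimescaleDB ships informational views that make it a query rather than log-grepping (column names per the timescaledb_information views documentation):

```sql
-- Background job health, including continuous-aggregate refreshes
SELECT j.job_id, j.proc_name, s.last_run_status,
       s.last_run_started_at, s.total_failures
FROM timescaledb_information.jobs      j
JOIN timescaledb_information.job_stats s USING (job_id)
ORDER BY s.last_run_started_at DESC;
```

A rising `total_failures` or stale `last_run_started_at` on a refresh job is the early warning that write load is outpacing the policy.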

Pros and Cons

Pros

  • Compression that actually works. 90%+ reduction confirmed by independent real-world tests, not just vendor benchmarks [1][4]. For high-volume time-series, this is the difference between a $12K/month database bill and a $20/mo VPS [4].
  • It’s still Postgres. All your tools, ORMs, query builders, and dashboards work without modification. No new query language, no schema migration to an exotic format [3][5].
  • Continuous aggregates remove boilerplate. The automatic rollup from raw events to hourly/daily summaries, including real-time data not yet in the view, would otherwise require you to write and maintain a custom cron job + materialized view refresh logic [1].
  • Performance numbers are real. 20x insert improvement, 2,000x delete improvement, and 1.2x–14,000x query improvement over plain Postgres at scale aren’t marketing claims — they’re benchmarked with reproducible methodology [5].
  • Cloudflare uses it in production. Not a startup testimonial — this is Cloudflare’s Zero Trust product suite handling enterprise-scale telemetry, chosen specifically over ClickHouse for operational simplicity [3].
  • Free to self-host with full features. Unlike some “open core” tools where the useful features are SaaS-only, the TSL community edition self-hosted gives you compression, continuous aggregates, and hyperfunctions at $0 license cost [1].

Cons

  • License complexity will bite you. TSL vs Apache 2.0 is not academic — it determines whether you get the features that make TimescaleDB worth using. Most third-party managed Postgres offerings ship the Apache build [2]. This should be your first question before adopting.
  • Managed cloud is overpriced for what you get. €39/month for 0.5 vCPU and 1GB RAM, with full features, is hard to justify when self-hosting on a €14/month Hetzner node gives you 2 vCPU and 4GB [2]. The pricing punishes people who want managed convenience.
  • Schema alterations under compression are painful. Changing a column type or adding a NOT NULL column on a compressed hypertable with an active continuous aggregate is a multi-step operation that requires temporarily disabling safety policies [1]. Not a dealbreaker, but not what you want to discover mid-migration.
  • Not a no-code tool. This is a database extension. You need SQL fluency, Postgres operational experience, and comfort with Docker or native installation. Non-technical founders won’t self-deploy this without engineering help.
  • Company rebranding adds confusion. The product is TimescaleDB, the company is now TigerData, the cloud product is Tiger Cloud. Documentation URLs split between docs.timescale.com and docs.tigerdata.com. Not a product flaw, but disorienting when you’re searching for answers.
  • Not a replacement for ClickHouse at extreme analytical scale. If you’re doing OLAP across petabytes with millisecond SLAs, you’ll eventually outgrow TimescaleDB [3]. It’s designed for the operational analytics range, not the extreme end.

Who should use this / who shouldn’t

Use TimescaleDB if:

  • You’re already on Postgres and want to store time-series data (metrics, events, sensor readings, logs) without introducing a second database system.
  • You’re paying for a dedicated time-series SaaS (InfluxDB Cloud, Datadog’s storage tier, etc.) and want to own the data layer on infrastructure you already run.
  • Your team has Postgres operational experience and wants the compression and continuous aggregate features that would otherwise require a specialist database.
  • You’re building IoT, monitoring, or financial tick data infrastructure and need proven performance at billions of rows [4][5].

Skip it (stay on plain Postgres) if:

  • Your time-series tables are under 10 million rows and growing slowly — vanilla Postgres handles this fine, and the added extension complexity isn’t worth it.
  • Your team has no Postgres ops experience and the deployment learning curve would consume more time than the storage savings justify.

Skip it (use ClickHouse) if:

  • You’re building a dedicated analytics pipeline with complex aggregations across hundreds of millions of rows and strict query latency requirements.
  • Your team already operates ClickHouse and the ecosystem integration value outweighs the Postgres compatibility benefit.

Skip it (use InfluxDB or QuestDB) if:

  • Your team is greenfield and Postgres compatibility is not a requirement — InfluxDB and QuestDB have more purpose-built time-series ergonomics without the extension layer complexity.

Skip managed TimescaleDB (use self-hosted) if:

  • You’re evaluating managed Postgres providers and expecting TimescaleDB features — verify which license build they ship first [2].

Alternatives worth considering

  • Plain PostgreSQL with table partitioning — if you’re under 100M rows and your workload isn’t insert-heavy, native Postgres range partitioning covers the basics without adding an extension dependency.
  • ClickHouse — the go-to for pure analytical workloads. Faster than TimescaleDB for complex OLAP, but a separate system with its own query language and operational surface area [3].
  • InfluxDB — purpose-built time-series database, no Postgres dependency. Good if you don’t need relational JOINs and want a cleaner time-series API. Open-source v2 is MIT; v3 is mostly proprietary.
  • QuestDB — high-performance time-series, Apache 2.0, Postgres wire protocol compatible. Worth comparing if you’re greenfield and don’t have a Postgres dependency.
  • VictoriaMetrics — excellent for Prometheus-compatible metrics storage. Not a general time-series database, but significantly more cost-efficient than managed Prometheus at scale.
  • Prometheus + Grafana — the standard monitoring stack. Not a general-purpose time-series store but handles the operational monitoring use case without TimescaleDB’s complexity.

For teams already on Postgres, the realistic shortlist is TimescaleDB vs plain Postgres with partitioning. TimescaleDB wins decisively once you’re past 50–100M rows or need compression. For teams starting fresh, compare QuestDB or InfluxDB OSS before committing to the Postgres extension model.


Bottom line

TimescaleDB’s value proposition is real but narrow: if you’re on Postgres and your time-series data is growing past what vanilla Postgres handles well, it’s the lowest-friction path to 90%+ compression, fast deletes, and time-bucketed aggregations. Cloudflare’s endorsement [3] and WaterBridge’s storage math [4] aren’t outliers — the compression and performance numbers hold up under independent scrutiny [1][5]. The catch is the license situation, which is not obvious and has burned at least one developer who built against self-hosted TSL features only to find the managed path strips them out [2]. If you’re self-hosting, you get the full feature set for free. If you’re not self-hosting, verify what build your provider ships before you build your schema around compression. Tiger Cloud has the features but prices like a premium SaaS, not an open-source tool.

For a non-technical founder, this isn’t a self-service product — it needs an engineer to deploy and operate. But if you’re paying for managed time-series storage or watching your Postgres performance degrade under event data, it’s worth the setup. If deploying and maintaining a database extension is the blocker, upready.dev handles that one-time setup for clients.


Sources

  1. wkrp.xyz: “A Small Time Review of TimescaleDB”. https://www.wkrp.xyz/a-small-time-review-of-timescaledb/
  2. r/Database (Reddit): “Disappointed in TimescaleDB”. https://www.reddit.com/r/Database/comments/1r3nw0v/disappointed_in_timescaledb/
  3. Cloudflare Blog: “How TimescaleDB helped us scale analytics and reporting” (Robert Cepa, Jul 8 2025). https://blog.cloudflare.com/timescaledb-art/
  4. DEV Community / TigerData: “From 14TB to 700GB: Optimizing Database Storage for Real-Time Monitoring”. https://dev.to/tigerdata/from-14tb-to-700gb-optimizing-database-storage-for-real-time-monitoring-2l0p
  5. Medium / Timescale: “TimescaleDB vs. Postgres for time-series: 20x higher inserts, 2000x faster deletes, 1.2x-14,000x faster queries” (Rob Kiefer, Aug 11 2017). https://medium.com/timescale/timescaledb-vs-6a696248104e
