Dragonfly
In-memory data store, honestly reviewed. No marketing fluff, just what you get when you swap Redis for it.
TL;DR
- What it is: An in-memory data store built as a drop-in replacement for Redis and Memcached, using a multi-threaded architecture that scales vertically on modern hardware [1][2].
- Who it’s for: Engineering teams running Redis at scale who are hitting throughput ceilings or paying large cloud bills for ElastiCache or Redis Cloud. Not a tool for non-technical founders; this is infrastructure, not end-user software.
- Cost savings: Dragonfly’s managed cloud claims up to 80% lower infrastructure cost than Redis equivalents. A 1TB cache at 1M RPS runs $6,800/mo on Dragonfly Cloud vs $17,082/mo for Redis and $19,269/mo for Valkey [website].
- Key strength: Vertical scalability. Redis is single-threaded and hits a hard CPU wall. Dragonfly uses all available cores, delivering up to 25–30x the throughput of Redis on larger instances, where a single-threaded design simply cannot keep up [1][2][3].
- Key weakness: The license is not open source — Dragonfly uses the Business Source License (BSL 1.1), which restricts certain commercial uses. Memory-constrained small workloads don’t benefit much; the advantages compound at scale [3].
What is Dragonfly
Dragonfly is an in-memory data store designed to replace Redis and Memcached without requiring code changes. You point your existing Redis client libraries at Dragonfly, change the endpoint, and the application doesn’t notice. What’s different is what happens under the hood.
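A minimal sketch of what the swap looks like with the redis-py client; the hostnames are placeholders for your own endpoints:

```python
import redis

# Before the swap, the app did something like:
#   r = redis.Redis(host="redis.internal", port=6379)
# After, only the endpoint changes (hostnames are placeholders):
r = redis.Redis(host="dragonfly.internal", port=6379)

r.set("session:42", "alice", ex=3600)  # SET with a one-hour TTL
print(r.get("session:42"))             # b'alice'
```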
Redis is single-threaded. That architectural decision made sense in 2009, when multi-core servers were rare and the simplicity was worth it. Today it means Redis tops out at roughly 200–400K operations per second on most cloud instances, regardless of how many CPU cores the machine has; every core beyond the first contributes nothing to cache throughput [1][2].
Dragonfly was founded in 2022 by former Google engineers to fix that specific problem [2]. The architecture divides the dataset into independent shards, each managed by its own thread. Requests are processed in parallel across all shards. For multi-key operations that span shards, Dragonfly splits the command into subcommands, runs them concurrently, and uses a Very Lightweight Locking algorithm to maintain atomicity [1]. The result: on a large enough instance, Dragonfly can process operations on all available cores simultaneously.
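A toy model of the shared-nothing idea, purely for intuition. This is not Dragonfly's implementation (Dragonfly is C++, and its VLL coordination is far more involved); it just shows keys hashing to thread-owned shards and a multi-key read fanning out across them:

```python
import queue
import threading
import zlib

# Toy model of the shared-nothing shard design -- for intuition only,
# not Dragonfly's implementation. Each shard owns a private dict and a
# queue; only that shard's thread ever touches its data, so no locks
# are needed for single-key operations.
NUM_SHARDS = 4

class Shard:
    def __init__(self):
        self.data = {}
        self.inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op, key, value, reply = self.inbox.get()
            if op == "set":
                self.data[key] = value
                reply.put(True)
            else:  # "get"
                reply.put(self.data.get(key))

shards = [Shard() for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> Shard:
    return shards[zlib.crc32(key.encode()) % NUM_SHARDS]

def set_key(key, value):
    reply = queue.Queue()
    shard_for(key).inbox.put(("set", key, value, reply))
    reply.get()  # wait for the owning shard to acknowledge

def mget(keys):
    # A multi-key command fans out to the owning shards and runs in
    # parallel; Dragonfly layers VLL on top to keep this atomic.
    replies = []
    for key in keys:
        reply = queue.Queue()
        shard_for(key).inbox.put(("get", key, None, reply))
        replies.append(reply)
    return [reply.get() for reply in replies]

for i in range(8):
    set_key(f"k{i}", i)
print(mget(["k0", "k3", "k7"]))  # [0, 3, 7]
```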
The project sits at 30,185 GitHub stars and is fully compatible with the Redis, Valkey, and Memcached APIs, including RedisJSON implemented natively, a built-in search engine (positioned as a RediSearch replacement), Bloom filters, and native OpenTelemetry support [website].
The catch worth knowing upfront: Dragonfly’s license is the Business Source License (BSL 1.1). This means the source code is publicly readable, but commercial production use above certain thresholds may require a commercial agreement. It is not MIT, not Apache-2.0, not truly open source. It’s “source available,” the same licensing model that MariaDB, CockroachDB, and HashiCorp used. This matters if you’re self-hosting at scale commercially.
Why people choose it over Redis, Valkey, and KeyDB
The case for Dragonfly gets stronger the larger your Redis deployment is. At small scale — say, a single Redis instance handling a few thousand requests per second — you won’t see meaningful benefits. At large scale, the math becomes unavoidable.
The throughput story. Benchmark testing on a 10-million-key dataset showed Dragonfly delivering up to 30x higher throughput than Redis while maintaining comparable P99 latency — meaning the slowest 1% of requests stayed roughly on par with Redis [1]. The repoflow.io benchmark [3], which compared DragonflyDB v1.0.0 and v1.37.0 against Redis, Valkey, and KeyDB, found Dragonfly leads in small write operations and holds competitive performance in mixed read/write scenarios. Dragonfly’s own benchmarks on AWS c6gn.16xlarge show 3.8M QPS vs Redis’s ~190K QPS on the same hardware [README]. Take vendor benchmarks with appropriate skepticism, but the third-party numbers corroborate the direction if not the exact magnitude.
The vertical scaling story. Redis’s throughput doesn’t improve when you move from a 4-core to a 16-core instance — you’re paying for 16 cores, using one for cache. Dragonfly’s throughput scales with core count [2]. This means at a certain point, one large Dragonfly instance replaces what would otherwise require a Redis Cluster with multiple nodes, multiple coordinators, and the operational overhead that comes with distributed coordination.
The memory efficiency story. The repoflow.io benchmark [3] explicitly calls Dragonfly out as “clearly the most efficient across memory usage tests, consuming notably less memory than competitors while handling identical data loads.” This is meaningful in cloud environments where memory is often the binding cost constraint.
The snapshotting story. Redis’s RDB snapshotting forks the process, which causes a memory spike that can double RAM usage momentarily. Dragonfly uses a different snapshot approach that avoids this spike, making it more predictable in memory-constrained environments [2].
Against Valkey specifically. Valkey is the Linux Foundation fork of Redis created after Redis changed its license in 2024. It’s genuinely open source (BSD-3), and it’s the most direct apples-to-apples competitor. The repoflow.io benchmark [3] found Valkey leads Dragonfly in batched request throughput — Dragonfly’s pipeline performance lags behind. If your workload is dominated by pipelined batch operations, Valkey is worth benchmarking first. If it’s random access at high concurrency, Dragonfly likely wins.
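For concreteness, "pipelined batch operations" means the pattern below, where the client queues many commands and ships them in one round trip; this is the workload shape where [3] found Valkey ahead:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Pipelining: queue many commands client-side, ship them in one round
# trip. This is the access pattern where [3] found Valkey ahead of
# Dragonfly.
pipe = r.pipeline(transaction=False)
for i in range(10_000):
    pipe.set(f"key:{i}", i)
pipe.execute()
```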
Against KeyDB. KeyDB is an older multi-threaded Redis fork (acquired by Snap, which open-sourced its formerly paid features). It takes a different threading approach: it adds multi-threading to Redis’s existing architecture rather than redesigning from scratch. Dragonfly’s rewrite approach produces better memory efficiency at the cost of being a less conservative compatibility target [3].
Features
Core compatibility:
- Full Redis API compatibility — all existing Redis client libraries work unchanged [website][2]
- Valkey and Memcached API compatibility [website]
- Redis CLI works against Dragonfly instances [website]
- RedisJSON support implemented natively [website]
- Built-in search engine (described as more performant than RediSearch) [website]
- Bloom filters and probabilistic data structures [website]
Performance architecture:
- Multi-threaded with shared-nothing shard design [1]
- io_uring for async disk operations [1]
- Very Lightweight Locking for cross-shard atomicity [1]
- Snapshot process that avoids Redis’s memory-doubling fork behavior [2]
- Sub-millisecond latency benchmarks at 3.8M+ QPS on large AWS instances [README]
Observability:
- Native OpenTelemetry support built in [website]
- Admin port with HTTP interface [README config flags]
Configuration (from README):
Standard flags: port, bind, requirepass, maxmemory, dir, dbfilename, memcached_port, cluster_mode, snapshot_cron, hz. Configuration behaves like Redis’s and will be familiar to anyone who has run Redis in production; a connection sketch against a few of these flags follows.
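A sketch assuming hypothetical flag values (port 6380, requirepass s3cret, memcached_port 11211); redis-py talks to the RESP port and pymemcache to the Memcached port of the same process:

```python
import redis
from pymemcache.client.base import Client

# Assumes Dragonfly was started with hypothetical flag values:
#   dragonfly --port 6380 --requirepass s3cret --memcached_port 11211

# Redis protocol on the configured port, authenticated via requirepass.
r = redis.Redis(host="localhost", port=6380, password="s3cret")
r.set("greeting", "hello")
print(r.get("greeting"))  # b'hello'

# The same process also speaks the Memcached protocol on memcached_port.
mc = Client(("localhost", 11211))
mc.set("mc-greeting", "hello")
print(mc.get("mc-greeting"))  # b'hello'
```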
Dragonfly Cloud (managed SaaS):
- Sub-millisecond latency at scale [website]
- ML feature stores and vector search support [website]
- Dedicated infrastructure, not shared tenancy [website]
- 24x7 managed operations [website]
- Bring-your-own-cloud (BYOC), autoscaling, multi-region backups on Enterprise tier [website]
- $100 free credit on signup [website]
Pricing: SaaS vs self-hosted math
Self-hosted Community Edition: The software itself is free to run. BSL 1.1 allows self-hosting for most internal use cases without a commercial license. Consult the license text if you’re building a product that resells Dragonfly capacity to others.
Infrastructure cost to self-host a meaningful Dragonfly instance (one where the multi-threading pays off):
- Minimum useful: 4-core VPS with 8–16GB RAM — roughly $20–$40/mo on Hetzner or Contabo
- Production-grade: 8–16 core dedicated instance with fast NVMe — $100–$300/mo depending on provider and region
- Large enterprise: $500+/mo for high-memory instances where Dragonfly’s advantage fully materializes
Dragonfly Cloud (their SaaS):
- Free trial: $100 credit [website]
- Pricing is instance-based. The website example shows 1TB cache at 1M RPS peak at $6,800/mo [website pricing calculator]
- Custom quotes for Enterprise (BYOC, autoscaling, multi-region)
Redis Cloud / ElastiCache for comparison [website]:
- Same 1TB cache / 1M RPS workload: $17,082/mo for Redis, $19,269/mo for Valkey
- AWS ElastiCache cache.r7g.xlarge (26GB RAM): ~$450–$600/mo depending on region and reserved capacity pricing
Concrete savings scenario:
A startup running a Redis Cluster on AWS ElastiCache — three r7g.large nodes (13GB each) to get ~40GB usable cache — pays roughly $800–$1,100/mo. Dragonfly’s claim is that equivalent throughput fits in fewer, larger nodes due to vertical scaling. If a single Dragonfly Cloud instance replaces the cluster, you’re looking at potentially halving that bill. The website’s 80% reduction claim is for enterprise-scale workloads with very large instances, not $800/mo setups.
For self-hosters: a single $80/mo dedicated 8-core Hetzner server with 32GB RAM running Dragonfly will meaningfully outperform a three-node Redis Cluster costing $400–$600/mo on managed cloud. The trade-off is ops burden — you manage uptime, backups, and failover yourself.
Deployment reality check
Dragonfly is infrastructure-grade software. The install path is Docker or a native binary, both of which are straightforward. The GitHub quick start is a single docker run command.
What you actually need for self-hosting:
- A Linux server with multiple CPU cores (the whole value proposition disappears on a single-core VPS)
- Docker or a Linux environment capable of running the binary
- Sufficient RAM for your working dataset — Dragonfly is more memory-efficient than Redis, but in-memory means the data has to fit
- A reverse proxy if you’re exposing it externally (don’t expose Redis-protocol ports to the internet)
- A backup strategy: Dragonfly supports snapshotting, but you manage retention yourself (a minimal pruning sketch follows this list)
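The retention piece can be as small as a cron-driven script. A minimal sketch, assuming snapshots land in the directory you passed as dir; the file pattern is an assumption, so check what your Dragonfly version actually writes:

```python
import pathlib
import time

SNAPSHOT_DIR = pathlib.Path("/var/lib/dragonfly")  # whatever you passed as dir
RETENTION_DAYS = 7

cutoff = time.time() - RETENTION_DAYS * 86_400
# The glob pattern is an assumption -- check what your Dragonfly version
# writes (RDB-compatible dumps or Dragonfly's own snapshot format).
for snap in SNAPSHOT_DIR.glob("*.rdb"):
    if snap.stat().st_mtime < cutoff:
        snap.unlink()
        print(f"pruned {snap.name}")
```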
What can go sideways:
The repoflow.io benchmark [3] flagged an interesting issue: Dragonfly v1.0.0 showed an unusual memory spike during Pub/Sub fanout testing. The newer v1.37.0 addressed this, but it’s a reminder that benchmark behavior can differ from production behavior, and Pub/Sub workloads specifically may behave differently than pure cache workloads.
Cluster mode in Dragonfly is still maturing. If your Redis deployment relies on Redis Cluster features heavily, test compatibility before migrating. The single-instance vertical scaling is the primary story; cluster mode is secondary.
The license question matters at scale. If you’re a startup self-hosting for internal caching, BSL 1.1 is unlikely to be a problem. If you’re building a product that packages Dragonfly and sells database capacity to customers, read the license or talk to a lawyer.
Realistic migration time estimate: 2–4 hours for a competent backend engineer doing a read-through cache swap. 1–2 days if you need to validate behavior under production traffic patterns, set up monitoring, and adjust your backup/snapshot schedule. The API compatibility claim is genuine — there’s no client code to change.
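In code terms, the read-through swap means the cache helper stays identical and only the endpoint moves. A sketch, with load_user_from_db standing in for your real database accessor:

```python
import json
import redis

cache = redis.Redis(host="dragonfly.internal", port=6379)  # was your Redis host

def load_user_from_db(user_id: int) -> dict:
    # Stand-in for your real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    """Read-through: serve from cache, fall back to the database on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)
    cache.set(key, json.dumps(user), ex=300)  # 5-minute TTL
    return user
```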
Pros and Cons
Pros
- Genuine throughput gains at scale. The multi-threaded architecture isn’t marketing — third-party benchmarks [1][2][3] consistently show Dragonfly outperforming Redis on multi-core instances, with the gap widening as instance size grows.
- Best memory efficiency in benchmarks. RepoFlow’s [3] cross-tool comparison found Dragonfly using less memory than Redis, Valkey, and KeyDB for identical workloads. On cloud, memory is expensive.
- True drop-in replacement. The Redis API compatibility claim holds in practice. Same client libraries, same CLI, same configuration flags. Migration is a connection string change, not a codebase refactor [2][website].
- Better snapshotting. Avoids Redis’s process-fork memory spike during RDB snapshots, making behavior more predictable in memory-constrained environments [2].
- 30,185 GitHub stars — a community large enough that you’ll find answers to common questions without opening a support ticket.
- Vertical scaling eliminates cluster complexity. One big Dragonfly instance often replaces a Redis Cluster, removing coordinator nodes, replication lag, and cross-slot operation headaches.
- Native OpenTelemetry for observability without a separate exporter [website].
- Backed by a real company with enterprise support contracts available [website].
Cons
- Not open source. BSL 1.1 is “source available,” not MIT or Apache. Redis itself moved to source-available licenses (RSALv2/SSPLv1) in 2024; Dragonfly started with BSL from day one. Read the license if you plan commercial redistribution [website].
- P95 latency slightly elevated in some benchmarks. The repoflow.io benchmark [3] noted slightly higher P95 latency compared to Redis in standard single-key operations. Throughput wins; tail latency is a wash or slightly worse.
- Pub/Sub behavior inconsistency. Older versions showed memory spikes in Pub/Sub fanout [3]. If Pub/Sub is a core use case, test v1.37+ specifically and under load.
- Batched pipeline throughput lags Valkey. If your workload is heavily pipelined batch operations, Valkey may outperform Dragonfly [3].
- Cluster mode less mature than Redis Cluster. Single-instance vertical scaling is the proven story; cluster mode for horizontal scaling is still catching up.
- Not useful for small workloads. The architectural advantages only materialize when you have enough concurrent requests to benefit from parallelism. A dev team’s staging Redis cache on a 2-core VPS won’t notice a difference.
- Managed cloud pricing requires direct conversation for large deployments. The calculator is illustrative; real enterprise quotes involve sales.
Who should use this / who shouldn’t
Use Dragonfly if:
- You’re running Redis at scale and hitting throughput limits on your current instance size.
- You’re paying $500+/mo for Redis Cloud, ElastiCache, or a Redis Cluster and want to validate whether fewer, larger Dragonfly instances can replace it at lower cost.
- You’re a backend engineer comfortable with infrastructure who wants to self-host and run benchmarks against your actual workload patterns (a starting-point sketch follows this list).
- Memory efficiency matters — you’re in an environment where RAM is the binding cost.
- You want full Redis API compatibility with no client code changes.
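The benchmark mentioned above can start as small as the probe below. A single-connection Python loop is only directionally useful, since one client cannot saturate a multi-threaded server; for real numbers, use a load tool such as memtier_benchmark at production-like concurrency:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)

# Crude single-connection probe: directionally useful, nothing more.
# One Python client cannot saturate a multi-threaded server, so for
# real numbers use a load tool such as memtier_benchmark at a
# concurrency level that mirrors your production traffic.
N = 100_000
start = time.perf_counter()
for i in range(N):
    r.set(f"bench:{i % 1000}", "x")
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} ops/sec (single client, no pipelining)")
```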
Skip it (stay on Redis or switch to Valkey) if:
- You’re running a small-scale deployment where the throughput ceiling isn’t a real problem today.
- You need a genuinely open-source license (MIT/Apache/BSD) for legal or compliance reasons — Valkey is the correct answer here.
- Your workload is dominated by pipelined batch operations — benchmark Valkey first [3].
- You depend heavily on Redis Cluster features and can’t afford migration risk.
Skip it (stay on managed cloud) if:
- Your team has no infrastructure engineering capacity and the ops burden of self-hosting any database isn’t acceptable.
- You need SLA-backed uptime guarantees and 24x7 on-call coverage you don’t have to provide yourself.
Skip it (Dragonfly Cloud instead of self-hosted) if:
- The throughput and cost savings math works but you don’t want to manage the instance — their managed tier is purpose-built for exactly this case.
Alternatives worth considering
- Redis — the incumbent. Single-threaded, proven at scale, massive ecosystem, largest client library support. Changed to SSPL/RSALv2 license in 2024. If you’re not hitting throughput walls, there’s no pressing reason to migrate.
- Valkey — the Linux Foundation Redis fork post-license-change. BSD-3 licensed, genuinely open source, fully Redis-compatible. Recent benchmarks [3] show Valkey ahead of Dragonfly in batched throughput. The correct choice if license purity is the primary concern.
- KeyDB — older multi-threaded Redis fork, simpler threading model than Dragonfly, open source. Less memory-efficient in benchmarks [3] but more conservative in its architecture changes.
- Memcached — still a valid choice for pure LRU caching with no data structure needs. Simpler, lower overhead, doesn’t compete with Dragonfly on features but wins on operational simplicity for basic caching.
- AWS ElastiCache / Google Cloud Memorystore / Azure Cache for Redis — managed Redis you don’t touch. Higher cost, lower ops burden. The obvious comparison point for Dragonfly Cloud pricing.
- Garnet (Microsoft) — newer entry from Microsoft Research, RESP-compatible, MIT licensed. Early but worth watching.
The realistic decision for most teams is Dragonfly vs Valkey vs managed Redis. Valkey if open source matters. Dragonfly if throughput at scale and memory efficiency matter. Managed Redis if you just want it to work without thinking about it.
Bottom line
Dragonfly does what it says. The multi-threaded architecture genuinely outperforms Redis at scale, third-party benchmarks corroborate the direction even when they quibble with the magnitude, and the drop-in compatibility claim is real enough that migration risk is low. The case is strongest if you’re already spending meaningful money on Redis infrastructure and hitting throughput limits — the potential 80% cost reduction at the high end isn’t fiction, though it requires scale to materialize.
The honest caveats: it’s not open source (BSL 1.1; source available, like post-2024 Redis), Pub/Sub and batch pipeline workloads need specific testing, and small-scale deployments won’t see meaningful benefit. If license purity matters, Valkey is the correct answer. If you want maximum throughput on a fixed hardware budget and you have the engineering capacity to run benchmarks and own the migration, Dragonfly is worth a serious evaluation.
If you want Dragonfly Cloud running in your infrastructure without spending a week on setup, that’s exactly what upready.dev deploys for clients.
Sources
- [1] Mohit Dehuliya, Medium: “DragonflyDB vs Redis: A Deep Dive towards the Next-Gen Caching Infrastructure” (July 13, 2024). https://medium.com/@mohitdehuliya/dragonflydb-vs-redis-a-deep-dive-towards-the-next-gen-caching-infrastructure-23186397b3d3
- [2] Vikash Gusain, AurigaIT: “Dragonfly DB Over Redis: The Future of In-Memory Datastores” (February 24, 2025). https://aurigait.com/blog/dragonfly-db-over-redis/
- [3] RepoFlow Team: “Redis vs Valkey vs DragonflyDB vs KeyDB Benchmarks” (March 20, 2026). https://www.repoflow.io/blog/redis-vs-valkey-vs-dragonflydb-vs-keydb-benchmarks
Primary sources:
- GitHub repository: https://github.com/dragonflydb/dragonfly (30,185 stars)
- Official website: https://www.dragonflydb.io
- Pricing calculator: https://www.dragonflydb.io/pricing
- Documentation: https://dragonflydb.io/docs