
RustFS

High-performance S3-compatible distributed object storage written in Rust.

Self-hosted object storage, honestly reviewed. Built in Rust, positioned as the license-clean answer to MinIO’s exit from the open-source commons.

TL;DR

  • What it is: Open-source (Apache 2.0) S3-compatible distributed object storage built in Rust — positioned as a drop-in replacement for MinIO now that MinIO has effectively closed its community edition [3][4].
  • Who it’s for: Home lab operators, self-hosted infrastructure teams, and AI/ML engineers who need S3-compatible storage without AGPL license exposure and without paying AWS prices [1][3].
  • Cost savings: Amazon S3 pricing starts at $0.023/GB/month and compounds with request fees and egress charges. RustFS self-hosted runs on your own hardware or a $10–20/mo VPS with no per-request fees [1].
  • Key strength: Apache 2.0 license with full S3 API compatibility — the combination that MinIO used to offer and no longer does. Benchmark claims 2.3x MinIO throughput on 4KB small-object workloads [README][1].
  • Key weakness: Still explicitly beta. Distributed mode, lifecycle management, and KMS are listed as “under testing” in the feature table. The project is compelling, but not a default production recommendation yet [1][4].

What is RustFS

RustFS is a distributed object storage system written entirely in Rust. The premise is simple: take the operational model that made MinIO popular — S3-compatible API, easy Docker deployment, no metadata coordinator — and rebuild it in a language with no garbage collector and with memory-safety guarantees baked in at compile time [README][3].

The pitch on GitHub is blunt: “2.3x faster than MinIO for 4KB object payloads.” That benchmark covers small-object throughput specifically, on a 2-core Xeon setup with four 40GB drives and 15Gbps networking. It’s a real benchmark with a real hardware table, not a marketing assertion [README]. What it doesn’t claim — and this matters — is equivalent performance on large sequential reads, where MinIO still has the clearer edge [1].
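
If you want to sanity-check the small-object claim on your own hardware, the probe is easy to write. A minimal single-threaded sketch using boto3, where the endpoint, port, credentials, and bucket name are all placeholder assumptions rather than RustFS defaults (the README benchmark used concurrent load, so treat this as a smoke-level probe, not a reproduction):

```python
# Minimal 4KB small-object PUT probe against an S3-compatible endpoint.
# Endpoint, credentials, and bucket name are placeholders -- substitute
# whatever your own deployment uses.
import os
import time

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # assumed local endpoint
    aws_access_key_id="rustfsadmin",        # placeholder credentials
    aws_secret_access_key="rustfsadmin",
    config=Config(s3={"addressing_style": "path"}),  # common for self-hosted S3
)

BUCKET = "bench"
s3.create_bucket(Bucket=BUCKET)

payload = os.urandom(4096)  # 4KB object, matching the README benchmark size
n = 1000

start = time.monotonic()
for i in range(n):
    s3.put_object(Bucket=BUCKET, Key=f"obj-{i}", Body=payload)
elapsed = time.monotonic() - start

print(f"{n} x 4KB PUTs in {elapsed:.1f}s -> {n / elapsed:.0f} ops/s")
```

Run the same script against MinIO on the same hardware and you have a like-for-like number for your own workload, which is worth more than anyone's published table.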

The project sits at 23,369 GitHub stars, was recognized by Runa Capital’s ROSS Index as one of the fastest-growing open-source startups in Q4 2025, and ships support for S3 core features, object versioning, event notifications, bucket replication, OpenStack Swift API, Keystone authentication, Kubernetes Helm charts, and multi-tenancy — all marked as available today [README]. Lifecycle management, full distributed mode, and RustFS KMS are listed as under testing [README].

The license is the other thing people keep coming back to: Apache 2.0. Not AGPL. Not “Fair-code.” Apache 2.0, which means you can self-host it, embed it in a commercial product, distribute it, and modify it without opening your own source code [README][2]. That’s the same license MinIO used to have before they changed it.


Why people choose it

There’s a consistent thread across every article that covers RustFS: people aren’t choosing it because it’s technically superior to everything else. They’re choosing it because MinIO changed [3][4].

Brandon Lee at Virtualization Howto [3] captures the sentiment clearly. He had been running MinIO on a Synology NAS for S3-compatible backups and explains his move to RustFS: “With MinIO going the way of closed source, RustFS caught my attention.” The Reddit thread he links — “MinIO is no longer open source – who is replacing it? : r/sysadmin” — is a reliable proxy for the frustration. MinIO went AGPL, gutted functionality from the community release, and then gradually moved its maintainer focus to commercial offerings. RustFS stepped into the gap.

The Milvus team puts it more technically [4]: for their AI vector database, object storage is the persistence layer for binlogs, index files, and segment data. MinIO was the default backend. When MinIO updated its GitHub README to state it was no longer accepting new changes, the Milvus community needed a credible path. RustFS, along with Ceph RGW and SeaweedFS, is one of the backends they evaluated as an alternative.

The AutoMQ partnership [2] shows a different angle — a cloud-native Kafka alternative partnering with RustFS because it needs a license-clean S3-compatible backend for its compute-storage separation architecture. AutoMQ explicitly frames the problem: “lightweight solutions use the AGPL license, posing potential compliance risks and restrictions for enterprise commercial use.” Apache 2.0 removes that risk entirely.

What the Sealos blog [1] adds is nuance. It frames three specific scenarios where teams are considering RustFS:

  1. You need an Apache 2.0 MinIO alternative for dev, staging, or internal pilots.
  2. Your workload is small-object or resource-constrained (where the benchmark claim is most credible).
  3. You need to test a modern Rust-based system, not just inherit a decade-old Go codebase.

Where it explicitly pumps the brakes: “You need a storage layer that is already broadly battle-tested in production” — in that case, the guidance from Sealos is “prefer a more mature option today” [1].


Features

Based on the README and feature table:

Storage fundamentals (available now):

  • Full S3 Core API: upload, download, list, delete, multipart, presigned URLs [README] (presigning is sketched after this list)
  • Object versioning [README]
  • Bitrot protection [README]
  • Event notifications [README]
  • Bucket replication [README]
  • Multi-tenancy [README]
  • Single-node mode [README]
  • Logging [README]
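
Presigned URLs and versioning are the two items on this list that application code touches most directly, and both go through the standard S3 surface. A minimal sketch with boto3, again with placeholder endpoint and credentials:

```python
# Enable versioning on a bucket and mint a time-limited download link.
# Endpoint, credentials, bucket, and key are illustrative placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # assumed local endpoint
    aws_access_key_id="rustfsadmin",        # placeholder credentials
    aws_secret_access_key="rustfsadmin",
    config=Config(s3={"addressing_style": "path"}),
)

# Standard S3 versioning toggle -- overwrites keep prior versions around
s3.put_bucket_versioning(
    Bucket="media",
    VersioningConfiguration={"Status": "Enabled"},
)

# SigV4 presigning happens client-side, so it works against any
# S3-compatible backend that validates the same signatures
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "media", "Key": "report.pdf"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```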

Protocol compatibility:

  • S3 API — drop-in for AWS SDK, AWS CLI, and anything that already works with MinIO [README][4]
  • OpenStack Swift API [README]
  • OpenStack Keystone authentication with X-Auth-Token headers [README] (the token flow is sketched after this list)
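
The Keystone flow is the standard OpenStack one: authenticate against Keystone v3, pull the token from the X-Subject-Token response header, then send it as X-Auth-Token on Swift-style requests. A rough sketch with requests; the URLs, account segment, project, and credentials below are illustrative OpenStack conventions, not confirmed RustFS defaults:

```python
# Sketch of the standard Keystone v3 + Swift token flow. Every URL,
# name, and credential here is an illustrative placeholder, not a
# documented RustFS default.
import requests

KEYSTONE = "http://keystone.example.com:5000"  # assumed Keystone endpoint
SWIFT = "http://rustfs.example.com:9000"       # assumed Swift endpoint

# 1. Authenticate against Keystone v3; the token comes back in the
#    X-Subject-Token response header (standard OpenStack behavior).
auth = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {
                "name": "demo",
                "domain": {"id": "default"},
                "password": "secret",
            }},
        },
        "scope": {"project": {"name": "demo", "domain": {"id": "default"}}},
    }
}
resp = requests.post(f"{KEYSTONE}/v3/auth/tokens", json=auth, timeout=10)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]

# 2. Swift-style calls carry the token as X-Auth-Token. The account
#    segment (AUTH_...) varies by deployment.
headers = {"X-Auth-Token": token}
requests.put(
    f"{SWIFT}/v1/AUTH_demo/backups/hello.txt",
    data=b"hello", headers=headers, timeout=10,
).raise_for_status()
print(requests.get(f"{SWIFT}/v1/AUTH_demo/backups",
                   headers=headers, timeout=10).text)
```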

Deployment:

  • Docker and Docker Compose (single-node quickstart in the README)
  • Kubernetes via Helm charts [README]
  • apt package installation
  • Redis dependency (bundled or external)

Still under testing (not production-ready):

  • Distributed mode — the multi-node horizontal scaling story isn’t done [README]
  • Lifecycle management — automated expiration and tiering policies [README]
  • RustFS KMS — key management for encryption at rest [README]
  • Swift metadata operations — partial support [README]

AI/ML positioning:

  • Data lake support with optimization for high-throughput workloads [README]
  • Used as object storage backend for Milvus vector database [4]
  • AutoMQ integration for Diskless Kafka on top of object storage [2]
  • Compatible with any system that targets the S3 API: Velero, Longhorn, Restic, and most Kubernetes backup tools [3]

The feature gap that matters most for production use is the distributed mode being under testing. Single-node works. Scaling horizontally while maintaining consistency and fault tolerance — the part that matters when your storage node fails — is still being worked through [README][1].


Pricing: SaaS vs self-hosted math

Amazon S3 (the SaaS you’re escaping):

  • Storage: $0.023/GB/month (us-east-1 standard tier)
  • PUT/COPY/POST requests: $0.005 per 1,000 requests
  • GET requests: $0.0004 per 1,000 requests
  • Egress (data transfer out): $0.09/GB for the first 10TB/month
  • No fixed monthly fee — bills compound as you grow

A team storing 5TB of data and making moderate API calls easily pays $150–$300/month depending on egress patterns. At 50TB, storage alone runs roughly $1,200/month before requests and egress.

RustFS self-hosted:

  • Software: $0 (Apache 2.0) [README]
  • VPS or bare metal: your cost, zero per-request fees
  • A 3-node cluster on Hetzner dedicated servers: roughly $80–150/month with several TB of raw NVMe storage (usable capacity is lower once replicated, and multi-node deployment waits on distributed mode; see below)

The honest math: If you’re paying AWS for 10TB of storage plus egress and moderate request volume, you’re likely spending $300–600/month. Self-hosting RustFS on three Hetzner AX41 servers (64GB RAM, 2× 512GB NVMe, ~$60 each) costs $180/month for roughly 3TB of raw NVMe, or about 1–1.5TB usable once replicated — size the drives to your dataset, and you’re not paying per GET request [1].

The caveat: this only makes sense at meaningful data volumes. If you’re storing 20GB, stay on S3. The crossover where self-hosting wins on pure cost is typically somewhere around 2–5TB, depending on your access patterns [1].
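
You can model your own crossover with the rates quoted above. A rough sketch, assuming a flat $180/month self-hosted cost and that you read back 25% of stored data each month; both numbers are assumptions to replace with your own:

```python
# Back-of-envelope S3 vs. self-hosted crossover using the rates quoted
# above. The flat self-hosted cost and the 25% monthly egress ratio are
# illustrative assumptions -- plug in your own.
S3_STORAGE_PER_GB = 0.023   # USD per GB-month, us-east-1 standard
S3_EGRESS_PER_GB = 0.09     # USD per GB, first-10TB tier
SELF_HOSTED_FLAT = 180.0    # USD per month, e.g. three ~$60 servers
EGRESS_RATIO = 0.25         # fraction of stored data read out per month

def s3_monthly(storage_gb: float) -> float:
    """Simplified monthly S3 bill: storage plus egress, ignoring request fees."""
    return storage_gb * (S3_STORAGE_PER_GB + EGRESS_RATIO * S3_EGRESS_PER_GB)

for tb in (1, 2, 5, 10):
    bill = s3_monthly(tb * 1024)
    winner = "self-host" if bill > SELF_HOSTED_FLAT else "managed S3"
    print(f"{tb:>3} TB: S3 ~${bill:,.0f}/mo vs ${SELF_HOSTED_FLAT:.0f}/mo flat -> {winner}")
```

With these assumptions the crossover lands between 2TB and 5TB, matching the guidance above; heavier egress pulls it lower, lighter egress pushes it higher.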


Deployment reality check

The Virtualization Howto article [3] is the only one that walks through an actual installation. The experience for a Docker-based single-node setup is straightforward — the repository ships a docker-compose-simple.yml that gets you running in minutes. The author’s verdict: it worked, S3 tools connected cleanly, and it served as a useful MinIO replacement in a home lab context [3].

What you actually need for a working setup:

  • A Linux host (VM or bare metal) with 4GB+ RAM for single-node
  • Docker and Docker Compose
  • An existing S3-compatible client (aws CLI, rclone, Cyberduck) to verify it works (a scripted smoke test is sketched after this list)
  • A reverse proxy (Caddy or nginx) for HTTPS if you’re exposing it externally
  • For multi-node: wait. Distributed mode is under testing [README]
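
The verification step is worth scripting so you can rerun it after every upgrade. A minimal round-trip smoke test with boto3; endpoint and credentials are placeholders for whatever your compose file configures:

```python
# Post-install smoke test: round-trip an object and verify the bytes.
# Endpoint and credentials are placeholders for your own deployment.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # assumed local endpoint
    aws_access_key_id="rustfsadmin",        # placeholder credentials
    aws_secret_access_key="rustfsadmin",
    config=Config(s3={"addressing_style": "path"}),
)

s3.create_bucket(Bucket="smoke-test")
s3.put_object(Bucket="smoke-test", Key="hello.txt", Body=b"hello rustfs")

# Read back and compare byte-for-byte before trusting the install
body = s3.get_object(Bucket="smoke-test", Key="hello.txt")["Body"].read()
assert body == b"hello rustfs", "read-back mismatch"

s3.delete_object(Bucket="smoke-test", Key="hello.txt")
s3.delete_bucket(Bucket="smoke-test")
print("S3 round-trip OK")
```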

What can go sideways:

  • Distributed mode being unfinished is a real constraint. If you need high availability across nodes, RustFS is not ready for that today [README][1].
  • The Sealos article [1] flags explicitly: “Public benchmark claims, roadmap items, and feature tables are useful inputs, but they are not a substitute for your own testing, failure drills, and compatibility checks.” Run your recovery path before you trust this with production data.
  • Lifecycle management being under testing means you’ll manage object expiration manually or via your application — there’s no automated tiering or expiry yet [README]. A stopgap sweep is sketched after this list.
  • The benchmark is for 4KB objects. Large sequential reads (backups, video, model checkpoints) show a different profile — MinIO retains an edge there [1].
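
Until lifecycle rules land, expiry has to live in your application or a cron job. A rough sketch of the kind of sweep that substitutes for an expiration policy; the bucket, age threshold, endpoint, and credentials are all placeholders:

```python
# Stopgap for missing lifecycle rules: delete objects older than a cutoff.
# Bucket name, age threshold, endpoint, and credentials are placeholders.
from datetime import datetime, timedelta, timezone

import boto3
from botocore.config import Config

MAX_AGE_DAYS = 30
cutoff = datetime.now(timezone.utc) - timedelta(days=MAX_AGE_DAYS)

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # assumed local endpoint
    aws_access_key_id="rustfsadmin",        # placeholder credentials
    aws_secret_access_key="rustfsadmin",
    config=Config(s3={"addressing_style": "path"}),
)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="logs"):
    expired = [
        {"Key": obj["Key"]}
        for obj in page.get("Contents", [])
        if obj["LastModified"] < cutoff   # LastModified is timezone-aware
    ]
    if expired:
        # DeleteObjects takes up to 1,000 keys; a list page is at most 1,000
        s3.delete_objects(Bucket="logs", Delete={"Objects": expired})
```

Run it from cron until the native lifecycle feature graduates from the under-testing column.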

For Milvus users specifically: the Milvus blog [4] frames RustFS as “still experimental” in that context and explicitly says the evaluation “is not a production recommendation.” They tested it for architecture compatibility, not for production suitability.

Realistic setup time: 20–30 minutes for a working single-node Docker instance following the README. Multi-node distributed deployment: not available yet in a stable form.


Pros and cons

Pros

  • Apache 2.0 license with no commercial restrictions. This is the core reason people look at RustFS. You can embed it in a commercial product, redistribute it, fork it, and build on it without triggering license obligations on your own code [README][1][2]. That’s what MinIO used to offer and no longer does.
  • S3 API compatibility is real. Standard AWS SDKs, the AWS CLI, rclone, Velero, and S3-compatible applications connect without modification [3][4]. It’s not aspirational compatibility — it works.
  • Small-object performance claim is specific and testable. The 2.3x throughput benchmark on 4KB payloads comes with a hardware table (CPU, RAM, drive count, IOPS, network). You can reproduce it [README].
  • Written in Rust. No garbage collector means no GC pause-induced latency spikes, the tail-latency behavior Go-based storage systems can exhibit under load, and memory safety is enforced at compile time rather than by a runtime [3][README].
  • Serious ecosystem partnerships. AutoMQ (cloud-native Kafka) and Milvus (vector database) are both working with RustFS as a backend [2][4]. These are real engineering organizations making real bets.
  • Helm charts available. Kubernetes deployment is not an afterthought [README].
  • Active development. 2,592 commits, monthly commit activity badges, ROSS Index recognition for Q4 2025 [README].

Cons

  • Distributed mode is under testing. Single-node works. The multi-node story — the part that gives you fault tolerance and horizontal scaling — isn’t production-ready [README][1]. If a single node fails and you haven’t replicated elsewhere, you lose data.
  • Lifecycle management is under testing. Automated expiration, tiering, and transition policies don’t work yet [README]. You manage object retention yourself.
  • Explicitly beta by its own advocates. The Sealos review [1] — one of the most thorough external write-ups — says “do not treat RustFS as a drop-in decision” and recommends against it for anyone “who needs a battle-tested production storage layer today.” That’s honest, and you should take it seriously.
  • Large-file read performance is not the strength. The benchmark covers 4KB objects. For large sequential reads — backups, video blobs, ML model weights — MinIO’s performance advantage still holds [1].
  • No KMS yet. Encryption key management is under testing. If your compliance requirements include encryption at rest managed through a KMS, that feature isn’t ready [README].
  • Younger codebase. MinIO has been in production deployments for years. RustFS has 2,592 commits and a fast growth trajectory, but it hasn’t been stress-tested by the breadth of environments that MinIO has.
  • Recovery paths are unproven at scale. The Sealos article [1] recommends failure testing, migration testing, and compatibility validation before trusting it with production data. Those are the right calls, and they take time.

Who should use this / who shouldn’t

Use RustFS if:

  • You were relying on MinIO’s Apache 2.0 community edition and need a license-clean replacement that works with the same SDKs and tools.
  • You’re running dev, staging, or internal tooling and want to test a modern S3-compatible backend without license anxiety.
  • Your workload skews toward small objects (AI metadata, thumbnails, events, logs) where the benchmark claim is most relevant.
  • You’re comfortable running a single-node setup and understand that distributed HA isn’t ready yet.
  • You want to build something on Apache 2.0 storage without taking a dependency on an AGPL system [1][2].

Wait until distributed mode ships if:

  • You need genuine high availability across nodes. Running a single point of failure as your storage layer is fine for a home lab; it’s not fine for anything you’d cry over losing [README][1].
  • Your production SLA requires the storage layer to survive a node failure automatically.

Skip it (use Ceph RGW instead) if:

  • You already run Ceph for block and file storage and the operational overhead of adding another storage system outweighs simplicity [3].
  • You need battle-tested distributed storage with years of production hardening behind it.

Skip it (use a managed S3 provider) if:

  • Your data volume is under 2TB and you’re not hitting per-request limits. The managed convenience wins at small scale [1].
  • You have no one to manage infrastructure. Self-hosted storage with a single node requires someone who will handle disk failures, upgrades, and monitoring.
  • Your compliance requirements need a vendor-backed SLA. Apache 2.0 doesn’t come with one.

Alternatives worth considering

  • MinIO (older open-source version): The AGPL-licensed community edition still exists and has years of production hardening. If AGPL doesn’t conflict with your use case, it’s the most mature self-hosted S3 option [3][4]. The catch is AGPL’s requirement to publish source code for any networked service built on it.
  • Ceph RGW (Rados Gateway): The S3-compatible interface layer on top of Ceph. Extremely powerful, mature, and battle-tested at scale. Operationally complex and resource-hungry — not a fit for a home lab or small team, but the right call for organizations already running Ceph [3][4].
  • SeaweedFS: Another open-source S3-compatible system, written in Go. More mature than RustFS, with a working distributed mode. Apache 2.0 license. Less momentum than RustFS right now, but more production-proven [4].
  • Garage: Minimalist, Rust-based distributed object storage. Designed specifically for geo-distributed setups. Apache 2.0. Smaller community than RustFS.
  • Amazon S3 / Cloudflare R2: If the point is to stop paying AWS egress fees, Cloudflare R2 (S3-compatible, zero egress fees, $0.015/GB/month storage) is worth evaluating before building self-hosted infrastructure. The exact threshold where self-hosting beats R2 depends heavily on your access patterns.
  • Backblaze B2: $0.006/GB/month storage, S3-compatible. Not self-hosted, but dramatically cheaper than AWS S3 and avoids infrastructure management entirely.

The realistic shortlist for someone leaving MinIO: RustFS for dev/staging and small-object workloads where the license matters and you can absorb beta risk, SeaweedFS or Ceph RGW if you need production-grade distributed storage today, or Cloudflare R2 / Backblaze B2 if managed is acceptable and the goal is purely cost reduction.


Bottom line

RustFS is the most credible answer to the question “what fills the hole MinIO left?” — but it’s an answer that comes with an important asterisk. The license is genuinely clean (Apache 2.0), the S3 compatibility is real, and the small-object performance story is backed by a reproducible benchmark. The Rust foundation removes the GC-pause tail latency that Go-based systems can exhibit under load. The ecosystem momentum is real: AutoMQ, Milvus, and a growing home lab community are all betting on it.

The asterisk is that distributed mode — the feature that makes self-hosted object storage genuinely fault-tolerant — is still under testing. Until that ships and gets field validation, RustFS is a serious dev/staging replacement and a compelling single-node home lab option, not a production storage layer you bet your business on. Check the GitHub feature table before deploying it. Run your own recovery tests. If those caveats fit your situation, it’s worth evaluating now. If they don’t, keep watching the project — the trajectory is promising and the licensing problem it solves is real.

If the setup is the blocker, that’s exactly what unsubbed.co’s parent studio upready.dev handles for clients. One-time deployment, you own the infrastructure.


Sources

  1. Sealos Blog, “What Is RustFS? Apache 2.0 MinIO Alternative (2026)” (last verified March 13, 2026). https://sealos.io/blog/what-is-rustfs/
  2. AutoMQ on Medium, “AutoMQ × RustFS: Building a new generation of low-cost, high-performance Diskless Kafka based on object storage” (November 20, 2025). https://medium.com/@AutoMQ/automq-rustfs-building-a-new-generation-of-low-cost-high-performance-diskless-kafka-based-on-efbb53cda6ca
  3. Brandon Lee, Virtualization Howto, “I Built My Own S3 Storage in My Home Lab (And It Actually Works)” (April 2, 2026). https://www.virtualizationhowto.com/2026/04/i-built-my-own-s3-storage-in-my-home-lab-and-it-actually-works/
  4. Min Yin, Milvus Blog, “MinIO Stops Accepting Community Changes: Evaluating RustFS as a Viable S3-Compatible Object Storage Backend for Milvus” (January 14, 2026). https://milvus.io/blog/evaluating-rustfs-as-a-viable-s3-compatible-object-storage-backend-for-milvus.md
