unsubbed.co

Garage

Garage is a self-contained, self-hosted distributed object storage server.

Distributed object storage, honestly reviewed. Not for everyone — but for the people it’s for, nothing else comes close.

TL;DR

  • What it is: AGPL-3.0 distributed object storage with full Amazon S3 API compatibility, designed for small-to-medium self-hosted deployments across multiple physical locations [website].
  • Who it’s for: Sysadmins and technically-minded founders who need durable, geo-distributed file storage without paying AWS S3 prices — and who have at least two or three machines to put it on.
  • Cost savings: AWS S3 bills $0.023/GB/month plus $0.0004 per 1,000 GET requests. A 500GB dataset runs you roughly $12–15/month on S3, plus egress. Garage on your own hardware costs $0 in licensing and runs on machines you already own [pricing comparison].
  • Key strength: Single dependency-free binary, serious redundancy model, genuinely low hardware requirements (1GB RAM), and compatibility with the existing S3 ecosystem — Nextcloud, Rclone, Mastodon, PeerTube, Matrix all work out of the box [website][5].
  • Key weakness: Not designed for single-server deployments as a primary use case (though it works). The real value shows only when you have nodes in multiple physical locations. There is no SaaS tier, no managed offering, and essentially no third-party reviews yet — you’re buying into a project backed by a small French cooperative, not a VC-funded company.

What is Garage

Garage is an S3-compatible distributed object storage server. You run it on your own machines, and applications that know how to talk to Amazon S3 can talk to it without modification.

The project comes from Deuxfleurs, described as “an experimental small-scale self-hosted service provider” that has been running Garage in production since 2020 — which means the project isn’t research vaporware, it’s infrastructure that people actually depend on [website][README].

The pitch is specific: storage clusters made of nodes at different physical locations, replicating data across them, staying available even when some servers go offline. This is not “run on one VPS and call it distributed.” The architecture assumes you have machines in multiple datacenters, colocations, or homes — and it makes that setup easy instead of painful [website].

As of this writing, Garage sits at 3,246 GitHub stars. The license is AGPL-3.0, which means the source stays open and anyone who runs a modified version as a network service must make their changes available — but you can self-host without any commercial agreement [merged profile].

Notably, the project has received sustained public funding: from NGI POINTER (2021–2022), NLnet/NGI0 Entrust (2023–2024), and NLnet/NGI0 Commons Fund (2025), covering the equivalent of 1–3 full-time engineers over multiple years [website]. This is meaningful for a small open-source project. It means the team isn’t burning personal savings to maintain it.


Why People Choose It

The only real-world first-person account we found is from Joel Oliveira, an engineering manager in Boston who was previously running MinIO and started looking for alternatives after the MinIO maintainers shifted licensing direction [1]. He writes: “With the velocity and severity of these changes, it made sense to take a look at something that doesn’t risk having an even more severe rug-pull moment.”

That’s the core reason people land on Garage: they were burned or scared by MinIO’s licensing pivot (from Apache 2.0 to AGPL, with commercial restrictions for certain use cases), and they want something with a stable, predictable open-source future [1].

Oliveira ran a single-node Garage instance using Docker Compose, paired with an open-source web UI (garage-webui), and found the setup straightforward. His experience: Docker Compose up, generate secrets with openssl rand, assign disk layout, done [1]. No complaints about the setup process in his writeup — notable because he’s the kind of person who’d say something if it was painful.

The other common motivation is price. AWS S3 is the default object storage for most SaaS applications, and it becomes expensive at scale. Backblaze B2 and Cloudflare R2 cut costs significantly, but they’re still third-party SaaS with egress fees. Garage is zero licensing cost on hardware you own, with no per-request charges and no egress fees to yourself [pricing comparison].

The S3 API compatibility matters more than it sounds. Every tool in the ecosystem — Rclone, awscli, MinIO Client (mc), s5cmd, Cyberduck — already works with S3 and therefore already works with Garage. Nextcloud, Mastodon, PeerTube, and Matrix all have S3 backend support. You don’t need to learn a new protocol or adapt your existing tooling [website][1].
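To make that concrete, pointing Rclone at a Garage endpoint takes nothing more than a standard S3 remote definition. The snippet below is a sketch with assumed values (the remote name, localhost endpoint, and placeholder keys are all hypothetical); the region must match whatever `s3_region` is set to in your Garage config, which is `garage` by default in the official quick-start:

```ini
# ~/.config/rclone/rclone.conf — hypothetical remote named "garage"
[garage]
type = s3
provider = Other
endpoint = http://localhost:3900
access_key_id = GK...        # key ID created with the garage CLI
secret_access_key = ...      # corresponding secret
region = garage              # must match s3_region in garage.toml
```

After that, `rclone ls garage:my-bucket` works exactly as it would against AWS.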


Features

Core storage:

  • Full Amazon S3 API compatibility — buckets, objects, multipart uploads, presigned URLs [website][5]
  • Configurable replication factor: data is replicated across zones (physical locations) of your choosing [website]
  • Runs as a single dependency-free binary on Linux (x86_64, ARMv7, ARMv8) [website]
  • Docker images available (dxflrs/garage) [1][5]
  • Configuration via a single TOML file [5]
  • SQLite (default) or other database backends for metadata [5]

Cluster and redundancy:

  • Nodes can be spread across datacenters, homes, or colocation facilities — designed for WAN operation with up to 200ms latency [website]
  • Zone-based layout: you assign nodes to named zones, and Garage spreads the replicas of each object across distinct zones [website]
  • Automatic failover — the cluster stays readable and writable when nodes are unreachable [website]
  • Layout versioning: changes to cluster topology are applied as versioned operations [5]

S3 web hosting:

  • Separate endpoint for hosting static websites directly from buckets [5]
  • index.html serving and configurable root domain [5]

Admin API:

  • REST admin API on a separate port (3903 by default) — separate from the S3 API, protected by its own token [5]
  • Prometheus-compatible metrics endpoint [5]
  • garage CLI for cluster management, key creation, bucket management, layout operations [1][5]
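As an illustration, the admin endpoints are plain HTTP with bearer-token auth. The two calls below are a sketch: the port is the documented default, and the exact route set varies between admin API versions, so treat the paths as assumptions to verify against the docs for your release:

```shell
# health check — served on the admin port (3903 by default)
curl http://localhost:3903/health

# Prometheus metrics — protected by the metrics token from garage.toml
curl -H "Authorization: Bearer $METRICS_TOKEN" http://localhost:3903/metrics
```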

Compatible applications (officially listed): Nextcloud, Matrix (as media backend), Cyberduck, Mastodon, Rclone, PeerTube [website]. Any S3-compatible application works.

What it doesn’t have: No web dashboard built in (there is a community-built garage-webui that works well for basic management [1]). No built-in identity federation, no IAM policies at the complexity level of AWS, no object lifecycle rules as of the current version.


Pricing: SaaS vs. Self-Hosted Math

Garage has no SaaS tier. This is not a hosted product. The relevant comparison is what you’re currently paying for object storage versus running Garage yourself.

AWS S3 pricing (US East, standard tier):

  • Storage: $0.023/GB/month
  • GET requests: $0.0004 per 1,000
  • PUT requests: $0.005 per 1,000
  • Egress to internet: $0.09/GB after first 100GB free

500GB stored + 50GB monthly egress (covered by the 100GB free tier) + typical request volume: roughly $12–15/month. 5TB stored: roughly $115/month before egress.
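As a sanity check on those figures, the arithmetic is easy to script. This sketch uses assumed volumes (5TB stored, 200GB egress) with the list prices above, and ignores per-request charges:

```shell
# rough AWS S3 monthly bill: storage + egress (request charges omitted)
awk 'BEGIN {
  storage = 5000 * 0.023          # 5TB at $0.023/GB/month
  egress  = (200 - 100) * 0.09    # first 100GB of egress is free
  printf "storage $%.2f + egress $%.2f = $%.2f/month\n", storage, egress, storage + egress
}'
```

Swap in your own numbers; the shape of the result is what matters, not the exact totals.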

Backblaze B2:

  • Storage: $0.006/GB/month
  • Free egress to Cloudflare partners
  • 5TB stored: ~$30/month

Cloudflare R2:

  • Storage: $0.015/GB/month, free egress
  • 5TB stored: ~$75/month

Garage self-hosted:

  • Licensing: $0 (AGPL-3.0)
  • Hardware: whatever machines you already run, or second-hand servers at $50–200 one-time
  • Electricity and bandwidth: depends on your setup
  • Minimum useful deployment: two or three machines in different locations

The math only works if you already have the machines. If you’re buying dedicated servers just for Garage, the economics get complicated for small datasets. For 500GB on a machine you’re already running for other purposes, the marginal cost is essentially zero — which is the scenario Garage is designed for [website].


Deployment Reality Check

Joel Oliveira’s single-node setup from his October 2025 writeup [1] is the most practical deployment reference available. Here’s what the process actually looks like:

Step 1: Write a garage.toml config file. Specify metadata directory, data directory, replication factor, and RPC/S3/admin bind addresses. Generate secrets with openssl rand -hex 32 [1][5].
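A minimal config along those lines looks like this. It is a sketch based on the official quick-start: paths, the region name, and exact field names may differ between Garage versions (older releases used `replication_mode` instead of `replication_factor`), so check the docs for the release you deploy:

```toml
# garage.toml — single-node sketch; generate secrets with `openssl rand -hex 32`
metadata_dir = "/var/lib/garage/meta"
data_dir     = "/var/lib/garage/data"
replication_factor = 1               # no redundancy: testing only

rpc_bind_addr = "[::]:3901"
rpc_secret    = "<output of openssl rand -hex 32>"

[s3_api]
s3_region     = "garage"
api_bind_addr = "[::]:3900"
root_domain   = ".s3.garage.localhost"

[s3_web]
bind_addr   = "[::]:3902"
root_domain = ".web.garage.localhost"

[admin]
api_bind_addr = "[::]:3903"
admin_token   = "<output of openssl rand -hex 32>"
```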

Step 2: Run docker compose up -d. The image is dxflrs/garage:v2.1.0 (check current releases). Four ports: 3900 (S3 API), 3901 (RPC), 3902 (S3 web hosting), 3903 (admin API) [1][5].
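A compose file for that step might look like the following. This is a sketch, not a reference deployment: pin the image tag to the current release and adjust the volume paths to wherever you keep the config and data:

```yaml
# docker-compose.yml — single-node sketch
services:
  garage:
    image: dxflrs/garage:v2.1.0      # check current releases
    restart: unless-stopped
    volumes:
      - ./garage.toml:/etc/garage.toml
      - ./meta:/var/lib/garage/meta
      - ./data:/var/lib/garage/data
    ports:
      - "3900:3900"   # S3 API
      - "3901:3901"   # RPC (node-to-node)
      - "3902:3902"   # S3 web hosting
      - "3903:3903"   # admin API
```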

Step 3: Assign layout. This is the non-obvious step. After the container starts, you get the node ID from meta/node_key.pub, then run garage layout assign <node-id> -z dc1 -c 10G to tell Garage how much disk space to use, then garage layout apply --version 1 [1].
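Sketched as CLI calls, the layout sequence looks like this (the node ID can also be read with `garage status`; the zone name and capacity here are example values):

```shell
garage status                                  # lists nodes and their IDs
garage layout assign <node-id> -z dc1 -c 10G   # zone "dc1", 10GB of disk
garage layout show                             # review the staged change
garage layout apply --version 1                # commit it
```

The two-phase show/apply flow is deliberate: layout changes are staged and versioned so a multi-node cluster can review a topology change before committing it.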

Step 4: Create buckets and keys. Via the admin API, the garage CLI, or a UI like garage-webui [1].
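Via the CLI, that step is roughly three commands. The bucket and key names below are hypothetical, and the subcommand names follow the current quick-start (some older releases used `garage key new`):

```shell
garage bucket create my-bucket
garage key create my-app-key                   # prints the key ID and secret
garage bucket allow --read --write my-bucket --key my-app-key
```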

Step 5: Connect an S3 client. awscli, Rclone, mc, s5cmd — all work with the credentials you created [1].
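With awscli, for example, the only non-default setting is the endpoint, plus a region matching `s3_region` from your config (`garage` by default). The credentials and bucket name below are assumed placeholders:

```shell
export AWS_ACCESS_KEY_ID="GK..."               # from key creation
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="garage"

aws --endpoint-url http://localhost:3900 s3 ls
aws --endpoint-url http://localhost:3900 s3 cp backup.tar.gz s3://my-bucket/
```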

Oliveira’s estimate (implied by his writeup): under an hour for a technical user doing a single-node instance. For a multi-node geo-distributed setup, budget a full afternoon and read the official cluster documentation carefully.

What can go wrong:

  • The layout step is easy to miss. Garage starts but refuses to store data until disk space is explicitly allocated. This surprises first-time users.
  • Single-node deployments have replication_factor = 1, which means no redundancy. Garage will warn you. That’s by design for testing; don’t run this in production [5].
  • WAN deployments require the rpc_public_addr to be correctly set to your public IP for each node — if you misconfigure this, nodes can’t find each other across networks [5].
  • The garage-webui (community project by khairul169) is a separate container, not an official Garage component. It works, but don’t confuse it for an officially supported dashboard [1].
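The WAN pitfall above comes down to two lines of config per node. A sketch, using a documentation-range IP as the example address:

```toml
# in each node's garage.toml
rpc_bind_addr   = "[::]:3901"            # where this node listens
rpc_public_addr = "203.0.113.10:3901"    # the address OTHER nodes use to reach it
```

Behind NAT, `rpc_public_addr` must be the externally reachable address with port 3901 forwarded; if it points at a private IP, remote nodes will silently fail to join.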

Hardware minimums (from the website):

  • CPU: any x86_64 from the last 10 years, or ARMv7/ARMv8
  • RAM: 1GB
  • Disk: at least 16GB
  • Network: ≤200ms latency, ≥50Mbps [website]

These are genuinely low. A Raspberry Pi 4 can participate in a Garage cluster. A repurposed home server works fine.


Pros and Cons

Pros

  • Stable open-source alternative to MinIO. AGPL-3.0 with no commercial licensing complications. The MinIO licensing drift is what pushed people to look for this [1].
  • Actually designed for geo-distribution. Most S3-compatible alternatives (MinIO in single-node mode, SeaweedFS) are designed for a single datacenter. Garage’s entire architecture assumes nodes at different physical locations, across a WAN, with high latency between them [website].
  • Minimal hardware requirements. 1GB RAM, any ARMv7+ chip. Runs on hardware you already have, not hardware you need to buy [website].
  • Single binary. No dependency management, no runtime environment, no package conflicts. Copy the binary, write a config file, run it [website].
  • S3 API ecosystem compatibility. Nextcloud, Mastodon, PeerTube, Rclone, any S3-aware application — it all works without modification [website][1].
  • Sustained public funding. Three separate NLnet/NGI grants through 2025 means full-time developers paid to maintain this. Not a side project at risk of abandonment [website].
  • Heterogeneous hardware support. You can mix old x86 machines with ARM boards in the same cluster [website].

Cons

  • Not for single-server deployments. The replication model requires multiple nodes in different zones to deliver its value. A single-node Garage instance is technically valid but you lose everything that makes Garage better than just running MinIO [website][5].
  • No managed/SaaS option. If you want the benefits of Garage without running it yourself, there is no cloud offering. You’re fully responsible for operations.
  • Very limited third-party documentation. One real user review found [1]. Compare this to MinIO, which has thousands of tutorials, Stack Overflow threads, and YouTube videos. You’ll be reading the official docs (which are good) or debugging alone.
  • AGPL license implications for application developers. If you’re embedding Garage in a SaaS product you distribute, AGPL’s copyleft requirements apply. For pure self-hosting this doesn’t matter, but if you’re building a product around Garage, read the license carefully.
  • No built-in web UI. The community garage-webui project exists and works, but it’s not official [1].
  • Small community relative to MinIO or Ceph. 3,246 GitHub stars vs MinIO’s 52,000+. Less Stack Overflow help, fewer integrations documented in the wild.

Who Should Use This / Who Shouldn’t

Use Garage if:

  • You have two or more machines in different physical locations and want durable, replicated object storage across them.
  • You left MinIO because of licensing changes and want something with a stable AGPL foundation and no commercial upsell risk.
  • You’re running self-hosted applications (Nextcloud, Mastodon, PeerTube, Matrix) that need an S3-compatible backend and you want to own that backend.
  • Your hardware is constrained — ARM boards, old machines, low RAM — and you need something that runs on what you have.
  • You’re a sysadmin comfortable reading documentation and debugging network configuration without a community tutorial for every step.

Skip it (use MinIO in single-node mode) if:

  • You only have one server and you need basic object storage without replication. MinIO single-node is simpler to set up for this case, has a better built-in web UI, and has vastly more documentation.

Skip it (use Ceph) if:

  • You’re running large-scale enterprise storage (dozens of nodes, petabytes of data) where Ceph’s maturity and operational tooling are worth the complexity.

Skip it (use Cloudflare R2 or Backblaze B2) if:

  • You don’t have machines to run Garage on. Managed S3-compatible storage at $0.006–0.015/GB is still much cheaper than AWS S3 and removes all operational burden.

Skip it (stay on AWS S3) if:

  • You need IAM policies, bucket notifications, S3 Select, Lambda triggers, or the full AWS feature surface. Garage implements core S3, not the full service.

Alternatives Worth Considering

  • MinIO — the obvious comparison. More stars (52K+), better web UI, more documentation, runs well on a single node. The Apache → AGPL license pivot is what drove some users to Garage. If licensing doesn’t concern you, MinIO is the easier path for most deployments.
  • SeaweedFS — another Go-based distributed file store with S3 compatibility. Designed more for high-throughput single-datacenter deployments. Different architecture, different trade-offs.
  • Ceph (RadosGW) — the enterprise-grade option. Full S3 and Swift API compatibility, handles petabyte scale, but operationally complex. Not the right tool for small clusters.
  • Backblaze B2 — managed S3-compatible storage at $0.006/GB. If you’re not already running servers, this is almost certainly cheaper than buying hardware for Garage.
  • Cloudflare R2 — $0.015/GB with zero egress fees. Better economics than AWS S3 for most workloads, no operational overhead.
  • AWS S3 — the incumbent. Largest feature set, largest ecosystem, most expensive at scale, no self-hosting option.

For a non-technical founder looking at this list: Garage is not for you directly. It’s for the person you’d hire to run your infrastructure. The real decision for founders is usually Backblaze B2 or Cloudflare R2 vs AWS S3, not whether to self-host a distributed object store.


Bottom Line

Garage is a niche tool solving a specific problem well: durable, geo-distributed object storage on hardware you control, at minimal cost, with full S3 API compatibility. If that’s your problem, Garage is probably the right answer. If you have machines in two different locations and you’re tired of paying AWS S3 or you migrated away from MinIO after the licensing change, Garage is worth the afternoon it takes to set up. The AGPL license is clean, the hardware requirements are genuinely low, and the project has sustained funding through 2025 — not a fly-by-night thing.

The honest caveat: the community is small, third-party documentation is sparse, and the tool’s value is basically zero if you only have one server. Garage shines in the specific setup it was built for. Outside that setup, reach for MinIO or a managed service instead.

If self-hosting this sounds right but the setup process is the blocker, that’s exactly what upready.dev deploys for clients — one-time setup, you own the infrastructure from day one.


Sources

  1. Joel Oliveira — “Self-hosting Garage for object storage” (Oct 26, 2025). https://joeloliveira.com/2025/10/26/self-hosting-garage-for-object-storage
  5. Garage HQ — “Quick Start” (official documentation). https://garagehq.deuxfleurs.fr/documentation/quick-start/
