unsubbed.co

MinIO

High-performance, S3-compatible object storage for AI, analytics, and cloud-native workloads. Deploy on-premises or in any cloud with a single binary.

Self-hosted object storage, honestly reviewed. Read this before you build your stack on it.

TL;DR

  • What it is: High-performance, S3-compatible object storage you run on your own servers — the open-source alternative to Amazon S3 [2][4].
  • Who it’s for: Engineers and technical founders who need S3-API-compatible storage on-premises, for AI/ML pipelines, backups, or large-scale file management — without paying AWS by the gigabyte.
  • The catch: The open-source community edition GitHub repository entered maintenance mode in late 2025. No new features, no new binary releases (source-only now), and only critical security fixes evaluated case-by-case. MinIO the company is pushing everyone toward their commercial “AIStor” product [1].
  • Cost savings vs AWS S3: Data not available for a precise apples-to-apples comparison since MinIO self-hosted pricing depends entirely on your hardware. The directional story is clear: your own VPS + MinIO = no per-GB egress fees, no request charges, no surprise bills.
  • Key strength: 60,000+ GitHub stars, S3 API parity, runs on anything from a Raspberry Pi to a 100-node cluster, sub-10ms latency claims, and deep integration with AI/ML tooling (PyTorch, Iceberg) [4][website].
  • Key weakness: The open-source project is effectively dead. Community edition is source-only with no new features. The company wants you on AIStor. If you adopt MinIO today, you’re adopting a product in sunset mode, not a growing ecosystem [1].

What is MinIO

MinIO is an S3-compatible object storage server written in Go. You point it at a directory (or a cluster of drives), and it exposes the same API that Amazon S3 uses — same SDKs, same tools, same bucket/object model, same presigned URLs. Any application that writes to S3 can be pointed at MinIO instead, usually by changing one environment variable [2][4].
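To make the "change one environment variable" claim concrete, here is a minimal sketch of how the switch looks in practice. The credentials, endpoint, and helper function are illustrative placeholders (boto3 itself is not required to follow the idea): in boto3-style clients, pointing at MinIO instead of Amazon S3 is a single `endpoint_url` setting.

```python
# Sketch: the only thing that changes between Amazon S3 and MinIO is the
# endpoint. Credentials and URLs below are placeholders, not real values.

def s3_client_config(endpoint_url=None):
    """Build the keyword arguments you would pass to boto3.client('s3').

    With endpoint_url=None the client targets Amazon S3; setting it to
    your MinIO server redirects every call without touching app code.
    """
    config = {
        "aws_access_key_id": "minioadmin",      # placeholder credential
        "aws_secret_access_key": "minioadmin",  # placeholder credential
    }
    if endpoint_url:
        config["endpoint_url"] = endpoint_url
    return config

aws_cfg = s3_client_config()                          # targets Amazon S3
minio_cfg = s3_client_config("http://localhost:9000")  # targets MinIO

# Same application code, different target:
# boto3.client("s3", **minio_cfg).upload_file(...)
print(minio_cfg["endpoint_url"])
```

The rest of the application — SDK calls, bucket names, presigned-URL logic — stays identical, which is the whole point of S3 API parity.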

The pitch has always been clean: run it yourself, pay nothing per GB, own your data. It became the default answer to “how do I do S3 without AWS” for Kubernetes clusters, on-prem AI pipelines, and self-hosted stacks everywhere. At 60,501 GitHub stars it was genuinely the most-adopted open-source object store [profile].

The license is AGPL-3.0, which matters: any derivative work you distribute must also be open-sourced under AGPL. For internal use (private deployments, not selling a service built on MinIO), this is generally fine. For companies embedding it in a commercial SaaS product, the AGPL creates real legal friction — which is exactly why MinIO’s enterprise tier exists [profile][README].

The painful update as of late 2025: the GitHub repository now carries a prominent notice that it is no longer maintained. The company’s strategy is to funnel users toward AIStor Free (a separately distributed, free-licensed community build) and AIStor Enterprise (paid, distributed, commercially supported). The README itself now says: “Alternatives: AIStor Free — Full-featured, standalone edition for community use (free license)” [README]. The open-source era of MinIO, at least in the classic sense, is over.


Why People Choose It (And Why They’re Now Reconsidering)

The original draw was straightforward, and most of it still explains why so much of the ecosystem standardized on MinIO:

Cost. AWS S3 pricing is not just storage — it’s storage plus egress plus request charges. For teams moving large datasets (AI training runs, video, backups), the egress fees alone can be brutal. MinIO on-prem means paying for hardware once and owning the pipe. A dev.to article from 2025 frames it plainly: no licensing fees compared to usage-based cloud pricing, deployment anywhere, full customization [2].

S3 API compatibility. This is the real moat. MinIO doesn’t claim to be “S3-like” — it implements the actual S3 API. That means boto3, s3fs, PyArrow, MinIO’s own SDKs, and every tool in the AI/ML ecosystem that speaks S3 works out of the box [2][4]. You don’t rewrite your application; you change the endpoint URL.

Performance. The Geekflare review from earlier testing cites read/write speeds around 170GB/s in clustered configurations — numbers that only matter at scale but signal that the architecture is genuinely optimized for throughput [4]. The MinIO website now claims “19.2 TiB/s” saturating hardware at exabyte scale [website], which should be taken as marketing headroom rather than your dev server benchmark.

Data sovereignty. For EU companies under GDPR, or any organization with compliance requirements around data location, self-hosted MinIO answers the question cleanly: your data never leaves your infrastructure [2].

The growing concern: open governance. After MinIO’s maintenance mode announcement, the InfoQ piece summed up the community reaction well. Senior DevOps engineer Alexey Minin put it directly: “In 2025, ‘Open Source’ isn’t enough. We need to look for Open Governance. Projects backed by foundations like CNCF or Apache offer better protection against such abrupt transitions.” Developer Mangla Ram Choudhary called it the end of MinIO as the default open-source S3 engine [1].

A real-world bug report in the dev.to comments section adds a data point worth noting: one developer reported data loss from a tiering feature bug, which triggered discussion about the reliability trade-offs of running open-source storage in production [2]. Storage bugs are in a different severity category than workflow automation bugs. Data loss is not recoverable.


Features

Based on the README, website, and third-party coverage:

Core storage:

  • S3 API compatibility — buckets, objects, versioning, presigned URLs, multipart upload [2][4][README]
  • Erasure coding for data protection (data survives drive failures) [4]
  • Built-in encryption: AES-CBC, AES-256-GCM, ChaCha20 [4]
  • Event notifications for bucket/object changes [4]
  • Federation across clusters using etcd and CoreDNS [4]
  • Multi-tenancy support [2]
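The erasure-coding bullet is worth unpacking, because it drives the hardware math for any deployment. The sketch below is generic Reed-Solomon-style arithmetic, not MinIO's exact shard defaults: an object is split into k data shards plus m parity shards across n = k + m drives, any m drives can fail without data loss, and usable capacity is k/n of raw capacity.

```python
# Illustrative erasure-coding capacity math (generic Reed-Solomon style
# arithmetic; the shard counts are examples, not MinIO's defaults).

def erasure_overview(data_shards, parity_shards, drive_size_tb):
    n = data_shards + parity_shards
    return {
        "drives": n,
        "tolerated_failures": parity_shards,   # any m drives can die
        "raw_tb": n * drive_size_tb,
        "usable_tb": data_shards * drive_size_tb,
        "storage_efficiency": data_shards / n,
    }

# Example: 12 data + 4 parity shards over 16 x 10 TB drives.
layout = erasure_overview(12, 4, 10)
print(layout)
# 160 TB raw, 120 TB usable, survives any 4 simultaneous drive failures
```

The trade-off is explicit: more parity shards mean more tolerated failures but lower usable capacity per drive you buy.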

AI and analytics:

  • Native integration with PyTorch, Iceberg (Apache Iceberg table format), and major AI frameworks [website]
  • SFTP support in AIStor [website meta]
  • Optimized for large-scale data pipelines and GPU-bound training workloads [website]

Operations:

  • Web-based console (browser UI) for bucket and object management [4]
  • mc CLI client for scripted management and sync [4][README]
  • Kubernetes-native via Helm charts and the MinIO Operator [README]
  • Docker deployment [README]
  • Self-healing, self-managing architecture claims [website]

Deployment modes:

  • Single-node single-drive (SNSD) — development/small workloads [5]
  • Multi-node setups for distributed storage [5][4]
  • Source-only community build now; Docker image build from provided Dockerfile [README]

Enterprise (AIStor):

  • Commercial support SLAs [website]
  • Distributed edition [website]
  • Compliance certifications — the community edition does not carry SOC2, ISO, or HIPAA certifications; those require AIStor Enterprise [1]

Pricing: Self-Hosted vs Cloud Math

MinIO community (open-source):

  • Software: $0 (AGPL-3.0)
  • Distribution: Source-only as of late 2025; build from Go source or Docker [README]
  • Running cost: Your hardware or a VPS

AIStor Free (the new community offering):

  • MinIO’s replacement for the community edition
  • Free license, standalone, distributed separately from min.io/download [README]
  • No pricing published for support; presumably none included

AIStor Enterprise:

  • Pricing: Contact sales, min.io/pricing [website]
  • Includes distributed setup, commercial support, compliance certifications [website]

Amazon S3 for comparison:

  • Storage: ~$0.023/GB/month (standard tier)
  • Egress: $0.09/GB out to internet (first 10TB)
  • PUT/GET requests: $0.005 per 1,000 PUT, $0.0004 per 1,000 GET
  • Egress costs dominate for AI training and analytics workloads where you’re reading data repeatedly

The savings math is real for the right workload. A team that reads a 10TB training dataset in full once a month pays ~$900/month in egress alone on S3 — and the bill scales linearly with every additional pass. On self-hosted MinIO, the marginal cost of reading your own data is zero. But this math only holds if you have the infrastructure to run it and the staff to maintain it. “Free software” is not the same as “free to operate.”
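To make the arithmetic auditable, here is the same calculation as code. The prices are the rounded list prices quoted above, not live AWS rates; check current pricing before budgeting.

```python
# Back-of-envelope S3 cost math for the training-workload scenario.
# Prices are the rounded figures quoted in this article (standard tier,
# first 10 TB of egress), not current AWS list prices.

S3_STORAGE_PER_GB_MONTH = 0.023
S3_EGRESS_PER_GB = 0.09

def monthly_s3_cost(dataset_gb, full_reads_per_month):
    storage = dataset_gb * S3_STORAGE_PER_GB_MONTH
    egress = dataset_gb * full_reads_per_month * S3_EGRESS_PER_GB
    return round(storage + egress, 2)

# 10 TB dataset, read in full once a month for model training:
print(monthly_s3_cost(10_000, 1))  # 230 storage + 900 egress = 1130.0

# Same dataset on self-hosted MinIO: the egress component is zero, so
# only hardware and operations remain.
```

Note that request charges are omitted here; for large-object AI workloads they are usually a rounding error next to egress.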

Specific pricing comparison for smaller workloads: data not available from provided sources.


Deployment Reality Check

The happy path: MinIO is genuinely easy to start. The Geekflare walkthrough shows a working server in three commands — download binary, make executable, run [4]. The web console is clean. Default credentials are minioadmin:minioadmin and the startup output reminds you to change them immediately.

The less happy path emerges at integration time. A NocoDB user in a Kubernetes deployment spent weeks trying to configure MinIO as NocoDB’s storage backend, hitting CrashLoopBackOff states, unclear error messages, and environment variable confusion between NC_S3_* and MinIO-specific settings [3]. This isn’t a knock on MinIO specifically — it’s how all infrastructure software behaves when you connect it to other systems. But the error messages aren’t always helpful, and the gap between “works on localhost” and “works on Kubernetes” is real.

What you need for a minimal setup:

  • A Linux server or VM (MinIO runs on Linux, macOS, Windows, but Linux is the supported production path)
  • Go 1.24+ installed, if building from source (no more pre-compiled binaries for community edition) [README]
  • Or Docker, to build and run the container image
  • A reverse proxy (nginx, Caddy, Traefik) for HTTPS in production
  • Storage — either local drives or a mounted volume

What the maintenance mode means for deployments: The community edition now gets only critical security fixes, evaluated case-by-case. No new S3 API features, no bug fixes that aren’t security-related. If you deploy MinIO community today, you’re running a software version that will accumulate functional debt over time while the world around it moves forward. For compliance-sensitive organizations, this is a hard blocker — SOC2, ISO, and HIPAA certifications require maintained software [1].

Appcircle’s migration docs [5] are a useful window into operational complexity: migrating from a multi-node MinIO setup to single-node setup in their self-hosted product required a planned downtime window, careful disk space calculations (need 50% free of current volume usage), and a scripted migration process. This is the reality of self-hosted storage operations at scale — not terrible, but not zero-effort either.
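A pre-flight check for that 50%-free-space rule can be scripted in a few lines. The function and threshold below are this article's sketch of the kind of check the migration docs describe, not Appcircle's actual script.

```python
# Illustrative pre-flight check mirroring the disk-space rule described
# in the Appcircle migration docs (free space >= 50% of current MinIO
# volume usage). The helper and threshold are this article's sketch.
import shutil

def migration_space_ok(volume_path, used_by_minio_gb, safety_ratio=0.5):
    """Return True if the volume has at least safety_ratio * current
    MinIO usage available as free space."""
    free_gb = shutil.disk_usage(volume_path).free / 1024**3
    return free_gb >= used_by_minio_gb * safety_ratio

# Example: check the data volume before scheduling the downtime window.
print(migration_space_ok("/", 100.0))
```

Running a check like this before the downtime window starts is the difference between a planned migration and an aborted one.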

Realistic time to a working single-node MinIO: 30–60 minutes for someone familiar with Linux and Docker. For a non-technical founder: this is not a solo project without help.


Pros and Cons

Pros

  • S3 API compatibility is genuine. Not “S3-like” — actual S3 API. Your existing S3 code works [2][4].
  • Performance is real. 170GB/s benchmarks in clustered configurations put it in a different class from general-purpose file servers [4].
  • Massive adoption history. 60,000+ GitHub stars, 2 billion+ downloads claimed by the website [website][profile]. The tooling, documentation (even if aging), and community knowledge are extensive.
  • AI/ML ecosystem integration. Works natively with PyTorch, Iceberg, and essentially any tool that speaks S3 — which is most of the modern data stack [website][4].
  • Zero egress costs. If your data stays on your network, you pay nothing to read it. For high-throughput AI training, this is the entire economic argument [2].
  • Small binary. ~50MB, Kubernetes-friendly, no external metadata database required — it handles metadata internally [4].

Cons

  • Community edition is in maintenance mode. No new features, source-only distribution, only critical security patches. This is not a growing project — it’s a snapshot [1][README].
  • AGPL-3.0 license. For companies building commercial products on top of MinIO, the copyleft requirements create legal risk. You either comply (open-source your product) or pay for AIStor Enterprise [README].
  • AIStor pivot is opaque. The company rebranded and redirected the community to AIStor Free without clear documentation of what changed, what’s different, or what the long-term commitment to the free tier is [1][README].
  • No pre-compiled binaries. The community must now build from source or build a Docker image from the Dockerfile. Non-trivial for non-technical teams [README].
  • Data loss bug reports. At least one community member reported data loss from the tiering feature — this is the kind of issue that doesn’t show up in marketing copy but appears in comment threads [2].
  • Integration friction. Real-world deployments (NocoDB + MinIO on Kubernetes) show that error messages are poor and debugging requires significant expertise [3].
  • No open governance. MinIO is a venture-backed company’s project, not a foundation-governed open-source project. The maintenance mode transition happened without community input. It can happen again [1].

Who Should Use This / Who Shouldn’t

Use MinIO (or AIStor Free) if:

  • You’re building or running infrastructure where S3 API compatibility is required and you want zero vendor lock-in to AWS.
  • You have an engineering team that can build from source and maintain the deployment.
  • You’re running AI/ML training pipelines where egress costs are material.
  • You need data sovereignty — data cannot leave your servers.
  • You understand you’re adopting a product in transition and have a migration plan if AIStor Free changes terms.

Skip it (use AIStor Enterprise) if:

  • You need SOC2, ISO, or HIPAA compliance on your storage layer — the community edition doesn’t cover this [1].
  • You need production SLA-backed support and commercial guarantees.
  • You’re a larger organization where a support contract is worth more than the license savings.

Skip it entirely (use a foundation-governed alternative) if:

  • You need confidence that the project will still be actively developed in three years. Look at RustFS (Apache 2.0), SeaweedFS (Apache 2.0), or Garage (AGPL v3) instead [1].
  • You’re building something where storage is a foundational bet and you need open governance, not a single company’s roadmap.

Stay on AWS S3 if:

  • You don’t have infrastructure engineers to maintain self-hosted storage.
  • Your storage costs are under $200/month — the operational overhead doesn’t pay for itself at small scale.
  • You need the full AWS S3 feature surface (S3 Intelligent-Tiering, S3 Glacier, cross-region replication with AWS-managed infrastructure).

Alternatives Worth Considering

Given the maintenance mode situation, these are now the serious alternatives:

  • RustFS — Apache 2.0 license, S3-compatible, written in Rust, focused on data lake performance. The InfoQ piece specifically flags this as an emerging post-MinIO alternative [1]. No major production track record yet.
  • SeaweedFS — Apache 2.0, written in Go, S3-compatible, designed for small files at scale. More active development than MinIO community [1]. Worth evaluating if you need continued feature development.
  • Garage — AGPL v3, explicitly targets small self-hosted deployments (homelab, small organizations). The InfoQ piece mentions it as a community option [1]. Smallest scale target of the three.
  • Ceph (RadosGW) — The enterprise-grade open-source option, governed by the Ceph Foundation under the Linux Foundation. Much higher operational complexity but genuine open governance. If you need what MinIO promised at serious scale with a community that won’t pivot, Ceph is the answer.
  • Amazon S3 — If you’re escaping AWS costs, this is what you’re escaping from. But for teams who don’t want to operate storage infrastructure, it remains the lowest-effort path.

Bottom Line

MinIO built the right product at the right time — S3 compatibility without AWS lock-in, genuinely fast, runs everywhere. The 60,000 GitHub stars weren’t an accident. But the maintenance mode announcement changes the calculation for anyone evaluating it today. You’re not adopting an active open-source project; you’re adopting a company’s transition strategy toward their commercial AIStor product, with the community edition frozen in place.

If you’re already running MinIO in production, it still works and the S3 compatibility is genuine. Your near-term path is evaluating AIStor Free and deciding whether the terms work for you long-term. If you’re evaluating fresh today, start with SeaweedFS or RustFS if open governance matters, or budget for AIStor Enterprise if SLA-backed support is the requirement. The original MinIO promise — free, fast, S3-compatible, self-hosted — still exists, just not in the GitHub repository that made it famous.

If the infrastructure work is the blocker, upready.dev deploys and configures self-hosted object storage for clients as a one-time engagement. One setup fee, you own the stack.


Sources

  1. InfoQ, “MinIO GitHub Repository in Maintenance Mode” (Dec 2025). https://www.infoq.com/news/2025/12/minio-s3-api-alternatives/
  2. dev.to (Oumnya), “MinIO: The Open-Source S3 Alternative That Cuts Costs and Boosts Flexibility”. https://dev.to/oumnya/minio-the-open-source-s3-alternative-that-cuts-costs-and-boosts-flexibility-348g
  3. NocoDB Community, “Unable to configure minio” (May 2025). https://community.nocodb.com/t/unable-to-configure-minio/1859
  4. Geekflare, “Try MinIO - Self-Hosted S3-Compliant High Performance Object Storage”. https://geekflare.com/cloud/minio-object-storage/
  5. Appcircle Docs, “MinIO Migration”. https://docs.appcircle.io/self-hosted-appcircle/install-server/linux-package/configure-server/minio-migration
