unsubbed.co

Cozystack

Cozystack is a Go-based application that transforms bare metal servers into a complete cloud platform.

Open-source PaaS for building private or public clouds, honestly reviewed. Not a marketing summary — what you actually get when you deploy it.

TL;DR

  • What it is: Apache 2.0-licensed PaaS framework that turns a rack of bare metal servers into a cloud platform with a REST API for spawning Kubernetes clusters, managed databases, virtual machines, load balancers, and HTTP caching services [README][website].
  • Who it’s for: DevOps engineers, infrastructure teams, and cloud providers who want to build a private or public cloud on their own hardware — not a tool for non-technical founders [README].
  • Cost savings: The software is free. If you’re paying Hetzner Cloud or AWS for managed Kubernetes, the math depends on how many clusters you spin up and how much bare metal you own. At scale, the savings are real; at small scale, the operational overhead may eat the savings.
  • Key strength: Genuine Kubernetes-native architecture with clean separation of concerns — billing integration is a YAML manifest to the Kubernetes API, not a fragile custom API [website]. CNCF Sandbox project with active community.
  • Key weakness: Requires physical servers or cloud VMs with bare-metal-like access. Steep learning curve — this is infrastructure software for people who think in YAML and know what FluxCD is [README]. Almost no third-party reviews or case studies exist publicly.

What is Cozystack

Cozystack is a free, open-source PaaS framework built on top of Kubernetes. The pitch in one sentence from their README: “transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.” [README]

The project was built and originally sponsored by Ænix, a European infrastructure company, and is now a CNCF Sandbox project — meaning the Cloud Native Computing Foundation has accepted it, but it remains in the earliest stage of CNCF maturity [README].

The core idea is that if you have bare metal servers, you shouldn’t have to stitch together Kubernetes, KubeVirt, FluxCD, monitoring, and a tenant model yourself. Cozystack does that for you, then exposes a clean Kubernetes-native API that you (or your billing system) can hit to provision and manage cloud services [website].

It sits at 1,995 GitHub stars as of this writing — a modest number that accurately reflects where it is in its growth curve: technically solid, CNCF-backed, but not yet widely discovered [merged profile].


Why people choose it

Direct third-party reviews of Cozystack are scarce — the tool hasn’t yet made it into the mainstream review sites or comparison blogs that cover tools like Portainer, Coolify, or Dokku. That scarcity is itself a data point: this is infrastructure software aimed at a narrow technical audience, not a general-purpose DevOps dashboard.

The reasons people land on Cozystack, based on the project’s own documentation and positioning [README][website]:

Avoiding OpenStack’s complexity. OpenStack is the historical answer for “I want to run my own cloud.” It’s also famously difficult to operate. Cozystack uses Kubernetes as its foundation instead of a custom control plane, which means you’re working with tools the industry already knows — kubectl, Helm, FluxCD, Prometheus — rather than OpenStack-specific components [website].

True multi-tenancy without reinventing it. The tenant model is a specific design choice: Cozystack claims its tenant architecture allocates control-plane resources efficiently, enabling both cost savings and security isolation between tenants [website]. This matters if you’re a hosting provider selling managed Kubernetes to multiple clients from the same hardware pool.

API-first billing integration. Other platforms abstract the API. Cozystack specifically doesn’t: “to integrate with your billing, it’s enough to instruct your system to submit a specific YAML manifest defining the desired service to the Kubernetes API” [website]. If you’re building a cloud business, this is the right design — no proprietary API to wrap, no webhook hell.
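To make that integration shape concrete, here is a minimal sketch of what a billing system would construct and submit. The resource kind, API group, and spec fields below are hypothetical placeholders, not Cozystack’s actual schema — the real resource definitions live in the project docs:

```python
import json

def build_service_manifest(tenant: str, plan: str) -> dict:
    """Build a Kubernetes-style manifest for a tenant service.

    NOTE: the apiVersion, kind, and spec fields are illustrative
    placeholders, not Cozystack's real schema -- consult the project
    documentation for the actual resource definitions.
    """
    return {
        "apiVersion": "example.cozystack.io/v1alpha1",  # hypothetical group
        "kind": "ManagedKubernetes",                    # hypothetical kind
        "metadata": {
            "name": f"{tenant}-cluster",
            "namespace": f"tenant-{tenant}",
        },
        "spec": {"plan": plan},
    }

# A billing system serializes this and submits it to the Kubernetes API
# (e.g. by piping it to `kubectl apply -f -`); no custom API client needed.
manifest = build_service_manifest("acme", "small")
print(json.dumps(manifest, indent=2))
```

The point is the shape of the integration, not the specific fields: the billing system speaks plain Kubernetes manifests, so any Kubernetes client library or kubectl pipeline works unmodified.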

Apache 2.0 license with no commercial restrictions. Unlike some infrastructure platforms that use source-available or commercial-use-restricted licenses, Cozystack is clean Apache 2.0. You can build a commercial product on top of it, resell it, fork it — no legal friction [README][merged profile].


Features

Based on the README and website documentation [README][website]:

Infrastructure services (what you can spawn via API):

  • Managed Kubernetes clusters (tenant clusters on top of the management cluster)
  • Databases-as-a-Service (specific engines not listed in available documentation)
  • Virtual machines (via KubeVirt under the hood)
  • Load balancers
  • HTTP caching services

Platform components:

  • talos-bootstrap — installation tool supporting PXE and ISO boot for bare-metal provisioning in a datacenter. Uses Talos Linux (immutable OS) to ensure system consistency [website].
  • FluxCD integration — packages are YAML files delivered via FluxCD. Any Kubernetes-familiar engineer can add or modify packages [website].
  • Built-in monitoring and alerts — each service instance comes with pre-configured dashboards and alerts. Per-tenant monitoring hubs or a combined view are both supported [website].
  • Web UI — a dashboard for deploying applications. The project is explicit that the UI is secondary to the API: “while the primary goal of the platform is to provide a beautiful API, it also has a dashboard” [website]. Don’t come here expecting a Portainer-style control panel as the primary interface.
  • Native Kubernetes RESTful API — declarative, standard Kubernetes API surface. No custom API to learn [website].
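Because the API surface is standard Kubernetes, clients address Cozystack resources through the same REST path scheme as any custom resource. A small illustration of that path construction (the group and resource names here are hypothetical placeholders; the real ones come from Cozystack’s CRDs):

```python
def k8s_resource_path(group: str, version: str,
                      namespace: str, plural: str) -> str:
    """Build the standard Kubernetes REST path for a namespaced custom
    resource -- the same scheme kubectl and client libraries use."""
    return f"/apis/{group}/{version}/namespaces/{namespace}/{plural}"

# Hypothetical example; actual group/resource names are defined by
# Cozystack's CRDs, not guessed here.
path = k8s_resource_path("example.cozystack.io", "v1alpha1",
                         "tenant-acme", "managedkubernetes")
print(path)
# /apis/example.cozystack.io/v1alpha1/namespaces/tenant-acme/managedkubernetes
```

This is the practical meaning of “no custom API to learn”: anything that can talk to a Kubernetes API server — kubectl, client-go, GitOps controllers — can manage Cozystack services without a vendor SDK.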

Architecture principles:

  • Kubernetes-based throughout — no hidden layers or proprietary abstractions [website]
  • Standard open-source components — Talos, FluxCD, Prometheus — widely known in the industry [website]
  • Upstream-first contribution model: if a feature is useful in an upstream project, they contribute it there rather than keeping it internal [website]

Use cases per the documentation [README][website]:

  1. Backend for a public cloud (hosting provider scenario)
  2. Private cloud with Infrastructure-as-Code
  3. Kubernetes distribution for bare metal

Pricing: SaaS vs self-hosted math

Cozystack software: $0. Apache 2.0, free to use, fork, and commercialize [README].

What you actually pay:

  • Bare metal servers: cost varies dramatically. A Hetzner dedicated server with 64GB RAM runs €50–€100/mo. Your own rack in a colo runs whatever the colo charges.
  • Commercial support: Ænix and other companies offer paid support listed on the Cozystack website. Pricing is not published — contact sales.

Comparison to managed Kubernetes:

  • Hetzner Managed Kubernetes (LKE equivalent): ~€15/mo per cluster + node costs
  • AWS EKS: $0.10/hour per cluster (~$72/mo) + EC2 node costs
  • DigitalOcean Kubernetes: $12/mo per cluster + Droplet costs

If you’re running 10+ Kubernetes clusters for clients or internal teams, the per-cluster fees add up fast. At that scale, owning bare metal + Cozystack can pay for itself within months. At 1–3 clusters with no existing hardware, managed Kubernetes is almost certainly cheaper when you factor in operational time.
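Using the list prices above, a rough break-even sketch — control-plane fees only, with operational labor deliberately excluded and the server price taken from the Hetzner range quoted earlier (note the text mixes USD and EUR, which this sketch does too):

```python
# Rough break-even: managed-cluster control-plane fees vs. owned bare metal.
# Figures come from this section; labor and node costs are ignored.
EKS_FEE_PER_CLUSTER = 72.0   # USD/mo per cluster (~$0.10/hour)
SERVER_COST = 100.0          # EUR/mo, high end of the Hetzner dedicated range

def monthly_cluster_fees(clusters: int) -> float:
    """Managed control-plane fees alone, before any node costs."""
    return clusters * EKS_FEE_PER_CLUSTER

def bare_metal_cost(servers: int) -> float:
    """Dedicated-server rent; the Cozystack software itself is $0."""
    return servers * SERVER_COST

# 10 clusters on EKS cost $720/mo in control-plane fees alone, while an
# assumed 3-server bare metal footprint runs about €300/mo in rent.
print(monthly_cluster_fees(10), bare_metal_cost(3))
```

The three-server footprint is an assumption for illustration; the real crossover point depends on your cluster sizes, hardware, and — the part this sketch ignores — the engineering time spent operating the platform.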

No pricing data exists for Cozystack-as-a-service — there is no SaaS offering. It’s self-hosted only [README][website].


Deployment reality check

This is not a tool you install in 30 minutes on a VPS. The honest assessment:

What you need before you start:

  • Physical servers or cloud instances with full OS control (bare metal, dedicated servers, or cloud VMs where you can PXE boot or mount ISOs)
  • At least 3 nodes for a production-grade setup (the documentation recommends this for etcd quorum)
  • A network setup that supports the talos-bootstrap PXE/ISO boot process
  • Familiarity with Kubernetes, YAML, and FluxCD — this is not optional

Installation path: Cozystack bootstraps via talos-bootstrap, which handles PXE or ISO boot of Talos Linux across your servers, then brings up the Cozystack control plane on top [website]. The immutable OS approach means no configuration drift post-install — what you boot is what runs.

What can go wrong:

  • Network requirements for PXE booting are specific and vary by datacenter/hosting provider setup. If your provider doesn’t support custom PXE or ISO, you’ll be fighting infrastructure problems before you even touch Cozystack.
  • The documentation exists but is not comprehensive for every edge case. This is a CNCF Sandbox project, not a mature product with polished enterprise docs.
  • No official troubleshooting stories or “what I broke and how I fixed it” blog posts exist publicly — the community knowledge base is thin compared to tools like K3s or Rancher.
  • Community channels are Telegram and Slack [website] — useful, but not Stack Overflow with thousands of answered questions.

Realistic time estimate: For an experienced Kubernetes engineer who has worked with Talos or FluxCD before: a working single-node test environment in 2–4 hours. A production multi-node setup: 1–2 days including network planning, troubleshooting, and monitoring verification. For someone whose Kubernetes experience is mostly managed cloud (EKS, GKE, AKS): budget a week, and expect to learn things about networking and storage that managed cloud hides from you.


Pros and cons

Pros

  • Clean Apache 2.0 license. Build a cloud business on top of it. No legal gray area, no “fair-code” interpretation issues [README][merged profile].
  • Kubernetes-native throughout. No bespoke control plane — the same kubectl, Helm, and FluxCD knowledge transfers directly. If your team knows Kubernetes, the learning curve is about Cozystack’s specific packages, not a new paradigm [website].
  • API-first design with a clear philosophy. The decision to expose billing via standard Kubernetes manifests rather than a custom API is a good engineering decision that reduces integration complexity for anyone building a commercial cloud on top [website].
  • CNCF Sandbox status. Not just a GitHub project — it has passed CNCF’s due diligence process and has foundation-level governance. That’s a meaningful signal of project health [README].
  • Built-in monitoring per service instance. Pre-configured dashboards and alerts for every deployed service, not an afterthought [website].
  • Upstream-first philosophy. Reduces the risk of features accumulating as Cozystack-only forks that diverge from the ecosystem [website].
  • Multi-tenant model designed for cloud providers. The tenant architecture is a first-class design concern, not bolted on [website].

Cons

  • Not for non-technical teams. The website says “non-technical users” can use the UI, but that’s relative — you still need a Kubernetes-fluent team to install and operate the platform. Non-technical here means “you don’t write the FluxCD configs yourself,” not “you’ve never used kubectl” [website].
  • Requires bare metal or equivalent. No “install on a $6 VPS” path. You need real hardware, a datacenter, or cloud instances where you have enough control to PXE boot. This rules out most small teams [README].
  • 1,995 stars = early community. The ecosystem of tutorials, integrations, and community answers is thin. You’ll be reading source code when the docs don’t cover your situation [merged profile].
  • CNCF Sandbox = early maturity. Sandbox is the first stage of CNCF’s three-stage model (Sandbox → Incubating → Graduated). K3s is Sandbox. Many Sandbox projects mature; some don’t. The project is active but not proven at scale by many independent operators [README].
  • No publicly documented pricing for commercial support. If you need Ænix support, you’re in “contact sales” territory with no benchmark [website].
  • UI is secondary. The dashboard is explicitly not the primary interface. If you need a polished GUI-first management experience, this isn’t the right tool [website].
  • Limited third-party validation. No independent reviews, benchmarks, or case studies from operators were findable at the time of writing. That’s a risk signal for production adoption.

Who should use this / who shouldn’t

Use Cozystack if:

  • You’re a hosting provider or cloud company that wants to offer managed Kubernetes, databases, and VMs to clients from your own hardware — and you want a billing-friendly API to hang your product on.
  • You’re an enterprise infrastructure team managing 10+ Kubernetes clusters and paying meaningful per-cluster fees to a managed cloud provider.
  • You have a team of Kubernetes-fluent engineers and existing bare metal or dedicated server capacity.
  • You want to build a private cloud with Apache 2.0 software and no vendor lock-in.
  • Your team is comfortable operating Talos Linux and FluxCD.

Skip it if:

  • You’re a non-technical founder looking to cut SaaS costs. This tool won’t simplify your life — it will add operational complexity you don’t need. Look at Coolify, CapRover, or Dokku instead for application deployment.
  • You’re running fewer than 5 Kubernetes clusters. Managed Kubernetes from Hetzner or DigitalOcean is cheaper when you factor in the operational time.
  • You don’t have bare metal or dedicated server access. You can’t meaningfully evaluate this on a standard cloud VPS.
  • You need a mature, battle-tested platform with extensive community documentation and years of production case studies. Look at Rancher, Kubermatic KKP, or Gardener — they’re further along in maturity.
  • You want a Proxmox-style GUI-primary hypervisor management platform. That’s not what this is.

Alternatives worth considering

  • Rancher (SUSE) — the most mature open-source Kubernetes management platform. More complex to operate but years ahead in documentation, integrations, and community knowledge. Commercial support available from SUSE.
  • Proxmox VE — if your primary need is VM management (not Kubernetes-native cloud), Proxmox is far simpler to deploy and has a massive community. Different tool category but often considered alongside Cozystack.
  • Harvester HCI — SUSE’s Kubernetes-native hyperconverged infrastructure platform. Closer to Cozystack in design philosophy (Kubernetes-based), stronger on VMs and storage, less focused on the cloud-provider PaaS use case.
  • Kubermatic KKP (Kubermatic Kubernetes Platform) — commercial-origin, Kubernetes-native cluster management, more mature ecosystem for the “manage many clusters” use case.
  • Talos Linux + Argo CD (DIY) — since Cozystack already uses Talos, some teams choose to build their own opinionated stack rather than adopt Cozystack’s framework. More control, more work.
  • K3s + Longhorn — lighter weight, better for small-scale private infrastructure. Less powerful for multi-tenant cloud provider scenarios.
  • OpenStack — the enterprise-grade answer with massive community and vendor support. Far more complex to operate, but industry-proven for large-scale deployments.

For a team that wants to build and sell cloud services on bare metal, the realistic shortlist is Cozystack vs Kubermatic KKP vs OpenStack. Cozystack wins on simplicity and license; Kubermatic KKP wins on maturity and enterprise support; OpenStack wins on ecosystem breadth.


Bottom line

Cozystack is a technically coherent answer to a real problem: how do you turn bare metal servers into a multi-tenant cloud platform without OpenStack’s operational horror? The Kubernetes-native architecture, Apache 2.0 license, and CNCF backing are all genuine positives. The billing-via-YAML-manifest design is correct for the cloud provider use case.

The honest caveat is that 1,995 stars, CNCF Sandbox status, and a thin third-party review ecosystem all point to the same thing: this is an early-stage infrastructure product being used by a relatively small number of operators. It’s not unproven — Ænix uses it and the CNCF accepted it — but the community knowledge base, edge-case documentation, and production war stories that make a platform safe to bet on at scale are still accumulating.

If you’re a hosting provider or enterprise infrastructure team with bare metal and Kubernetes expertise, Cozystack is worth a serious pilot. If you’re a smaller team hoping to save money on cloud bills by running your own private cloud, the operational overhead will likely cost more than the cloud bill you’re trying to escape.


Sources

Primary sources: all Cozystack data in this review is derived from the project README, the Cozystack website, and the merged project profile, cited inline as [README], [website], and [merged profile].

Note on third-party citations: The third-party articles provided as inputs [1]–[5] were unrelated to Cozystack (Army MOS 17C Cyber Operations Specialist documentation). No independent third-party reviews of Cozystack were available for citation. The absence of third-party coverage is noted in the review as a relevant data point about the tool’s current community reach.
