unsubbed.co

Zenko CloudServer

Self-hosted file management and sharing tool built around Zenko CloudServer, an open-source implementation of a server that speaks the Amazon S3 protocol.

Open-source S3 emulation, honestly reviewed. Built for local development and multi-cloud abstraction — not a MinIO replacement.

TL;DR

  • What it is: Open-source (Apache-2.0) Amazon S3-compatible object storage server, written in Node.js. Part of Scality’s Zenko multi-cloud data controller project [1][2].
  • Who it’s for: Developers who need a local S3 endpoint for CI/CD testing, integration tests, or building S3-based applications without an AWS account. Not aimed at non-technical founders running production workloads.
  • Cost savings: The software is free. Running it locally eliminates AWS S3 API costs during development — relevant when you’re running hundreds of integration tests per day that would otherwise hit real S3 endpoints [1].
  • Key strength: Drop-in S3 API compatibility with zero AWS account required. Supports multiple storage backends simultaneously — local disk, in-memory, and cloud (AWS S3, Azure Blob Storage, Google Cloud Storage) through a single API [1][2].
  • Key weakness: The README documents a Node.js 10.x requirement — a runtime that reached end-of-life in April 2021. The website copy reads like it was written in 2016–2017, with “coming soon” features that should have shipped years ago [1][2]. Independent third-party reviews are essentially nonexistent, which is itself a signal.

What is Zenko CloudServer

CloudServer (formerly Scality S3 Server) is a Node.js server that speaks the Amazon S3 API. You point your S3 client — any SDK, any tool — at it instead of AWS, and it handles buckets and objects as if it were the real thing. The project is part of Scality’s broader Zenko platform, which Scality describes as an “open-source multi-cloud data controller” [2].

The pitch from the website is multi-layered but the core is simple: one S3 API surface, multiple storage backends underneath. You can store objects locally on disk, in memory (useful for tests), or route them to AWS S3, Scality’s proprietary RING product, or — per the roadmap language on the site — Azure Blob Storage and Google Cloud Storage [2].

Scality is a French enterprise storage company that released the open-source S3 Server in June 2016. By their own account, it had 600,000 Docker pulls within the first year [2]. The Apache-2.0 license means you can use it in any commercial application, embed it in your own product, and modify it freely.

The GitHub repository is at https://github.com/scality/cloudserver. A current star count was not available at the time of writing — check the repository directly for the live figure.


Why people choose it (or don’t)

There are no substantial independent reviews of Zenko CloudServer available at the time of writing. The product occupies a niche that doesn’t generate much consumer-facing commentary: it’s a developer infrastructure tool, used mostly in CI pipelines rather than as a user-facing application. What follows draws from the primary documentation and the general developer community reputation around S3-compatible servers.

The case for it, from Scality’s documentation [2]: The website quotes Carlo Daffara, CEO of NodeWeaver: “Zenko gave NodeWeaver a set of fundamental capabilities — complete AWS S3 compatibility, object storage replication, load balancing — in a way that is simple, consistent and reliable.” And Stef Van Dessel, Chief Engineer at Telenet: “With the S3 Server we now have a simple way to help develop applications that work with S3 and object storage, which will give us more business and deployment flexibility.”

Both quotes are enterprise infrastructure users, not small-team developers. The tone matches the product — this is infrastructure that large companies use to abstract over multiple cloud backends, not a home-lab NAS replacement.

The honest signal from the documentation: The README documents that building and running CloudServer requires Node.js 10.x and Yarn v1.17.x [1]. Node.js 10 reached end-of-life on April 30, 2021. A project that still documents an EOL runtime in its main README is either in maintenance mode, updated infrequently, or has documentation that has drifted badly from reality. None of those options are reassuring if you’re betting a production workflow on the project.

The website still lists certain cloud platform support as “coming soon” [2]. Given the timeline of the project (the “coming soon” features were presumably planned in 2016–2017), this language is a reliable indicator that the website is not actively maintained.


Features

Based on the README and website [1][2]:

Storage backends:

  • File backend — objects and metadata stored on local disk. Default mode. Data persists across restarts [1].
  • Memory backend — in-memory, wiped on restart. Designed for tests where you want a clean state per run [1].
  • Multiple backends — route objects to different locations using the x-amz-meta-scal-location-constraint header. One CloudServer instance can write to local disk, AWS S3, and Scality RING simultaneously [1].
  • Cloud passthrough — objects written in “raw” form to the target cloud storage, without Scality transforming or fragmenting data. The website claims this means you can read objects directly from the cloud provider’s browser console without CloudServer in the path [2].
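The routing mechanism above is just an object-metadata header on an ordinary S3 PUT. The sketch below constructs (but does not send) such a request with Python’s standard library; the endpoint, bucket, key, and the location name `aws-us-east-1` are illustrative assumptions — the location must match one defined in your CloudServer configuration, and a real request would additionally need AWS Signature V4 authentication, which any S3 SDK adds for you:

```python
import urllib.request

# Build (but don't send) a PUT that pins the object to a named backend
# via the x-amz-meta-scal-location-constraint header. Endpoint, bucket,
# key, and location name are illustrative only.
req = urllib.request.Request(
    url="http://localhost:8000/test-bucket/report.csv",
    data=b"id,total\n1,42\n",
    method="PUT",
)
req.add_header("x-amz-meta-scal-location-constraint", "aws-us-east-1")

# urllib stores header names with the first letter capitalized.
print(req.get_header("X-amz-meta-scal-location-constraint"))  # aws-us-east-1
print(req.get_method())  # PUT
```

Any S3 SDK exposes the same thing as user metadata on `put_object`, so in practice you would set the `scal-location-constraint` metadata key rather than hand-building HTTP requests.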

S3 API surface:

  • Full Amazon S3 API compatibility — buckets, objects, multipart upload, presigned URLs [1][2]
  • AWS S3 SDK compatibility out of the box (any language — Python boto3, AWS JS SDK, Go SDK, etc.) [2]
  • REST API on port 8000 [1]

Deployment:

  • Docker image available at zenko/cloudserver on Docker Hub [1]
  • npm package for direct Node.js installation [1]
  • Default credentials on startup: access key accessKey1, secret verySecretKey1 — obviously for dev use only [1]
  • Internal ports 9990 and 9991 for metadata and data transfer respectively [1]

Optional Vault integration:

  • Pluggable user management via Vault, Scality’s proprietary identity system [1]. This is the path to real multi-user access control, but Vault is a separate, proprietary component — not included.

What it doesn’t include:

  • A web UI for browsing objects (you use any S3-compatible client)
  • Built-in user management without Vault
  • Automatic TLS — you handle that at the reverse proxy level
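For the TLS point, the usual pattern is to terminate HTTPS at a reverse proxy in front of port 8000. A minimal nginx sketch — the hostname and certificate paths are placeholders, not anything CloudServer ships:

```nginx
server {
    listen 443 ssl;
    server_name s3.example.internal;              # placeholder hostname

    ssl_certificate     /etc/ssl/certs/s3.pem;    # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/s3.key;

    location / {
        proxy_pass http://127.0.0.1:8000;   # CloudServer's REST port
        proxy_set_header Host $host;        # S3 clients may use virtual-host-style bucket names
        client_max_body_size 0;             # don't cap multipart upload parts
    }
}
```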

Pricing: SaaS vs self-hosted math

There is no Zenko CloudServer SaaS. The software is free, Apache-2.0 licensed, and you run it yourself [2]. Scality sells enterprise storage products separately, but CloudServer is not a tiered product with a free vs. paid split.

The relevant cost comparison is development costs against AWS S3:

Running integration tests against real AWS S3 endpoints accumulates costs: S3 PUT requests are $0.005 per 1,000, GET requests $0.0004 per 1,000, plus storage and data transfer. For a CI pipeline running 500 tests per commit across a team making 20 commits per day, that’s 10,000 test runs daily. If each test makes 10 S3 API calls, you’re at 100,000 API calls per day — roughly $0.50/day if they were all PUTs, or ~$180/year, before storage and data transfer. CloudServer running in CI eliminates that entire line item [own calculation from AWS public pricing].
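The arithmetic above, spelled out. The request price is AWS’s published rate at the time of writing; the workload numbers are the hypothetical team from the paragraph, with every call priced as a PUT for a rough upper bound:

```python
# Hypothetical CI workload from the paragraph above.
put_price_per_1k = 0.005   # USD per 1,000 PUT requests
tests_per_commit = 500
commits_per_day = 20
calls_per_test = 10

calls_per_day = tests_per_commit * commits_per_day * calls_per_test
daily_cost = calls_per_day / 1_000 * put_price_per_1k  # all calls priced as PUTs
yearly_cost = daily_cost * 365

print(calls_per_day)                                  # 100000
print(f"${daily_cost:.2f}/day, ~${yearly_cost:.0f}/year")
```

Small in absolute terms — the stronger argument for local emulation is hermetic, credential-free test runs, with the API bill as a side benefit.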

The infrastructure cost to run CloudServer is effectively zero in CI: it starts as a Docker container in your pipeline alongside your application container, runs for the duration of the test suite, and stops. No dedicated server needed.

For a persistent local development environment: a VPS or spare machine running CloudServer uses minimal resources — the Node.js process is lightweight for dev workloads.


Deployment reality check

Getting it running:

The fastest path is Docker [1]:

docker run -d --name cloudserver -p 8000:8000 zenko/cloudserver

That’s it for a local dev instance. Point your S3 client at http://localhost:8000 with access key accessKey1 and secret verySecretKey1.

For persistent storage, mount a volume for the data and metadata directories. For CI, add it as a service in your docker-compose.yml or as a container step in GitHub Actions / GitLab CI.
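A docker-compose sketch covering both points. The host-side volume paths and the credential values are assumptions; the container paths and the SCALITY_* environment variables for overriding the default credentials come from the project’s Docker documentation, so verify them against the current image before relying on them:

```yaml
services:
  cloudserver:
    image: zenko/cloudserver
    ports:
      - "8000:8000"
    environment:
      # Override the insecure documented defaults (accessKey1 / verySecretKey1).
      SCALITY_ACCESS_KEY_ID: myLocalAccessKey
      SCALITY_SECRET_ACCESS_KEY: myLocalSecretKey
    volumes:
      # Persist object data and metadata across restarts.
      - ./cloudserver-data:/usr/src/app/localData
      - ./cloudserver-metadata:/usr/src/app/localMetadata
```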

What can go sideways:

The Node.js 10.x requirement in the documentation is a real concern [1]. Whether the project has quietly updated its runtime requirements without updating the README, or whether the codebase genuinely requires the old runtime, is unclear from the documentation alone. Before committing to this in production, you’d need to verify the actual supported Node.js version against recent commits.

The default credentials (accessKey1 / verySecretKey1) are hardcoded in the README [1]. If you accidentally expose port 8000 to a network, any S3 client with those credentials has full access. This is acceptable for local dev — it’s not acceptable if you’re running CloudServer on a shared server or inside a VPC with other services.

Multi-user support requires Vault, which is proprietary and separately deployed. If you need more than one set of credentials, either you use Vault (enterprise path) or you work around it (not supported in the open-source package) [1].

Realistic time estimates:

  • CI integration with Docker: 15–30 minutes
  • Local dev instance: 5 minutes
  • Multi-backend configuration with real cloud passthrough: several hours; the documentation here is sparse
  • Production deployment with TLS, real auth, monitoring: data not available — no independent deployment guides found

Pros and Cons

Pros

  • Apache-2.0 license — genuinely free for commercial use, no CLA, no “fair-code” restrictions [2].
  • Zero-cost CI/CD S3 emulation — eliminates AWS API costs in test pipelines. This is the use case the project was built for, and it works.
  • Multi-backend abstraction — the ability to route S3 API calls to different storage backends through a single endpoint is useful for multi-cloud architectures [1][2].
  • Any S3 SDK works — if your application already speaks S3, CloudServer requires zero application code changes to use in development [2].
  • Raw data passthrough — objects written to cloud backends are not transformed, so you can read them directly from the cloud provider’s console without CloudServer in the path [2].

Cons

  • Node.js 10.x documented requirement — EOL since April 2021 [1]. This alone is a reason to audit the project’s maintenance status before adopting it.
  • Sparse independent documentation — no third-party setup guides, no community forum with searchable troubleshooting, no comparison reviews found at time of writing. You’re largely on your own.
  • Website is visibly dated — “coming soon” features, 2016-era copywriting, and no documented pricing page suggest the project is not receiving active marketing attention [2].
  • No built-in web UI — bucket browsing requires a separate S3 client (MinIO Console, Cyberduck, S3 Browser).
  • Multi-user auth requires proprietary Vault — the open-source package ships with static credentials only [1].
  • No TLS out of the box — you add a reverse proxy [1].
  • No documented production hardening guide — the README covers dev setup, not production deployment patterns.
  • MinIO is the de facto standard now — CloudServer predates MinIO’s rise. MinIO has a larger community, more active development, a built-in console, Kubernetes-native deployment, and substantially more documentation. If you’re starting fresh, the burden of proof is on CloudServer to justify the switch.

Who should use this / who shouldn’t

Use Zenko CloudServer if:

  • You need a quick S3 endpoint for integration tests in CI and you want Apache-2.0 rather than MinIO’s AGPL license.
  • You’re already deep in the Scality ecosystem and want the officially supported development emulation layer.
  • You’re building a multi-cloud abstraction layer and need a single S3 API surface over mixed backends (AWS S3 + local disk + Azure) [2].

Don’t use it if:

  • You’re a non-technical founder looking for a self-hosted file storage solution for your team — this is a developer infrastructure tool, not a file management application.
  • You want active community support, tutorials, and troubleshooting threads — those don’t exist for this project in any meaningful volume.
  • You’re building a new project and haven’t picked an S3 emulator yet — start by evaluating MinIO first. It has more recent development activity, a web UI, and a much larger documentation footprint.
  • Production object storage is your goal — this is explicitly positioned as a development and abstraction tool, not a primary storage system [1][2].

Alternatives worth considering

  • MinIO — the obvious direct comparison. AGPL-3.0 (which can be a problem for commercial embedding, unlike Apache-2.0), but has a web console, active development, Kubernetes operator, and the dominant community in the S3-compatible server space. For most new projects, MinIO is the starting point, not CloudServer.
  • LocalStack — broader AWS emulation (not just S3) for local development. Useful if your application touches multiple AWS services. The free tier covers S3; the paid Pro tier covers other services.
  • Garage — newer, Rust-based, Apache-2.0, distributed S3-compatible server built for home labs and geographically distributed deployments. Worth evaluating for production self-hosted object storage.
  • SeaweedFS — Go-based, highly scalable, S3-compatible. Better for workloads that need to scale beyond a single machine.
  • Real AWS S3 — for production workloads, or teams where the cost of running infrastructure exceeds the cost of S3 API usage.

Bottom line

Zenko CloudServer is a real, working piece of infrastructure that solves a specific problem: running an S3-compatible endpoint locally for development and testing without touching AWS. The Apache-2.0 license is genuinely useful, and the multi-backend abstraction is a differentiator for teams building cloud-agnostic applications. But the Node.js 10.x requirement in the README, the dated website copy, and the near-total absence of third-party community documentation are honest signals that this project is not where active development energy in the S3-compatible server space is concentrated. For new projects, evaluate MinIO before defaulting to CloudServer — and if CloudServer wins on licensing or feature grounds, verify the actual current runtime requirements before you deploy. The licensing advantage over MinIO’s AGPL is real and matters for certain commercial use cases; just confirm you’re not inheriting a maintenance liability along with the license.


Sources

  1. Zenko CloudServer GitHub README — primary technical documentation, installation instructions, configuration. https://github.com/scality/cloudserver
  2. Zenko CloudServer Official Website — product positioning, feature claims, customer quotes. https://www.zenko.io/cloudserver
  3. Docker Hub — zenko/cloudserver — Docker image distribution. https://hub.docker.com/r/zenko/cloudserver
