VersityGW

VersityGW is a Go-based application that provides a bridge between a file system and the S3 object interface.

Open-source S3 translation layer, honestly reviewed. If you have a NAS or HPC filesystem and need S3 compatibility, this is the project to know.

TL;DR

  • What it is: Apache 2.0-licensed S3 gateway that translates incoming S3 API calls into POSIX filesystem operations — your data stays where it is, VersityGW just adds the S3 interface on top [README].
  • Who it’s for: Teams with existing file-based storage (NAS appliances, HPC clusters, local drives) who need S3-compatible access for backup tools, cloud-native apps, or data pipelines — without migrating data to cloud object storage [README][website].
  • Cost savings: AWS S3 runs $0.023/GB/month plus request fees. VersityGW lets you point those S3-hungry apps at storage you already own, at no per-GB cost beyond the hardware you’re already running.
  • Key strength: Truly stateless, single-binary deployment — one command to spin up a functioning S3 server on your local filesystem. Clusterable out of the box with no shared state between gateway nodes [README].
  • Key weakness: This is a niche, infrastructure-focused tool. Community is small (1,314 GitHub stars), third-party user reviews are nearly nonexistent, and you’re on your own unless you pay for Versity enterprise support. Documentation lives mostly in the GitHub wiki rather than polished docs [README][website].

What is VersityGW

VersityGW is not object storage. That distinction matters, and it’s the first thing to understand. Tools like MinIO or Ceph actually store your data in their own format. VersityGW does something different: it sits in front of storage you already have and speaks S3 to anything that asks.

Send it a PUT object request, it writes a file. Send it a GET, it reads one back. From the client application’s perspective, it looks like AWS S3. Behind the curtain, it’s just your filesystem [README][website].
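
On the POSIX backend the mapping is as literal as it sounds: a bucket is a directory, an object is a file. A sketch of the translation, with illustrative paths and a gateway serving /data (object metadata handling is glossed over here):

# S3 call from the client           effect on the backing filesystem
PUT  photos/cat.jpg          ->     /data/photos/cat.jpg written
GET  photos/cat.jpg          ->     /data/photos/cat.jpg read back
LIST photos/                 ->     /data/photos/ enumerated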

The use case this solves is real and underserved: you have a NAS, a ZFS pool, an HPC parallel filesystem, or even a local disk, and you have software — backup tools, media servers, CI artifact stores, ML training pipelines — that speaks S3 and only S3. Previously your options were: pay AWS, deploy and manage MinIO (which rewrites your data into its own internal format), or configure a complex Ceph cluster. VersityGW adds a fourth option: leave the data exactly where it is and wrap it in S3.

The project is built in Go using the Fiber HTTP framework, chosen specifically for performance. It uses the official aws-sdk-go-v2 for S3 protocol compatibility, which means it stays close to AWS’s own spec rather than reimplementing from scratch [README]. The architecture is stateless by design — every gateway instance is interchangeable, which means you can run two, five, or twenty of them behind a load balancer and throughput scales linearly [README][website].

Backend support as of this review: generic POSIX filesystem (any filesystem the OS can mount), Versity’s own ScoutFS filesystem (purpose-built for HPC workloads), Azure Blob Storage, and other S3 servers as a proxy target [README]. The modular backend design is explicitly called out in the documentation as an extension point for the community.

Versity Software is a real company with paying customers in research computing. Collaborators on the project include Los Alamos National Laboratory and the Pawsey Supercomputing Research Centre in Australia [website] — which tells you something about the target audience: people running serious storage infrastructure who don’t want to throw it away to get S3 compatibility.


Why people choose it over MinIO, Ceph, or just paying AWS

User reviews of VersityGW are sparse — 1,314 stars and a GitHub discussions forum don’t give you the Trustpilot corpus you’d get for a polished SaaS product. So instead of synthesizing five detailed reviews, here’s what the project’s design choices tell you about why organizations pick it.

The POSIX co-access argument. The biggest differentiator is that files remain files. Applications that need POSIX access (HPC jobs, rsync, local tools, NFS shares) and applications that need S3 access (cloud-native software, backup agents, Kubernetes operators) can hit the same data simultaneously without conversion or duplication. MinIO doesn’t give you that — once data goes into MinIO, it lives in MinIO’s format [README]. In research and enterprise storage environments like those running TrueNAS-based NAS arrays [1][2], this co-access model is often the only viable option short of maintaining two copies of everything.
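
A minimal illustration of that co-access, assuming the one-command gateway from the next section is serving /data and an experiments bucket (that is, a directory) already exists; paths and port are illustrative:

export AWS_ACCESS_KEY_ID="testuser" AWS_SECRET_ACCESS_KEY="secret" AWS_DEFAULT_REGION="us-east-1"

cp results.csv /data/experiments/                                  # any POSIX tool writes an ordinary file
aws --endpoint-url http://localhost:10000 s3 ls s3://experiments/  # S3 clients see it immediately, same bytes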

Single-binary simplicity. The README’s “Turn your local filesystem into an S3 server with a single command” pitch isn’t marketing copy — it’s literally one line:

ROOT_ACCESS_KEY="testuser" ROOT_SECRET_KEY="secret" ./versitygw --port :10000 posix /data

That’s it. MinIO is similarly simple for basic setups, but it rewrites your data. Ceph requires weeks of planning and multiple nodes before it’s production-ready. VersityGW just reads and writes the files that are already there [README].
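
A quick smoke test of that instance, assuming the aws CLI is installed; the credentials and port match the quickstart line above:

export AWS_ACCESS_KEY_ID="testuser" AWS_SECRET_ACCESS_KEY="secret" AWS_DEFAULT_REGION="us-east-1"

aws --endpoint-url http://localhost:10000 s3 mb s3://demo            # creates the directory /data/demo
echo "hello" > hello.txt
aws --endpoint-url http://localhost:10000 s3 cp hello.txt s3://demo/
ls /data/demo                                                        # the object is an ordinary file: hello.txt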

Performance via Fiber + Go. The README explicitly calls out that Fiber outperforms older Go HTTP frameworks like gorilla/mux, and Go’s concurrency model is the right tool for a high-throughput proxy that’s doing I/O translation rather than heavy computation [README][website]. For HPC use cases — Los Alamos-scale data movement — throughput per gateway node matters.

Apache 2.0 license with no strings. This is a practical differentiator against tools with more restrictive licensing. You can deploy it in commercial products, resell managed services built on top of it, run it at clients without a licensing conversation [README][website].


Features

Core S3 translation:

  • S3 API endpoint supporting standard operations: PutObject, GetObject, DeleteObject, ListObjects, CreateBucket, and more [README]
  • Object versioning with a configurable versioning directory [README]
  • HTTPS support
  • IAM: flat JSON file-based accounts for testing; external IAM integrations for production [README]
  • Full REST API: the S3 interface itself is standard HTTP, so any S3 SDK or SigV4-capable client can drive it

Backend support:

  • POSIX (any standard filesystem: ext4, XFS, ZFS, NFS-mounted, anything) [README]
  • ScoutFS (Versity’s open-source HPC filesystem) [README][website]
  • Azure Blob Storage [README]
  • Proxy to other S3-compatible endpoints [README]

Deployment:

  • Single binary: Linux amd64/arm64, macOS amd64/arm64, BSD amd64/arm64 [README]
  • Docker and Helm for Kubernetes deployments [profile] (container sketch after this list)
  • Stateless clustering: run multiple instances behind any load balancer, no shared state required [README][website]
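
For the Docker route, a hedged sketch only: the image name, published port, and argument passing are assumptions to verify against the project's registry and wiki before use:

# image name and entrypoint arguments are assumed, check the wiki
docker run -d --name versitygw \
  -p 10000:10000 \
  -v /data:/data \
  -e ROOT_ACCESS_KEY="testuser" \
  -e ROOT_SECRET_KEY="secret" \
  versity/versitygw --port :10000 posix /data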

Optional WebGUI: The project added an optional web-based management and explorer interface (documented separately on the wiki). It gives non-CLI users a way to browse buckets, inspect objects, and manage configuration without dropping into a terminal [README].

What’s missing:

  • No built-in multi-tenancy beyond basic IAM
  • No erasure coding or data redundancy — that’s the underlying storage system’s job
  • No paid cloud SaaS tier — it’s purely self-hosted

Pricing: the actual cost math

VersityGW has no SaaS version and no tiered pricing page. The software is free under Apache 2.0. Enterprise support (SLAs, professional services, integration help) is available via Versity Sales at undisclosed pricing [website].

The relevant cost comparison is what VersityGW replaces:

AWS S3:

  • $0.023/GB/month for standard storage (us-east-1)
  • $0.005 per 1,000 PUT/COPY/POST requests; $0.0004 per 1,000 GET requests
  • Data transfer out: $0.09/GB

Self-hosted with VersityGW:

  • Software: $0 (Apache 2.0) [README]
  • Storage: whatever your existing hardware costs — if the storage already exists, the marginal cost is near zero
  • A VPS with 1TB attached storage: $15–30/mo on Hetzner or Contabo
  • VersityGW process overhead: minimal (stateless Go binary) — runs comfortably alongside other workloads

Concrete example: A startup storing 10TB of ML training datasets and build artifacts on AWS S3 pays roughly $230/month just for storage, before request and egress fees. The same 10TB on a Hetzner dedicated server with VersityGW in front runs roughly $40–80/month all-in. If you already have a NAS at the office or a colocated server, the marginal cost is the electricity and the gateway binary — which costs nothing.
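
For reference, the arithmetic behind that figure: 10 TB is roughly 10,240 GB, and 10,240 GB × $0.023/GB/month ≈ $236/month for storage alone, before the request and egress line items.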

The math only holds if you already have storage infrastructure or are willing to run a server. If you’re starting from scratch with a $5 VPS and no existing data, MinIO or Backblaze B2 may be simpler entry points.


Deployment reality check

The quickstart is genuinely quick. Single binary, set two environment variables, point it at a directory, done. The README’s example works — this is not a project that oversells ease of setup [README].

What you actually need for production:

  • A server or VPS with the storage you want to expose (or a mounted NFS/CIFS share)
  • The VersityGW binary (or Docker image)
  • A reverse proxy (Caddy or nginx) for TLS termination — the gateway handles HTTP/HTTPS but you’ll want proper cert management in front (minimal example after this list)
  • An IAM decision: flat JSON files work for development and simple setups; larger deployments will want external IAM
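
For the reverse-proxy item above, one minimal approach is Caddy's one-shot reverse proxy mode, which provisions and renews certificates automatically; the domain is illustrative:

caddy reverse-proxy --from s3.example.com --to localhost:10000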

Clustering: Multiple gateway instances with no shared state means horizontal scaling is genuinely straightforward — throw a load balancer in front and run as many gateway processes as throughput demands. Each instance reads from the same filesystem. No Raft consensus, no cluster bootstrapping [README][website].
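
A sketch of what that looks like in practice, reusing the quickstart flags from earlier; the ports and the Caddy front end are illustrative, and recent Caddy releases round-robin across repeated --to flags:

# two interchangeable gateway processes reading the same directory
ROOT_ACCESS_KEY="testuser" ROOT_SECRET_KEY="secret" ./versitygw --port :10001 posix /data &
ROOT_ACCESS_KEY="testuser" ROOT_SECRET_KEY="secret" ./versitygw --port :10002 posix /data &

# any load balancer works in front of the stateless instances
caddy reverse-proxy --from s3.example.com --to localhost:10001 --to localhost:10002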

What can go wrong:

  • Concurrent write semantics between POSIX applications and S3 applications hitting the same data are on you to manage. S3 object semantics and POSIX file semantics are not identical — race conditions are possible if both access patterns are active simultaneously.
  • The flat JSON IAM is explicitly marked as “for testing” in the README — don’t run it in production for multi-user deployments without a proper IAM backend [README].
  • Community is small. If you hit a bug or edge case, you’re looking at GitHub discussions and the Versity Sales contact page for help. There’s no Stack Overflow community, no dedicated subreddit, no third-party tutorials at scale.
  • Documentation quality is uneven — the GitHub wiki covers basics but there are gaps for advanced configurations.

Realistic setup time: 15–30 minutes for a basic POSIX backend on a machine you already manage. A production clustered deployment with proper TLS, IAM, and load balancing: half a day for someone who’s deployed Go services before.


Pros and cons

Pros

  • Non-destructive S3 layer. Your files stay files. POSIX and S3 access coexist without data migration or format conversion [README]. This is the core feature that MinIO can’t match.
  • Apache 2.0 license. Commercial use, embedding, reselling — no legal conversation needed [README][website].
  • Stateless and clusterable. Linear throughput scaling by adding gateway instances. No cluster state to manage [README][website].
  • Genuine simplicity. One binary, one command, working S3 server. The quickstart isn’t aspirational — it works [README].
  • Multi-arch. Linux, macOS, BSD on both amd64 and arm64. Runs on ARM servers and Apple Silicon without fuss [README].
  • Reputable collaborators. Los Alamos National Laboratory and Pawsey Supercomputing Research Centre aren’t marketing partners — they’re real HPC institutions with serious storage requirements [website].
  • Optional WebGUI. Not every self-hosted tool ships a management interface. Having an explorer and admin panel reduces the CLI barrier for less technical operators [README].

Cons

  • Small community. 1,314 stars. Almost no user-written tutorials, community guides, or third-party reviews. You’re largely on your own outside the GitHub wiki and official support.
  • HPC/enterprise positioning. The design decisions and documentation assume you know what POSIX means and why you’d run a stateless cluster. This is not a tool for non-technical founders exploring self-hosting for the first time.
  • No public enterprise pricing. “Contact Versity Sales” for production support means no ability to budget without a sales conversation [website].
  • POSIX/S3 semantic mismatch is your problem. The gateway translates calls but doesn’t resolve the fundamental differences between object storage semantics and file semantics. Concurrent mixed-access patterns require application-level care.
  • Ecosystem is nascent. No large community of operators sharing configs, no Helm chart repositories with battle-tested values, no cloud provider marketplace listing.
  • Backend options are limited. POSIX, ScoutFS, Azure Blob, and S3 proxy. There’s no native support for NFS shares as a distinct backend type, no SMB, no Ceph RADOS direct backend — you go through POSIX for most things [README].

Who should use this / who shouldn’t

Use VersityGW if:

  • You have existing file-based storage (a NAS, a ZFS pool, an HPC filesystem, a mounted drive) and need S3-compatible access for backup software, ML pipelines, or cloud-native apps.
  • You need POSIX and S3 access to the same data simultaneously without maintaining two copies.
  • You’re migrating off AWS S3 or another object store and want a self-hosted S3 endpoint that applications can point to without code changes.
  • You’re comfortable running Go binaries and doing basic Linux server administration.
  • You need Apache 2.0 licensing for commercial products or client deployments.

Skip it (use MinIO instead) if:

  • You don’t have existing storage and want a full, self-contained S3-compatible object store.
  • You want a polished out-of-box experience with an extensive community, tutorials, and ecosystem.
  • You need erasure coding, tiering, or advanced object lifecycle management built into the storage layer.

Skip it (use AWS S3 or Backblaze B2) if:

  • You don’t manage any servers and don’t want to start.
  • Your storage needs are modest and the simplicity of managed cloud outweighs the cost.
  • Durability SLAs matter more than infrastructure ownership.

Skip it (use Rclone + existing storage) if:

  • You just need to sync files to/from S3-compatible endpoints and don’t need a live S3 server on top of your filesystem.

Alternatives worth considering

  • MinIO — the dominant self-hosted S3-compatible object store. More mature, larger community, richer feature set. But it stores data in its own format — no co-POSIX access. Choose MinIO when you want a standalone object store, not a gateway.
  • Ceph with RGW — the enterprise-grade answer. Massively scalable, full S3 compatibility, erasure coding, multi-site replication. Also massively complex. Minimum viable Ceph cluster is multiple nodes and days of setup. For serious scale or mixed workloads.
  • Rook — Ceph for Kubernetes. Same power and complexity, Kubernetes-native.
  • SeaweedFS — another Go-based distributed file store with S3 API. Simpler than Ceph, more feature-rich than MinIO in some areas. Different use case — it manages its own storage rather than sitting on top of POSIX.
  • Garage — Rust-based, designed for geo-distributed home-lab setups with inconsistent connectivity. It targets a different problem than VersityGW.
  • AWS S3 — the original and still the benchmark. Unlimited scale, no ops burden, per-GB and per-request cost that hurts at volume.
  • Backblaze B2 — S3-compatible, $6/TB/month, strong value for cold/warm storage. No self-hosting option.

The practical shortlist for “I have POSIX storage and need S3” is VersityGW vs MinIO. VersityGW if the data needs to stay accessible as files. MinIO if you’re comfortable with a dedicated object store that owns its data.


Bottom line

VersityGW solves one specific problem well: exposing existing POSIX storage via S3 without moving or reformatting the data. It’s not trying to be a general-purpose object store, and it doesn’t pretend to be. If you’ve ever had to choose between “keep using the NAS your team already understands” and “use an S3-native backup tool,” VersityGW is the answer to that false dilemma. One binary, one command, your existing filesystem speaks S3.

The honest caveat: this is infrastructure software for people who manage infrastructure. The community is small, documentation has gaps, and enterprise support costs money. If you’re a non-technical founder looking for a weekend self-hosting project, start with something that has a larger community behind it. If you’re a developer or sysadmin with an existing storage investment and S3 compatibility as a requirement, VersityGW is worth a serious look — and the Apache 2.0 license means you can evaluate it, deploy it, and build on it without a procurement conversation.


Sources

  1. TrueNAS 25.04 (Fangtooth) Version Notes — TrueNAS Documentation Hub. https://www.truenas.com/docs/scale/25.04/gettingstarted/scalereleasenotes/

  2. TrueNAS 24.10 (Electric Eel) Version Notes (Archived) — TrueNAS Documentation Hub. https://www.truenas.com/docs/scale/24.10/gettingstarted/scalereleasenotes/

Primary sources:

  • [README]: VersityGW repository and README. https://github.com/versity/versitygw
  • [website]: Versity Software. https://www.versity.com
  • [profile]: Versity GitHub organization profile. https://github.com/versity