unsubbed.co

Unpackerr

For media management & *arr stacks, Unpackerr offers a self-hosted way to extract completed downloads for import into *arr apps.

Open-source archive extraction for Radarr, Sonarr, and friends, honestly reviewed. Built for the self-hosted media crowd — not the marketing team.

TL;DR

  • What it is: A background daemon that watches your Radarr, Sonarr, Lidarr, and Readarr queues, extracts compressed archives (RAR, ZIP, 7z, and more) as downloads complete, and cleans up after import [docs].
  • Who it’s for: Anyone running a *arr-based media stack where downloads arrive as compressed archives. If you’ve ever seen a download sit forever in your Radarr queue because it couldn’t import a nested RAR file, Unpackerr is the fix [docs].
  • Cost savings: No SaaS equivalent — this is a purely self-hosted utility, MIT licensed, zero dollars. The comparison isn’t Unpackerr vs. a paid product; it’s Unpackerr vs. hours of manual extraction or janky shell scripts [GitHub].
  • Key strength: Tight, purpose-built integration with the Starr app ecosystem. It doesn’t try to do everything — it does one thing (extract archives so *arr apps can import them) and does it reliably, with Prometheus metrics, Grafana dashboards, and webhook support as bonuses [docs].
  • Key weakness: Very narrow scope by design. If you don’t use the Starr apps, the tool has limited value unless you configure it in standalone folder-watch mode. Documentation is functional but sparse — you won’t find a large community of how-to guides [docs].

What is Unpackerr

Unpackerr is a lightweight background service (daemon) that solves a specific, annoying problem in self-hosted media automation: downloads that arrive as compressed archives (.rar, .zip, .7z, etc.) that Radarr, Sonarr, Lidarr, and Readarr can’t import on their own.

The typical scenario: you’ve got Sonarr managing your TV library, connected to a download client (qBittorrent, SABnzbd, NZBGet). A usenet release arrives as a multi-part RAR archive. Sonarr marks it downloaded but can’t import it — the episode files are buried inside the archive. Without an extraction step in between, that download sits in your activity queue indefinitely, or you extract it manually every time.

Unpackerr sits between your download client and your *arr apps and handles this automatically. It polls Radarr, Sonarr, Lidarr, and Readarr at a configurable interval, finds items marked as completed by the download client, checks for extractable archives, runs the extraction into a staging folder, and then moves the files back so the *arr app’s Completed Download Handling can import them normally. When the import completes and the item drops out of the queue, Unpackerr deletes the extracted files [docs].

It can also run in a standalone folder-watch mode with no *arr apps involved — point it at a download directory and it extracts whatever shows up. This makes it usable even if you’ve abandoned the Starr ecosystem entirely [docs].

The project lives at https://unpackerr.zip, is MIT licensed, has 1,374 GitHub stars, and is maintained under the golift organization. It’s not a big-name project, but it has a clear purpose and an active enough community on Discord.


Why People Choose It

The honest answer: people don’t typically search for Unpackerr and then evaluate it against alternatives. They run into the “stuck in queue” problem, Google it, and find that Unpackerr is the accepted solution in the Servarr community. It’s the kind of tool that gets installed once and forgotten.

One real-world example from the available evidence: a homelab operator running an Ansible-automated setup specifically called out Unpackerr as one of the resource-intensive media processing containers they orchestrate through Kestra, scheduling it during off-peak hours to avoid impacting Plex playback [1]. That’s the pattern — Unpackerr isn’t a daily interaction; it’s infrastructure you configure once and trust to handle extraction quietly in the background.

The argument for Unpackerr over DIY scripts is reliability and observability. Shell scripts that watch a folder and call unrar x are brittle — they don’t know when a download is actually done vs. still downloading, they don’t know when the *arr app has imported the files (so they can’t safely clean up), and they produce no metrics. Unpackerr handles all three by integrating directly with the *arr app APIs [docs].

The argument for Unpackerr over SABnzbd’s built-in extraction: SABnzbd can extract archives itself, but it doesn’t communicate with Radarr/Sonarr about import status, which means it may extract too early or delete files before import completes. Unpackerr’s integration is tighter [docs].


Features

Based on the official documentation:

Core extraction engine:

  • Polls Radarr, Sonarr, Lidarr, Readarr (and Whisparr) APIs at configurable intervals [docs]
  • Checks download client status — only extracts when status is Completed [docs]
  • Extracts to a temporary staging folder, then moves back to the download location for Completed Download Handling [docs]
  • Deletes extracted files after the *arr app imports and drops the item from its queue [docs]
  • Recursive extraction: handles archives within archives, deep folder structures [docs]
  • Extracts subtitle files alongside media [docs]
  • Can extract to a different location than the source (configurable) [docs]

Archive format support: The docs list rar, tar, tgz, gz, zip, 7z, bz2, tbz2, and iso. Multi-file and password-protected archives are supported for RAR and 7ZIP. ISO is disabled by default (you enable it explicitly). Archives are detected by file extension [docs].

Standalone folder-watch mode: No *arr apps required. Point Unpackerr at a directory, it extracts everything it finds. Useful for people who download outside the Starr ecosystem or want a general-purpose extraction daemon [docs].
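A minimal folder-watch config, sketched from the pattern in the docs' example file (block and key names follow the sample unpackerr.conf, but the path and values here are illustrative; verify against the example shipped with your version):

```toml
# Standalone mode: no *arr apps, just watch a directory.
# One [[folder]] block per watched path.
[[folder]]
  path = "/downloads/auto"    # directory to watch for new archives
  delete_after = "10m"        # remove extracted files after this delay
  delete_original = false     # keep the archive after extraction
```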

Observability:

  • Prometheus metrics endpoint — actual structured metrics, not just log lines [docs]
  • Pre-built Grafana dashboard for visualizing extraction activity [docs]
  • Webhook support: sends events on extraction start, success, failure [docs]
  • Script/command execution hooks based on extraction events [docs]
  • Described in the docs as having “rich logs” [docs]
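Webhooks are configured per-URL in the same TOML file. A hedged sketch (the block name follows the docs' example config; the endpoint is hypothetical, and event-filtering options vary by version, so treat this as illustrative):

```toml
[[webhook]]
  url = "https://hooks.example.com/unpackerr"  # hypothetical endpoint
  events = [0]                                 # 0 = send all extraction events
```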

What it doesn’t do:

  • No web UI — it’s a headless daemon [docs]
  • No built-in download client (it doesn’t fetch anything, only extracts)
  • No media management or renaming (that’s the *arr apps’ job)
  • No scheduling (it polls on an interval, not at specific times — if you want schedule-based control, external tools like Kestra or cron handle that [1])

Pricing: SaaS vs Self-Hosted Math

There is no SaaS alternative to Unpackerr. This is a purely self-hosted, MIT-licensed utility. It costs $0 to run.

The math for running it:

  • Software: $0 (MIT license, no commercial tier, no cloud version) [GitHub]
  • Hardware: it runs on the same machine as your download client or seedbox; resource consumption is minimal (a small Go binary polling API endpoints and extracting archives)
  • If you’re already paying for a VPS or homelab server for your *arr stack, Unpackerr adds essentially no marginal cost

The comparison isn’t “Unpackerr vs. a paid product.” It’s “Unpackerr vs. your time.” If you’re manually extracting archives once a week, that’s 10–30 minutes of tedious work you can fully eliminate. If you’re running a large media library with frequent usenet or private tracker downloads, manual extraction becomes genuinely painful — which is why this tool exists.


Deployment Reality Check

Unpackerr runs on Linux, macOS, Windows, FreeBSD, and Docker. The docs explicitly note you can run it from your home folder on a seedbox [docs][homepage]. Docker is the recommended path for most homelab setups, and it’s the most straightforward.

Basic Docker setup: The docker-compose is simple — one container, a config volume, and a mount to your media/download directory. A real deployment from a community member’s Ansible playbook [1] shows the practical setup: the golift/unpackerr:latest image, a 1000:100 user (matching the Plex/Sonarr user), port 5656 exposed for the metrics endpoint, a config volume for persistence, and a single bind mount to /mnt/data for the download directory.
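Reconstructed from that description, a docker-compose sketch might look like the following (the image tag, user, port, and mounts come from the community example [1]; the UN_* environment variables follow the naming scheme in the Unpackerr docs, so double-check them against the current reference):

```yaml
services:
  unpackerr:
    image: golift/unpackerr:latest
    container_name: unpackerr
    user: "1000:100"              # match your Plex/Sonarr user
    ports:
      - "5656:5656"               # Prometheus metrics endpoint
    volumes:
      - ./unpackerr:/config       # persists unpackerr.conf
      - /mnt/data:/mnt/data       # must match what your *arr apps see
    environment:
      # Config can also live entirely in env vars instead of the TOML file:
      - UN_SONARR_0_URL=http://sonarr:8989
      - UN_SONARR_0_API_KEY=your-sonarr-api-key
    restart: unless-stopped
```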

Configuration: The primary config is a TOML file (or equivalent environment variables) where you list each *arr app with its URL and API key, set polling intervals, and configure which archive types to handle. There’s no web UI — you edit the config and restart. For anyone comfortable with basic config files, this is fine. For someone expecting a GUI, it may feel primitive.
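A minimal unpackerr.conf sketch for a Sonarr + Radarr setup (the structure follows the documented example config; URLs, API keys, and paths here are placeholders):

```toml
interval = "2m"            # how often to poll the *arr queues

[[sonarr]]
  url = "http://127.0.0.1:8989"
  api_key = "your-sonarr-api-key"
  paths = ["/downloads"]   # where this app's downloads land

[[radarr]]
  url = "http://127.0.0.1:7878"
  api_key = "your-radarr-api-key"
  paths = ["/downloads"]
```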

What you actually need before installing:

  • Your *arr apps (Radarr, Sonarr, etc.) already running and connected to a download client
  • API keys from each *arr app
  • Docker or a package manager (DEB/RPM packages available via packagecloud, Homebrew on Mac) [docs]
  • Network access from Unpackerr’s container to your *arr app containers
  • Shared volume mounts so Unpackerr can read the download directory

What can go sideways:

  • Volume mount misconfiguration is the most common issue — Unpackerr needs to see the exact same paths that your *arr apps see, or extraction will happen but *arr won’t find the files for import. This is a Docker networking/volume problem, not an Unpackerr bug, but it trips up new users.
  • If your download client extracts archives itself (SABnzbd has this built in), you can end up with double extraction or conflicts. Disable the download client’s built-in extraction when using Unpackerr.
  • ISO support is off by default — you enable it explicitly if your rips are in disc image format [docs].
  • No guidance in the official docs for what to do when a password-protected archive fails (logs the error, but you’re on your own for figuring out the password).
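To make the volume-mount pitfall concrete, here is an illustrative compose fragment (service names and paths are made up): the failure mode is Unpackerr and Sonarr mapping the same host directory to different container paths, so the paths Unpackerr reports back don’t resolve inside Sonarr.

```yaml
# Broken: same host dir, different container paths.
# Unpackerr extracts successfully, but Sonarr can't find the result.
sonarr:
  volumes:
    - /mnt/data:/data
unpackerr:
  volumes:
    - /mnt/data/downloads:/downloads

# Working: identical mapping in every container.
sonarr:
  volumes:
    - /mnt/data:/data
unpackerr:
  volumes:
    - /mnt/data:/data
```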

Realistic setup time for someone already running a *arr stack: 15–30 minutes to a working installation. Most of that is looking up API keys and getting the volume mounts right in your docker-compose. If you’ve never run Docker before, add a few hours for general Docker setup first.


Pros and Cons

Pros

  • Solves a real, specific problem. If RAR-packed downloads are getting stuck in your *arr import queues, this is the correct fix — not a workaround [docs].
  • MIT licensed, zero cost. No commercial tier, no “premium features,” no pricing page. The full thing, free [GitHub].
  • Low resource footprint. Written in Go; it’s a small binary that polls APIs and extracts archives. It won’t compete with your transcoder for CPU.
  • Tight *arr integration. Knows when a download is actually complete (vs. still in progress), when *arr has imported it, and only cleans up extracted files after import confirms. DIY scripts can’t match this without significant effort [docs].
  • Observability out of the box. Prometheus endpoint + Grafana dashboard is more than most single-purpose utilities bother to provide [docs].
  • Broad archive support. RAR, ZIP, 7z, tar variants, ISO, recursive extraction, encrypted archives — covers the full range of what you’ll encounter on usenet or private trackers [docs].
  • Cross-platform. Linux, macOS, Windows, FreeBSD, Docker, even seedbox home-folder installs [docs][homepage].
  • Webhook + script hooks. Can trigger downstream automation on extraction events — useful for pipelines like the Kestra-based scheduling approach [1][docs].

Cons

  • Very narrow scope. If you’re not running *arr apps and don’t need folder-watch extraction, there’s no reason to install this. It solves one problem.
  • No web UI. Configuration is a flat config file. There’s no dashboard to show “currently extracting X,” no way to trigger a manual extraction through a browser. The Grafana dashboard shows historical data, not current state.
  • Sparse documentation. The docs cover the basics, but edge cases (troubleshooting specific extraction failures, handling specific download client quirks) rely on Discord or GitHub issues. Not a problem once you’re running, but onboarding friction is real.
  • Volume mount pain in Docker. Getting paths consistent between Unpackerr, your download client, and your *arr apps in a Docker environment requires care. Not Unpackerr’s fault, but it’s where most setup failures happen.
  • No built-in scheduling. It polls on an interval continuously. If you want to run extraction only during off-peak hours (to not impact Plex playback, as one user specifically called out [1]), you need an external scheduler like Kestra, cron, or a start/stop Ansible playbook.
  • 1,374 GitHub stars — small community relative to the *arr apps it serves. Less likely to find a ready-made guide for your specific setup.

Who Should Use This / Who Shouldn’t

Use Unpackerr if:

  • You run Radarr, Sonarr, Lidarr, or Readarr and your download source (usenet especially, some private trackers) delivers compressed archives.
  • You’ve ever seen a download sit in your *arr activity queue marked complete but unimportable.
  • You want extraction to happen automatically without writing or maintaining shell scripts.
  • You want metrics and logs on what’s being extracted and when.

Skip it if:

  • Your download client handles extraction reliably and your *arr apps are importing successfully — don’t fix what isn’t broken.
  • Your downloads don’t use compressed archives (direct file torrents, for example).
  • You don’t use the *arr ecosystem at all — the standalone folder-watch mode is functional but less compelling than dedicated extraction utilities.
  • You want a web UI for everything — Unpackerr is CLI/config only.

Alternatives Worth Considering

  • Download client built-in extraction (SABnzbd, NZBGet): Both support extraction natively. The trade-off is they don’t know when your *arr app has finished importing, so cleanup timing is less precise. For simple setups, this may be enough.
  • FileBot: A powerful media renaming and organization tool that also handles extraction. Much broader scope (renaming, metadata, organization), steeper learning curve, and the current version requires a license fee for some features.
  • Manual unrar/7z scripts via cron: Technically free, but you lose the *arr API integration entirely — your script doesn’t know when import is done, so it can’t clean up safely, and it doesn’t know when a download is actually complete vs. still in progress.
  • Bazarr: This is for subtitle management, not archive extraction — but it’s commonly installed alongside the same *arr stack and worth knowing about.
  • Nothing: If you’re using modern torrent releases in MKV/MP4 directly (common with public trackers), you may not need extraction at all. Usenet users are far more likely to need Unpackerr than torrent-only users.

There is no direct SaaS competitor to Unpackerr because this is an infrastructure utility, not a product category.


Bottom Line

Unpackerr is a textbook example of a well-scoped utility: it does one thing, does it correctly, and integrates cleanly with the ecosystem it was built for. If you run the Starr apps and pull from usenet (or any source that packages content in compressed archives), this is the correct solution to the import-queue-stuck problem — not a workaround, not a script you maintain yourself. The MIT license, Go binary footprint, and built-in Prometheus/Grafana support make it easy to adopt and easier to trust long-term.

The honest caveat: it’s a daemon you configure once and forget, which means the documentation gaps and lack of a web UI matter during setup and rarely after. Budget 30 minutes to get volume mounts right, expect to consult the Discord at least once, and then expect it to run quietly in the background indefinitely.

If you’re building or refining a self-hosted media stack and haven’t hit the RAR-extraction problem yet — install it now before you do. If you’re spending time manually extracting archives so Radarr can import them, that’s exactly the unsubbed.co use case: one afternoon of setup, one recurring annoyance permanently removed.


Sources

  1. blog.php-systems.com, “Automating Ansible playbooks with Kestra” — describes deploying Unpackerr via Ansible/Docker and scheduling it through Kestra to avoid impacting Plex playback. https://blog.php-systems.com/automating-ansible-playbooks-with-kestra/

Primary sources: the official Unpackerr documentation (https://unpackerr.zip) and the project’s GitHub repository, cited inline as [docs] and [GitHub].