unsubbed.co

Tdarr

Tdarr handles distributed transcode automation as a self-hosted solution.

Distributed media transcoding, honestly reviewed. No marketing fluff, just what happens when you let software decide which files are too fat for your drives.

TL;DR

  • What it is: A distributed transcoding automation system that scans your media library, applies conditional rules (codecs, containers, bitrate thresholds), and sends files through FFmpeg or HandBrake — automatically, on a schedule, across multiple machines [README][1].
  • Who it’s for: NAS owners and homelab operators running Plex or Jellyfin whose video libraries have grown past what their storage can comfortably handle, specifically people who don’t want to manually re-encode thousands of files [1][4].
  • Cost savings: Tdarr itself is free to self-host. The savings come from storage reclaimed — H.264 to H.265 conversion cuts file sizes 40–50% [README]. One How-To Geek writer saved 7TB without re-ripping a single disc [4].
  • Key strength: Plugin stack system lets you build conditional processing chains (transcode this, strip those subtitles, add stereo audio if missing) that run entirely in the background once configured [1][4].
  • Key weakness: License is non-standard (not MIT, not Apache — the repo lists NOASSERTION and the actual LICENSE.md contains custom terms). Plugin customization requires JavaScript. Not a tool you hand to a non-technical family member [README].

What is Tdarr

Tdarr is a distributed transcoding automation system. You point it at your media library, define what “good” looks like (H.265, stereo audio, no embedded subtitles, specific bitrate ceiling), and it processes every file that doesn’t meet those criteria — using FFmpeg or HandBrake under the hood. The README describes it plainly: “a cross-platform conditional based transcoding application for automating media library transcode/remux management” [README].
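To make that concrete, here is a hedged sketch of the kind of FFmpeg invocation such a tool drives per file for an H.264 to H.265 conversion; the flags (libx265, CRF 22, stream copies for audio and subtitles) are illustrative assumptions, not Tdarr's exact command:

```shell
# Illustrative sketch only: roughly what an automated H.264 -> H.265 pass
# boils down to per file. Flags are assumptions, not Tdarr's exact command.
in="movie.mkv"
out="movie.hevc.mkv"
cmd="ffmpeg -i $in -map 0 -c:v libx265 -crf 22 -preset medium -c:a copy -c:s copy $out"
echo "$cmd"   # inspect first; execute with: eval "$cmd"
```

Running that across thousands of files, with conditions, scheduling, and failure recovery, is the part Tdarr automates.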

The architecture is a two-component web app: Tdarr_Server (the central coordinator) and Tdarr_Node (the worker processes that do actual encoding). Both can run on the same machine or across multiple boxes — say, a NAS running the server and a beefier desktop acting as an additional node when it’s idle [README][1].

Each library you set up gets its own transcode settings, filters, and schedule. Workers are split into four types: Transcode CPU, Transcode GPU, Health Check CPU, and Health Check GPU. GPU hardware transcoding is supported on Nvidia hardware (via unRAID plugin or Nvidia runtime container on Ubuntu) [README].

As of this review the GitHub repository sits at approximately 4,000 stars with 117 forks. The project is maintained by HaveAGitGat and has been in active development since at least 2019.

One important note upfront: Tdarr is not open-source in the traditional sense. The repository’s license is listed as NOASSERTION by GitHub’s detection, and the actual LICENSE.md contains custom terms. The software is free to self-host for personal use, but the license is not MIT or GPL — you cannot freely fork, redistribute, or embed it in a commercial product without reviewing the terms directly [README].


Why people choose it

The decision to run Tdarr usually comes from one specific pain point: you have a media library full of H.264 files from an era when H.264 was fine, and now you’re watching your NAS fill up faster than you’re adding drives.

The storage math is the whole argument. The README states explicitly that converting H.264 to H.265 saves 40–50% in size [README]. The How-To Geek writer Patrick Campanale ran Tdarr against his Plex library and came out 7TB lighter: “using Tdarr in my setup saved me 7TB of storage without having to do much else besides set it up at the beginning” [4]. That’s the pitch. Not a philosophy about open source, not a monthly bill comparison — 7TB you don’t have to buy.

The alternative is doing it manually, which no one does. XDA’s Dhruv Bhutani describes the problem well: “Sorting through thousands of files and optimizing manually is out of the question due to the sheer commitment it requires” [1]. Tdarr solves exactly this by becoming a background task you configure once and then largely forget.

Sonarr/Radarr users slot it in naturally. Tdarr was designed to work alongside Sonarr and Radarr [README]. If you’re already running the *arr stack, adding Tdarr is a logical extension — it handles your existing library the same way Radarr handles quality upgrades going forward.

Scaling to spare hardware is a real feature. Most tools in this category are single-machine affairs. Tdarr’s distributed model means an old desktop in the corner can become a dedicated transcode node. Campanale notes: “if you have more advanced needs, you can set up external nodes to handle heavier processing, even across multiple servers or machines, for faster transcoding” [4]. For a household homelab with a mix of hardware generations, this is genuinely useful.


Features

Based on the README and hands-on coverage from reviews:

Core transcoding engine:

  • Conditional plugin stack system — rules fire only when conditions match (e.g., only transcode files above a certain bitrate) [1][4]
  • FFmpeg and HandBrake support — you pick per library [README]
  • CPU and GPU workers — Nvidia hardware acceleration supported [README]
  • Health Check workers — separate worker class for detecting corrupted files [README]
  • Tested against a 1,000,000-file dummy library [README]

Library management:

  • Per-library transcode settings, filters, and schedules [README]
  • 7-day, 24-hour scheduler with granular time windows [README]
  • Folder watcher — new files get picked up automatically [README]
  • Worker stall detector — recovers from hung transcode jobs [README]
  • Load balancing across libraries and drives [README]
  • Search files by hundreds of properties (codec, bitrate, resolution, container, stream count) [README]
  • Library statistics dashboard [README]

Plugin system:

  • Community plugins available immediately from the Tdarr_Plugins repo [README]
  • Plugin creator interface for building custom plugins (JavaScript) [README]
  • Example stack from the README: transcode non-HEVC files, remove subs, strip metadata if titled, add AAC stereo if missing, remove closed captions [README]
  • Each plugin is conditional — it only runs if the condition is met [README]

Flows (newer addition):

  • More precise flow-based file handling beyond the classic plugin stack [1]
  • Allows step-by-step control over how individual files move through processing [1]

Infrastructure:

  • Cross-platform: Windows, macOS, Linux (including ARM/ARM64), Docker [README]
  • Web interface with guided first-run walkthrough [1]
  • Docker Compose deployment with minimal configuration [1]
  • Runs as a standard self-hosted web app — no cloud account required [README]

Pricing: storage savings math

Tdarr has no subscription. The software is free to self-host locally. The economic question isn’t monthly SaaS cost — it’s storage cost avoided.

What Tdarr costs to run:

  • Software: $0 [README]
  • Hardware: whatever you already have (NAS, homelab server, spare desktop)
  • If you want a dedicated VPS node: $5–15/mo on Hetzner or Contabo, though most people run it on existing hardware
  • Electricity: marginal — it runs during scheduled windows and shuts workers down when idle [README]

What it saves:

  • H.264 → H.265 conversion: 40–50% file size reduction [README]
  • 7TB recovered from a real-world Plex library of mixed H.264/H.265 content [4]
  • At $0.02–0.04/GB for NAS expansion drives, 7TB represents roughly $140–280 of drive costs deferred

Alternative: buying storage instead

  • A 6TB NAS drive runs $80–120
  • If your library grows 2–4TB/year in H.264, Tdarr effectively buys you 1–2 extra years before you need more drives
  • Cloud storage for a 50TB media library (if you were somehow paying for it) would run $200–500/mo minimum — not a real comparison, but the scale shows why local transcoding matters

The actual calculation for a typical NAS user: If you’re running a 20–40TB media library that’s 70% H.264 and you convert that to H.265, you recover 6–14TB. At current NAS drive prices that’s $100–250 in deferred hardware, plus the operational breathing room to stop managing drive space constantly.
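That calculation is easy to script. A minimal sketch follows; the library size, H.264 share, savings midpoint, and drive price are all assumptions you should swap for your own numbers:

```shell
# Back-of-envelope storage-savings math. All inputs are assumptions;
# integer arithmetic, so results are rough.
library_tb=30     # total library size (TB)
h264_share=70     # percent of the library still in H.264
reduction=45      # midpoint of the 40-50% H.265 size reduction
price_per_tb=20   # rough $/TB for NAS expansion drives

recovered_tb=$(( library_tb * h264_share * reduction / 10000 ))
deferred_usd=$(( recovered_tb * price_per_tb ))
echo "recovered: ${recovered_tb}TB, deferred drive cost: ~\$${deferred_usd}"
```

With these inputs it reports 9TB recovered and roughly $180 in deferred drive spend, squarely inside the ranges above.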

There’s no paid tier for local self-hosting. Tdarr does not appear to gate features behind a subscription for personal use. This is the honest cost picture.


Deployment reality check

Both reviewers mention setup as accessible by homelab standards, which means Docker-comfortable — not non-technical.

The install path:

  • Docker Compose is the recommended route and the one both XDA and How-To Geek used [1][4]
  • The provided docker-compose file needs media path configuration and then deploys as a stack [1]
  • Server and Node run as separate containers but both deploy from the same compose file [1][README]
  • Web interface launches after container startup; first run includes a guided walkthrough [1]
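As a rough illustration of that layout, a single-box Compose file might look like the following. This is a hedged sketch: the image name, ports, environment variables, and paths are assumptions from memory, so verify them against the current Tdarr docs before deploying.

```yaml
# Hedged single-box sketch: server container with an internal worker node.
# Image name, ports, env vars, and volume paths are assumptions.
services:
  tdarr:
    image: ghcr.io/haveagitgat/tdarr:latest
    ports:
      - "8265:8265"   # web UI
      - "8266:8266"   # server port that external nodes connect to
    environment:
      - internalNode=true   # run a worker inside the server container
      - TZ=Etc/UTC
    volumes:
      - ./server:/app/server     # server state
      - ./configs:/app/configs   # settings
      - /mnt/media:/media        # your library, mounted read-write
```

Additional nodes on other machines would point at the server's address and port instead of setting `internalNode`.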

What you actually need:

  • A Docker-capable host (NAS, Linux server, or desktop)
  • 2–4GB RAM minimum; more if you’re running multiple workers simultaneously
  • Storage paths mounted correctly into the container
  • Network access to the web UI (no reverse proxy required for LAN-only use)
  • GPU passthrough configuration if you want hardware transcoding (Nvidia-specific, documented separately)

What can go sideways:

The plugin/flow configuration is where complexity lives. The XDA review describes a smooth setup experience for the basics, but Campanale at How-To Geek notes: “It might take a bit to get set up, but once it’s configured, you’ll be surprised at just how much space you can save” [4] — which is polite for “expect to spend time on initial configuration.”

Custom plugins require JavaScript. The README is upfront: “written in JavaScript so if none of the plugins do what you want then you can modify/create new plugins if you have a bit of coding experience” [README]. Community plugins cover the common cases (HEVC conversion, subtitle stripping, audio normalization), but edge cases require you to write or adapt code.

Hardware GPU transcoding needs extra setup — Nvidia plugin on unRAID or Nvidia runtime container on Ubuntu. This isn’t automatic and the documentation is more fragmented than the basic CPU path [README].
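For the Ubuntu route, the shape of the setup is roughly this. It is a deployment fragment, not a tested recipe: the node image name, environment variables, and flags are assumptions, and it presumes the NVIDIA Container Toolkit is already installed.

```shell
# Hedged fragment: exposing an Nvidia GPU to a standalone Tdarr node
# container. Image name, env vars, and server address are assumptions.
docker run -d --name tdarr-node \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e serverIP=192.168.1.10 -e serverPort=8266 \
  -v /mnt/media:/media \
  ghcr.io/haveagitgat/tdarr_node:latest
```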

Concurrent transcoding is CPU and power intensive. If your NAS is a low-power ARM device, expect slow processing. The distributed node model solves this but adds setup complexity.

Realistic time estimate: 1–2 hours for a working Docker Compose deployment with basic library rules configured. 4–8 hours for a tuned setup with custom plugins, GPU transcoding, and scheduler windows dialed in. Hardware transcoding on non-Nvidia hardware: budget extra research time.


Pros and cons

Pros

  • Does exactly what it says. Automated background transcoding works. The 40–50% H.264→H.265 size reduction is real, and the 7TB recovered by a real user isn’t a synthetic benchmark [README][4].
  • Distributed node architecture. Most open-source media tools are single-machine. Tdarr genuinely scales across hardware — useful if you have a NAS and a spare desktop [README][1].
  • Community plugin library. The common operations (codec conversion, subtitle removal, audio normalization) are already written and available. You don’t start from zero [README].
  • Conditional processing. Rules only apply when conditions are met. Campanale set his to only touch files above a certain bitrate — already-optimized files were skipped automatically [4].
  • Cross-platform. Windows, macOS, Linux, Docker, including ARM — covers the full range of NAS hardware [README].
  • Passive operation. Once configured, it runs in the background with scheduling, folder watching, and stall detection. Bhutani describes it as essentially setting up and forgetting [1].
  • Integrates with the *arr stack. Designed to work alongside Sonarr/Radarr — it fills a gap those tools don’t cover (existing library optimization) [README].
  • Free for personal use. No subscription, no usage limits for local processing [README].

Cons

  • Non-standard license. Not MIT, not GPL — the license is custom and GitHub’s detector can’t classify it. If this matters for your use case (commercial deployment, redistribution), read the actual LICENSE.md before committing [README].
  • JavaScript required for custom plugins. The community plugins cover common cases, but anything unusual means writing JS. Non-technical users hit a wall here [README].
  • GPU transcoding setup is not trivial. Hardware acceleration needs extra configuration and is Nvidia-centric. AMD and Intel GPU support exists but is less documented [README].
  • Web UI is functional, not polished. Neither reviewer calls it beautiful. It’s a utility dashboard — dense with information, not optimized for casual users [1][4].
  • Initial configuration investment. Both reviewers flag that setup “takes a bit” — the first-run walkthrough helps but the plugin stack and flow system has a learning curve [4][1].
  • Processing is resource-intensive. Transcoding is CPU/GPU heavy by nature. Low-power NAS hardware (Celeron, Atom-class processors) will transcode slowly. This isn’t a flaw but a real constraint to plan around [README].
  • 4,000 GitHub stars is a modest number for a tool that’s been around since at least 2019. The community exists (there’s an active Reddit at r/Tdarr and a Discord) but it’s smaller than Jellyfin or Sonarr [README].
  • No REST API for external integration. Tdarr operates as a standalone system — triggering processing from external tools or building automations around it requires workarounds.

Who should use this / who shouldn’t

Use Tdarr if:

  • You’re running a NAS-based Plex or Jellyfin server and your H.264 library is eating storage faster than you want to buy drives.
  • You’re comfortable with Docker Compose and can tolerate an afternoon of initial configuration.
  • Your media library has 1,000+ files — below that threshold, manual re-encoding is faster than setting up Tdarr’s plugin system.
  • You have spare hardware (old desktop, second NAS) that could serve as a transcode node for faster processing.
  • You want to standardize your entire library to a specific codec/container without touching each file manually.

Skip it if:

  • You’re not comfortable with Docker. Tdarr can be installed as a binary, but the Docker path is far better documented and the binary path adds its own complexity.
  • Your media library is already H.265 or AV1 and you’re not adding much H.264 content. The problem Tdarr solves may not exist for you.
  • You need a polished consumer UI. This is a homelab tool, not a home theater product.
  • You’re a non-technical family member managing a shared server — the plugin system is not self-explanatory.
  • You need a commercial-use or redistributable license — the custom license makes this unsuitable without explicit permission.

Use HandBrake Web or FFmpeg scripts instead if:

  • You have a small library (under a few hundred files) where one-time batch processing is faster than setting up a persistent system.
  • You want full control over encoding parameters per file, without a conditional rules system.

Alternatives worth considering

  • Unmanic — similar automated library transcoding, different plugin system, arguably simpler initial setup. Worth comparing directly if Tdarr’s plugin complexity is a concern.
  • HandBrake Web — browser-based HandBrake interface for one-off or small-batch transcoding without the automation layer [5]. More manual, but simpler.
  • FFmpeg scripts — if you’re comfortable with Bash, a cron job running FFmpeg with find is zero-dependency and infinitely configurable. No UI, no stall detection, no distributed processing.
  • FileFlows — newer distributed media processing tool, similar concept to Tdarr, with an arguably cleaner plugin system. Less established community.
  • Jellyfin’s built-in transcoding — Jellyfin transcodes on-the-fly during playback. This doesn’t optimize stored files, but eliminates the need if your hardware can keep up with real-time transcoding for all simultaneous streams.
  • Plex’s built-in transcoder — same caveat as Jellyfin. On-demand transcoding doesn’t reduce storage; it just lets incompatible clients play files.

For a homelab NAS operator whose problem is specifically “my H.264 library is too big,” the realistic choice is Tdarr vs. Unmanic. Both solve the same problem. Tdarr has more community plugins and a longer track record; Unmanic has a cleaner UI and simpler setup. If you’re also considering one-time batch work, HandBrake Web or scripted FFmpeg are lower-overhead options.
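For scale, the scripted-FFmpeg route can be sketched in a few lines. `plan_h265` is a hypothetical helper, the paths and encoder flags are assumptions, and it is dry-run only (it prints commands rather than running them). Note that everything it lacks, such as stall detection, scheduling, and distributed workers, is exactly what Tdarr layers on top.

```shell
# Minimal dry-run sketch of the scripted-FFmpeg alternative. plan_h265 is
# a hypothetical helper: walk a directory, use ffprobe to find H.264 files,
# and print the ffmpeg command that would re-encode each one to H.265.
plan_h265() {
  find "$1" -type f -name '*.mkv' 2>/dev/null | while IFS= read -r f; do
    # ffprobe reports the codec of the first video stream
    codec=$(ffprobe -v error -select_streams v:0 \
            -show_entries stream=codec_name -of csv=p=0 "$f" 2>/dev/null)
    [ "$codec" = "h264" ] || continue
    echo "ffmpeg -i '$f' -map 0 -c:v libx265 -crf 22 -c:a copy '${f%.mkv}.hevc.mkv'"
  done
}
plan_h265 "${MEDIA_DIR:-/mnt/media}"   # pipe into sh, or swap echo for the real command, to execute
```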


Bottom line

Tdarr is the right tool for one specific problem: a large existing media library full of inefficient H.264 files that you don’t want to re-encode manually. For that problem, it works — the distributed node architecture, conditional plugin stacks, and scheduling system are well thought out for a homelab context, and the real-world storage savings (7TB in at least one documented case [4]) justify the setup investment for libraries of meaningful size.

What Tdarr is not is a beginner-friendly or non-technical tool. The plugin system requires JavaScript for anything beyond community-provided defaults, the license is non-standard, and GPU transcoding setup requires separate research. If your library is measured in terabytes and you run Docker on your NAS, the afternoon spent configuring Tdarr will pay back in storage headroom within weeks. If you’re looking for something to hand to a non-technical family member, look elsewhere.


Sources

  1. Dhruv Bhutani, XDA Developers, “Tdarr is the perfect tool to optimize your movie and TV show storage on your NAS” (Sep 19, 2025). https://www.xda-developers.com/tdarr-perfect-tool-optimize-your-movie-and-tv-show-storage-nas/
  2. Patrick Campanale, How-To Geek, “Homelab projects to try this weekend (March 6–8)” (Mar 6, 2026). https://www.howtogeek.com/homelab-projects-to-try-this-weekend-march-6-8/
  3. Ethan Sholly, selfh.st, “This Week in Self-Hosted (16 August 2024)” (Aug 16, 2024). https://selfh.st/weekly/2024-08-16/
