unsubbed.co

bitmagnet

Bitmagnet gives you a BitTorrent indexer on your own infrastructure.

A self-hosted BitTorrent indexer and DHT crawler, honestly reviewed. No marketing fluff — just what you get when you run your own metadata search engine.


TL;DR

  • What it is: An MIT-licensed, self-hosted BitTorrent indexer that crawls the DHT network to discover and classify torrents entirely on its own, with no reliance on external trackers or indexers [website].
  • Who it’s for: Power users who want a personal, self-contained torrent search engine; Servarr stack operators (Sonarr, Radarr, Prowlarr) looking for an additional indexer source; home media managers building a metadata-rich library catalog [1][5].
  • Cost savings: No SaaS pricing exists for this category — the direct comparison is against public torrent indexers that run advertising or require paid accounts, and against self-hosted indexer proxies like Jackett and Prowlarr that still depend on external sites staying alive.
  • Key strength: The DHT crawler is genuinely unique. bitmagnet discovers torrents from the distributed hash table network itself — it doesn’t scrape external websites, doesn’t depend on any tracker staying online, and keeps growing its index autonomously after you start it [website][1].
  • Key weakness: The software is explicitly in alpha. No authentication has been implemented yet, meaning the web UI and API expose destructive actions to anyone who can reach them. Not suitable for public exposure, and API/database schema changes are expected before any 1.0 release [website][1].

What is bitmagnet

bitmagnet is a self-hosted BitTorrent indexer built around one core idea: crawl the DHT network instead of scraping torrent sites. Most torrent discovery tools — Jackett, Prowlarr, NZBHydra — work by querying external torrent sites or tracker APIs and returning their results. When a site goes down or blocks scrapers, those tools lose that source. bitmagnet takes a different approach entirely.

The BitTorrent DHT (Distributed Hash Table) is the network that BitTorrent clients use to find peers without a central tracker. Less commonly known: you can also crawl the info hashes that DHT nodes know about. bitmagnet does exactly this — it joins the DHT as a node, discovers info hashes as they propagate across the network, then fetches metadata about each one. From there it classifies content, enriches it with data from The Movie Database (TMDB), and makes everything searchable through its own web UI and GraphQL API [website].

The result is a torrent index that grows autonomously after you start it, fed directly from the global BitTorrent network, with no external dependencies beyond TMDB enrichment (which is optional). It indexes not just movies and TV shows but any content type, with metadata fields for language, resolution, and source format (BluRay, webrip, etc.) [website].
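The info hashes at the heart of this process are the same identifiers used in magnet links. A minimal Python sketch of that relationship (illustrative only; `magnet_uri` is a hypothetical helper, not part of bitmagnet):

```python
# Illustrative only: how a DHT-discovered info hash maps to a magnet URI.
# magnet_uri is a hypothetical helper, not a bitmagnet function.
from urllib.parse import quote

def magnet_uri(info_hash_hex, display_name=None):
    """Build a magnet URI from a 40-character hex BitTorrent v1 info hash."""
    if len(info_hash_hex) != 40:
        raise ValueError("expected a 40-character hex info hash")
    uri = "magnet:?xt=urn:btih:" + info_hash_hex.lower()
    if display_name:
        uri += "&dn=" + quote(display_name)
    return uri

print(magnet_uri("a" * 40, "Example Name"))
```

Any BitTorrent client will accept a URI in this shape; bitmagnet's value-add is everything it attaches to the hash after discovery.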

The project sits at 3,918 GitHub stars and is licensed under MIT. Its own homepage leads with the word “Important” in a warning box: the software is currently in alpha, and schema changes before a theoretical 1.0 release are expected [website]. That’s the context everything else has to be read against.


Why people choose it

The review coverage for bitmagnet is thin — which is itself meaningful data. The project has just shy of 4,000 GitHub stars and got a mention in the German Linux podcast LinuxLounge around its v0.1.0 release as “a BitTorrent client without central trackers, a crawler for the DHT” [4]. That’s roughly the level of visibility it has: known to the self-hosting community, not yet reviewed by mainstream tech outlets.

The practical reason people reach for bitmagnet comes through clearly in how AIOStreams (a Stremio addon aggregator) describes its built-in support: “Scrape your self-hosted Bitmagnet instance — a BitTorrent indexer and DHT crawler” [5]. That framing captures why it appeals to the Servarr crowd: it’s a locally-controlled source that integrates with Prowlarr through a Torznab-compatible endpoint, feeding Sonarr and Radarr the same way any external indexer would, except the index lives on your server and never goes offline because someone sent a DMCA notice [website].
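Torznab itself is just RSS-style XML over HTTP with a few query parameters, which is why this integration is straightforward. A hedged stdlib-only sketch of the protocol shape; the `/torznab` path and the `SAMPLE` response below are illustrative assumptions, not bitmagnet's exact endpoint or output:

```python
# Sketch of a Torznab-style search request and response parse.
# The /torznab path and SAMPLE response are illustrative assumptions;
# check your own instance for the exact endpoint and output.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def torznab_search_url(base, api_key, query):
    """Compose a Torznab search URL (t=search is the generic search mode)."""
    params = urlencode({"t": "search", "apikey": api_key, "q": query})
    return base.rstrip("/") + "/torznab?" + params

# Torznab responses are RSS; each <item> is one torrent result.
SAMPLE = """<rss><channel>
  <item>
    <title>Example.Torrent.1080p.BluRay</title>
    <enclosure url="magnet:?xt=urn:btih:aaaa" type="application/x-bittorrent"/>
  </item>
</channel></rss>"""

def parse_titles(xml_text):
    root = ET.fromstring(xml_text)
    return [item.findtext("title") for item in root.iter("item")]

print(parse_titles(SAMPLE))  # ['Example.Torrent.1080p.BluRay']
```

Because the wire format is this simple, Prowlarr treats a bitmagnet instance like any other Torznab indexer.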

The other angle is content discovery without site dependencies. Public torrent indexes get blocked, go dark, require registration, or serve aggressive ad networks. bitmagnet sidesteps all of that because it isn’t scraping sites — it’s listening to the DHT network directly. If you want a searchable index of what’s out there and you don’t want to depend on any third party to keep it alive, this is the only actively maintained self-hosted option that actually builds that index itself.


Features

What’s implemented now:

  • DHT crawler and protocol implementation — the core feature. Crawls the DHT network autonomously and continuously [website].
  • Generic BitTorrent indexer — not limited to DHT; torrents can be imported from external sources via the /import endpoint. The RARBG backup import is called out explicitly as a supported use case [website].
  • Content classifier — identifies movies, TV shows, and other content types. Enriches results with language, resolution, source format, and TMDB metadata [website].
  • Torrent search engine — search across everything the local index has discovered [website].
  • GraphQL API — single search query endpoint with an embedded GraphQL playground at /graphql [website].
  • Web UI — Angular-based, responsive, multilingual [website].
  • Torznab endpoint — integrates with the Servarr stack (Sonarr, Radarr, Prowlarr) as an indexer source [website][5].
  • Dashboard — monitors queue throughput and worker health [1].
  • Import facility — ingest bulk torrent data from external sources, including the RARBG backup [website].
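To give a feel for the GraphQL surface, here is a hedged sketch of assembling a search request payload in Python. The query text and field names are hypothetical illustrations; the real schema is browsable in the embedded /graphql playground:

```python
# Assemble a GraphQL search request body for a bitmagnet-style endpoint.
# The query text and field names below are hypothetical illustrations;
# inspect the /graphql playground for the actual schema.
import json

def graphql_payload(search_term, limit=10):
    query = """
    query Search($q: String!, $limit: Int!) {
      torrents(query: $q, limit: $limit) { name size seeders }
    }"""
    return json.dumps({
        "query": query,
        "variables": {"q": search_term, "limit": limit},
    })

body = graphql_payload("ubuntu")
# POST `body` to http://<host>:3333/graphql with
# Content-Type: application/json to run the search.
```

This single-endpoint shape is typical of GraphQL: one URL, with the query document selecting exactly the fields you want back.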

High-priority features not yet built:

  • Authentication, API keys, access levels — explicitly listed as missing [website][1].
  • Saved searches / custom feeds [website].
  • Bi-directional Prowlarr integration — currently bitmagnet can be added as an indexer in Prowlarr; the reverse direction (bitmagnet pulling from all Prowlarr sources) isn’t done yet [website].

Pipe-dream features on the roadmap:

  • In-place seeding of existing local files [website].
  • Federation between instances [website].
  • Something resembling a decentralized private tracker [website].
  • BitTorrent v2 protocol support [website].

Pricing: SaaS vs self-hosted math

bitmagnet has no SaaS version and no commercial offering — it’s MIT software you run yourself. The relevant cost comparison isn’t against a bitmagnet cloud plan; it’s against the ecosystem alternatives.

bitmagnet self-hosted:

  • Software: $0 (MIT license)
  • VPS to run it on: $5–10/month (Hetzner, Contabo, or DigitalOcean)
  • Disk: plan for ~80GB per 10 million torrents indexed [1]. After several months of crawling you’ll want a machine with 100–200GB available.
  • RAM: ~300MB for bitmagnet itself, at least 1GB for PostgreSQL [1]. A 2–4GB VPS is the realistic minimum.
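Those disk numbers are easy to project forward. A quick back-of-envelope helper based on the FAQ's ratio of roughly 80GB per 10 million torrents:

```python
# Project index disk usage from the FAQ's rule of thumb of
# roughly 80GB per 10 million torrents (about 8KB per torrent).
def projected_disk_gb(torrents):
    return torrents * 80 / 10_000_000

print(projected_disk_gb(10_000_000))  # 80.0
print(projected_disk_gb(25_000_000))  # 200.0
```

The FAQ's point that there is no upper crawl limit means this line keeps climbing until you tune the crawler or add disk.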

What you’d pay for alternatives:

Public indexer sites don’t charge you directly, but they require trusting third parties, suffer periodic downtime, and often require a VPN anyway. Usenet indexers charge $10–20/year for API access. Jackett and Prowlarr are free but depend on those same external sites staying alive. There is no direct paid equivalent to what bitmagnet does (autonomous DHT crawling), which is partly why it exists.

The honest cost is your time and disk space, not money. If your VPS runs out of disk after a few months of crawling, you either expand storage (typically $2–5/month per 50GB on most providers) or tune the crawler to be more selective.


Deployment reality check

The FAQ page is the most useful deployment document [1]. Key requirements:

  • Linux or macOS. Windows users have reported issues and are advised to run the software on Linux instead [1].
  • ~300MB RAM for bitmagnet itself, at least 1GB for PostgreSQL [1].
  • ~80GB disk per 10 million torrents — “there is no upper limit to how many torrents might ultimately be crawled” [1].
  • Docker Compose is the standard install path.
  • TMDB API is blocked in some countries; you’ll need to either disable TMDB integration or configure a personal API key if you’re affected [1].
  • VPN is recommended: “bitmagnet may download metadata about illegal and copyrighted content. It is possible that rudimentary law enforcement and anti-piracy tracking tools would incorrectly flag this activity.” The FAQ notes no one has reported getting into trouble for using metadata crawlers like this, but recommends Mullvad or ProtonVPN for caution [1].
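Putting the requirements together, a minimal Docker Compose sketch looks something like the following. The image name, port, command, and environment variables are based on the project's published examples but should be treated as assumptions; verify against the official installation docs before use:

```yaml
# Minimal sketch of a bitmagnet + PostgreSQL stack.
# Image name, port, command, and env vars are assumptions drawn from
# the project's published examples; check the official install docs.
services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    ports:
      - "3333:3333"   # web UI, GraphQL, Torznab
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PASSWORD=postgres
      # - TMDB_API_KEY=...   # optional personal key, see FAQ
    command: ["worker", "run", "--all"]
    depends_on:
      - postgres
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bitmagnet
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
```

Routing the bitmagnet container through a VPN container (gluetun is a common choice) is the usual way to follow the FAQ's VPN recommendation without VPN-ing the whole host.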

First-run behavior: After starting, expect up to 10 minutes before torrents appear in the UI (cache TTL). New torrents run through a background queue before becoming searchable, so bulk imports take time [1].

The authentication gap is not a minor caveat. The web UI exposes destructive actions, and the API has no access controls yet [website][1]. This means bitmagnet should only be accessible on a private network or behind authentication enforced at the proxy level (Authelia, basic auth on nginx, Tailscale). Do not expose the bitmagnet port publicly without that layer.
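One way to add that missing layer is HTTP basic auth at the reverse proxy. A minimal nginx sketch; the hostname, upstream address, and htpasswd path are placeholders:

```nginx
# Minimal sketch: HTTP basic auth in front of bitmagnet.
# Hostname, upstream address, and htpasswd path are placeholders.
server {
    listen 443 ssl;
    server_name bitmagnet.example.internal;

    # Credentials file created with: htpasswd -c /etc/nginx/.htpasswd youruser
    auth_basic "bitmagnet";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:3333;
        proxy_set_header Host $host;
    }
}
```

Prowlarr and other Torznab consumers can authenticate through this by embedding the credentials in the indexer URL, so the proxy layer doesn't break the Servarr integration.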

Realistic setup time for a technical user: 30–60 minutes for a working Docker Compose instance. Add time for a reverse proxy with TLS and any VPN setup. For someone new to self-hosting, budget 3–5 hours including reading documentation.


Pros and Cons

Pros

  • Genuinely unique approach. No other actively maintained self-hosted tool builds its own index by crawling the DHT network. Jackett and Prowlarr are site scrapers; bitmagnet is a protocol participant [website].
  • MIT license. No commercial restrictions, no vendor lock-in, no “sustainable use” clause [website].
  • Zero external dependencies after setup. Once running, it discovers content autonomously. No external site needs to stay online [website][1].
  • Servarr stack integration. The Torznab endpoint means it slots directly into Prowlarr alongside traditional indexers, feeding Sonarr and Radarr [website][5].
  • Content classification with TMDB enrichment. Automatically identifies movies and TV shows, with resolution, language, and source format metadata [website].
  • Import facility. Ingest the RARBG backup or any bulk torrent source to seed the index quickly [website].
  • Actively in development. The roadmap is substantive — bi-directional Prowlarr integration and authentication are explicitly high-priority [website].

Cons

  • Alpha software. Schema changes before 1.0 are explicitly warned about. You may face breaking migrations [website].
  • No authentication. The single biggest operational risk. Every exposed endpoint can perform destructive actions [website][1].
  • Disk appetite. 80GB per 10 million torrents with no upper crawl limit — this grows without bound if you don’t tune it [1].
  • Windows support is unreliable. Documented issues; Linux/macOS only in practice [1].
  • TMDB dependency. Content classification degrades without TMDB access, and TMDB is blocked in some regions [1].
  • Small community. About 3,900 GitHub stars and limited third-party documentation. Troubleshooting means reading source code or asking on GitHub.
  • No REST API. The only query interface is GraphQL. That’s fine for integrations that support it, but limits ad-hoc tooling [website].
  • VPN recommended but not enforced. The project rightly flags the legal ambiguity of crawling torrent metadata; users who skip this step are taking a small but nonzero risk [1].

Who should use this / who shouldn’t

Use bitmagnet if:

  • You run a Servarr stack (Sonarr, Radarr, Prowlarr) and want a locally-controlled indexer that doesn’t depend on external sites.
  • You want a personal, searchable torrent metadata catalog and don’t mind alpha-stage software.
  • You’re comfortable managing Docker containers and a PostgreSQL database.
  • You have adequate disk space and want the index to grow autonomously over months.
  • You want to ingest the RARBG backup or similar bulk datasets into a searchable local index.

Skip it (for now) if:

  • You need production-stable software with an authentication model — this is missing and the project says so explicitly [website].
  • You’re a non-technical founder who wants plug-and-play. This requires Docker, PostgreSQL tuning, VPN setup, and ongoing disk management.
  • You’re running on Windows [1].
  • Disk space is constrained — the index will grow without a hard ceiling.

Skip it (use Prowlarr + Jackett instead) if:

  • You want a mature, stable indexer with an established community, existing guides, and a defined feature set.
  • You’re happy pulling from established public indexers and don’t need the DHT-native discovery angle.

Alternatives worth considering

  • Prowlarr — the standard indexer manager for the Servarr stack. Stable, mature, large community, works with hundreds of external indexers. Doesn’t build its own index — relies on external sites. Compatible with bitmagnet (bitmagnet can be added as a Prowlarr indexer source) [website].
  • Jackett — older indexer proxy, similar role to Prowlarr, still widely used. Same dependency on external sites.
  • magnetico — the closest comparable project: a self-hosted DHT crawler and torrent indexer. No longer actively developed; bitmagnet’s content classification and Servarr integration are more mature.
  • The Pirate Bay / 1337x / Nyaa.si — public indexers. No setup, no maintenance, but dependent on third-party availability and often blocked by ISPs. AIOStreams wraps several of these [5].
  • Zilean / DMM hashlist scrapers — similar DHT-adjacent tooling used in the Stremio/debrid stack [5]. Different use case (streaming focus) rather than building a personal searchable index.

For a Servarr user, the practical choice is whether to add bitmagnet as an additional source alongside Prowlarr, not as a replacement for it. The two are complementary: Prowlarr covers established indexers, bitmagnet adds DHT-discovered content that may not appear anywhere else.


Bottom line

bitmagnet is a technically interesting project solving a real problem: building a torrent metadata index that doesn’t depend on any external site staying alive. The DHT crawler is the only implementation of this approach in the self-hosted space that has Servarr integration, a web UI, and active development. For the right user — someone running a Servarr stack, comfortable with Docker and PostgreSQL, with adequate disk space and a VPN — it’s a compelling addition.

The honest caveat is that it’s alpha software with a missing authentication layer, which puts it firmly in the “not for production exposure” category until that changes. If you can enforce auth at the network layer (Tailscale, Authelia, private LAN) and you’re comfortable running software that may need a database migration before it’s stable, it’s worth running. If you need something stable and fully featured today, Prowlarr plus established indexers is the safer bet — with bitmagnet as a future addition once it ships authentication and reaches a stable schema.


Sources

  1. bitmagnet FAQ — bitmagnet.io. https://bitmagnet.io/faq.html
  2. (Source not used — article content was unrelated to bitmagnet.)
  3. awesome-selfhosted mirror — git.osmarks.net. https://git.osmarks.net/mirrors/awesome-selfhosted
  4. LinuxLounge LL275 — “Aufstieg in Fahrtrichtung OpenSource” (Nov 12, 2023) — rec.theradio.cc. https://rec.theradio.cc/item/ll275-aufstieg-in-fahrtrichtung-opensource/
  5. AIOStreams — ElfHosted Stremio Addons — stremio-addons.net. https://stremio-addons.net/addons/aiostreams
