
Dagu

Released under GPL-3.0, Dagu provides a powerful cron alternative for self-hosted infrastructure.

Self-hosted workflow orchestration, honestly reviewed. Built for developers who are tired of maintaining more infrastructure than actual workflows.

TL;DR

  • What it is: GPL-3.0 workflow orchestration engine — define pipelines in YAML, execute them with a single binary, no external database or message broker required [1][2].
  • Who it’s for: Small engineering teams and solo developers who want Airflow-style DAG orchestration without the Airflow operational overhead. Also developers building AI agent pipelines who need deterministic execution scaffolding [1][2].
  • Cost savings: A minimal self-hosted Airflow setup runs 6+ services and costs $200+/mo on managed infrastructure; Dagu runs on a $5–7/mo VPS with an under-128MB memory footprint, no PostgreSQL or Redis required [1][2].
  • Key strength: True zero-dependency single binary. dagu start-all is the entire deployment command. The homepage diagram of “6+ services to manage” versus one command is not a marketing exaggeration [1].
  • Key weakness: GPL-3.0 license (not MIT) means commercial embedding or SaaS redistribution requires legal review. Community is small — 3,185 GitHub stars versus Airflow’s 40K+ — and independent third-party reviews are sparse.

What is Dagu

Dagu is a self-hosted workflow engine that runs as a single compiled binary. You write workflows as YAML files — DAGs (directed acyclic graphs) in the workflow engine sense, not the machine learning sense — and Dagu schedules and executes them across your infrastructure [1]. The structural proposition is simple: everything a small team needs to orchestrate shell commands, Docker containers, SSH sessions, HTTP calls, and LLM-based AI steps, without standing up a PostgreSQL cluster, a Redis instance, and a Python environment first [1][2].
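
The shape of such a workflow, as a minimal sketch based on the README's examples (the script paths and schedule are placeholders, and exact field names can vary between versions):

```yaml
# hello.yaml - a minimal two-step DAG
schedule: "0 2 * * *"        # cron syntax: run daily at 02:00

steps:
  - name: fetch
    command: ./scripts/fetch_data.sh   # placeholder script
  - name: process
    command: python process.py         # runs only after fetch succeeds
    depends:
      - fetch
```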

The project’s own comparison is worth quoting directly, because it’s accurate:

Traditional Orchestrator           Dagu
Web Server                         
Scheduler                          dagu start-all
Worker(s)                          
PostgreSQL                         Single binary.
Redis / RabbitMQ                   Zero dependencies.
Python runtime                     Just run it.
6+ services to manage

Dagu stores workflow state in the filesystem rather than a database. That means your workflow definitions can be committed to Git natively, crash recovery doesn’t require database migration scripts, and your on-call rotation doesn’t need to know how to debug a PostgreSQL connection pool [1].

The project is licensed under GPL-3.0 with 3,185 GitHub stars [3]. It’s evolving beyond pure workflow orchestration: recent additions include a built-in LLM agent that writes and edits workflows from natural language, a “Workflow Operator” for Slack and Telegram that lets you ask your running workflow engine what failed and why, and human approval gates for AI agent actions [1][2]. The positioning has shifted from “small-team Airflow alternative” toward something more opinionated about AI-in-the-loop pipelines — with deterministic execution as the foundation rather than an afterthought.


Why people choose it over Airflow, Prefect, and Temporal

Based on the primary project sources, the decisive factor is operational simplicity combined with language agnosticism.

Versus Airflow. Airflow is the tool Dagu most explicitly targets. The comparison is fair: Airflow requires PostgreSQL as a metadata store, separate scheduler and webserver processes, a message broker (Redis or RabbitMQ) if you use the CeleryExecutor rather than the single-machine LocalExecutor, and Python everywhere. If you want to run a bash script, you write a Python DAG that shells out to the bash script. Dagu eliminates the indirection: the bash script stays a bash script, and you orchestrate it in YAML. The website puts minimum Airflow cost at $200+/mo on managed infrastructure [2]. The more honest cost for small teams is the engineering hours spent keeping the platform running.
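
As a sketch of what that looks like in practice, here is an existing backup script wrapped with scheduling and retries, left untouched (the path is a placeholder; the retryPolicy fields follow the README but may differ between versions):

```yaml
# nightly-backup.yaml
schedule: "30 1 * * *"       # 01:30 daily

steps:
  - name: backup
    command: /opt/scripts/backup.sh    # your existing script, unmodified
    retryPolicy:
      limit: 3               # retry up to 3 times on failure
      intervalSec: 60        # wait 60 seconds between attempts
```

The equivalent Airflow setup would need a Python DAG file with a BashOperator, plus the running metadata database and scheduler behind it.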

Versus Prefect. Prefect requires Python decorators on your code — @task, @flow. Dagu describes itself as “Zero Intrusion”: your existing scripts stay untouched, you orchestrate around them [2]. For teams with shell scripts, Python scripts, Go binaries, or anything else they’ve accumulated over the years, this matters. You don’t have to rewrite working code to add scheduling and monitoring.

Versus Temporal. Temporal is built for long-running durable business workflows with exactly-once semantics across distributed microservices. Dagu is not competing here — it doesn’t claim that durability model. Temporal is also operationally complex (requires its own cluster with Cassandra or PostgreSQL). For internal automation, cron replacement, and AI pipelines, they serve different purposes.

For AI agent pipelines specifically. The type: agent step is genuinely novel. It invokes an LLM (OpenAI, Anthropic, Gemini) mid-workflow and makes the output available to subsequent steps — letting you chain deterministic shell steps with AI reasoning steps in the same YAML file [1]. The Slack/Telegram Workflow Operator is the other interesting addition: rather than opening a dashboard to diagnose a failed run, you ask the operator in plain English and it answers in context, in your existing chat thread [1][2]. For teams building agentic automation that needs guardrails, the built-in human approval gates are a practical feature most workflow tools don’t include at all [2].
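
A hedged sketch of mixing deterministic and AI steps in one file: the sources confirm the type: agent step exists [1][2], but its exact configuration schema is not documented in them, so the prompt field and the downstream wiring below are illustrative assumptions, not the confirmed API.

```yaml
# triage.yaml - illustrative only; agent step fields are assumptions
steps:
  - name: collect-logs
    command: journalctl -u myapp --since "1 hour ago" > /tmp/app.log
  - name: summarize
    type: agent                        # LLM invocation mid-workflow [1]
    prompt: "Summarize the errors in /tmp/app.log"   # assumed field name
    depends:
      - collect-logs
  - name: notify
    command: ./notify.sh               # placeholder downstream consumer
    depends:
      - summarize
```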

One caveat: the website includes a comparison table against “OpenClaw” and Airflow [2]. Without independent head-to-head testing, take the vendor comparison as orientation rather than benchmark data.


Features

Based on the README and official website:

Core workflow engine:

  • YAML DAG definitions — no Python runtime required [1]
  • Cron scheduling with timezone support, start/stop/retry [2]
  • Parallel execution with dependency management [2]
  • Conditional steps based on step outputs [2]
  • Automatic step-level retries with exponential backoff [2]
  • Durable execution — DAG-level retry with backoff, managed by the scheduler [2]
  • Nested workflows: compose complex pipelines from reusable sub-DAGs [1][2]
  • Output variable passing between steps [1]
  • Queue management for concurrent run control [2]
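
The output-passing and conditional-step features from the list above combine like this (a sketch following the README's syntax; the scripts are placeholders and field names may vary between versions):

```yaml
# disk-cleanup.yaml
steps:
  - name: check
    command: ./check_disk.sh           # placeholder: prints e.g. FULL or OK
    output: DISK_STATUS                # capture stdout into a variable
  - name: cleanup
    command: ./cleanup.sh
    depends:
      - check
    preconditions:
      - condition: "${DISK_STATUS}"    # run only if the check reported FULL
        expected: "FULL"
```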

Executor types (19+):

  • Shell commands, any language, pipes and redirects [1]
  • Docker containers with full control [2]
  • SSH remote execution [1][2]
  • HTTP calls [1]
  • S3, PostgreSQL, SQLite [3]
  • GitHub Actions — run any of 20,000+ GitHub Actions locally [2]
  • type: agent for LLM invocations with tools and memory [1][2]
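
As a sketch of the executor mechanism (following the README's docker executor syntax; config keys may vary between versions), a step that runs inside a container instead of the host shell:

```yaml
steps:
  - name: in-container
    executor:
      type: docker
      config:
        image: alpine:3
        autoRemove: true               # remove the container afterwards
    command: echo "hello from the container"
```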

AI features:

  • Built-in LLM agent: creates, edits, and debugs workflows from natural language in the Web UI [1]
  • type: agent step type with multi-provider support (OpenAI, Anthropic, Gemini) [1]
  • Workflow Operator for Slack and Telegram: persistent AI that monitors runs, debugs failures, handles approval requests in-chat [1][2]
  • Human approval gates for AI agent actions [2]
  • Token-efficient agentic workflow execution [2]

Infrastructure and operations:

  • Single binary, file-based storage, ~128MB memory footprint [1]
  • Air-gapped ready — runs fully offline, no external service calls [1][2]
  • Live demo available at demo-instance.dagu.sh (credentials: demouser/demouser) [1]
  • Web UI: DAG visualization (kanban-style cockpit), execution history, logs, built-in editor [1]
  • REST API [3]
  • Webhook triggers from external systems [3][2]
  • Git sync for workflow version control [2]
  • Distributed execution across multiple workers [1][2]
  • Built-in document management for runbooks [2]

Security and access:

  • RBAC with role-based API keys [3][2]
  • Authentication: Basic Auth, OIDC (SSO), built-in JWT [2]
  • Built-in user management [2]

Pricing: SaaS vs self-hosted math

Dagu has no SaaS or managed cloud tier — it is self-hosted-only [1][2]. There is no “pay us and we’ll run it for you” option. Every deployment is your infrastructure.

Software cost: GPL-3.0, free for self-hosted internal use [3]. Commercial redistribution has implications (see Cons).

Infrastructure cost:

Because Dagu requires no external database or message broker, the infrastructure footprint is minimal:

Component         Dagu                    Airflow
Application       Single binary           Web server + scheduler + workers
Database          None (file-based)       PostgreSQL required
Message broker    None                    Redis or RabbitMQ
Python runtime    None                    Required everywhere
Minimum VPS       $5–7/mo, 2GB RAM        $200+/mo on managed services [2]

Concrete example:

A team running 20 workflows that fire several times daily on Airflow, managed on AWS (Managed Workflows for Apache Airflow starts at $0.49/hr for the smallest environment = ~$355/mo before compute), spends more on the platform than on the actual work it orchestrates. On Dagu, the same workflows run on a $6/mo Hetzner VPS. The difference is roughly $4,200/year — assuming you don’t factor in the engineering time Airflow requires for upgrades and incident response, which is where the real cost compounds.

For Prefect: Prefect Cloud’s free tier limits you to 3 deployments and basic features; the Pro tier starts at $19.90/mo and scales with usage. Self-hosted Prefect is free but requires their infrastructure components. Dagu has no usage limits on self-hosted.


Deployment reality check

The README install paths are as friction-free as claimed.

Fastest path (guided installer):

curl -fsSL https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.sh | bash

The installer adds Dagu to PATH, sets it up as a background service, creates the first admin account, and optionally installs the Dagu AI skill for Claude Desktop, Cursor, or Windsurf [1]. Windows users get a PowerShell equivalent.

Docker (one command to running state):

docker run --rm -v ~/.dagu:/var/lib/dagu -p 8080:8080 ghcr.io/dagu-org/dagu:latest dagu start-all

State persists to ~/.dagu. No configuration file needed to start [1].

Kubernetes (Helm):

helm repo add dagu https://dagu-org.github.io/dagu
helm install dagu dagu/dagu --set persistence.storageClass=<your-rwx-storage-class>

Requires a StorageClass with ReadWriteMany support [1]. This is a real dependency — not all cloud providers default to RWX-capable storage classes.

What you actually need:

  • Any Linux, Mac, or Windows machine with 1–2GB RAM (128MB for Dagu itself, remainder for your workflow steps)
  • No database
  • No Redis
  • Optional: domain name and reverse proxy (Caddy or nginx) for external HTTPS access

What can go sideways:

  • Kubernetes deployment requires RWX storage class — check your cluster’s capabilities before assuming the default works
  • Distributed mode (workers across multiple machines) adds coordination complexity that the single-binary narrative simplifies; the README acknowledges it as a mode, not a default
  • AI features require external LLM provider API keys — there’s no documented built-in Ollama/local LLM path [1][3]
  • GPL-3.0: if you’re embedding Dagu in a product that ships to customers, this needs legal review before you commit to the architecture

Realistic time estimates:

  • Developer with Linux experience: 10–20 minutes via curl installer or Docker
  • Kubernetes with custom storage configuration: 1–2 hours
  • Non-technical user following a guide: Dagu is genuinely not designed for this audience — budget an afternoon minimum and consider having a developer do the initial deployment

Pros and Cons

Pros

  • True zero-dependency single binary. No PostgreSQL, no Redis, no Python runtime. dagu start-all is the complete deployment command [1]. This eliminates an entire class of operational failure modes.
  • Language-agnostic by design. Your existing shell scripts, Python scripts, Go binaries — none of them need to change. Dagu orchestrates around them [2]. Most workflow tools require you to rewrite code in their framework.
  • Air-gapped ready. Fully offline operation with no external service calls [1][2]. Meaningful for compliance environments, secure networks, and edge deployments.
  • Broad executor coverage. Shell, Docker, SSH, HTTP, S3, GitHub Actions, LLM agents, PostgreSQL, SQLite — 19+ types without plugin installation [1][2].
  • AI-native architecture. type: agent steps, natural-language workflow creation, Slack/Telegram Workflow Operator — these are integrated into the execution model, not bolted on [1][2].
  • Human approval gates. Built-in human-in-the-loop for AI actions [2]. Most orchestrators leave this to users to implement themselves.
  • OIDC/SSO available in the open version. Unlike tools that gate SSO behind commercial tiers, Dagu includes OIDC authentication in the community edition [3][2].
  • ~128MB memory footprint. Meaningful on small VPS instances where RAM is metered [1].
  • Git sync built in. Version-control your workflow definitions natively [2].

Cons

  • GPL-3.0, not MIT. For internal self-hosted use, this is irrelevant. For anyone building a commercial product around Dagu — embedding it, redistributing it, offering it as a service — the GPL creates legal complexity that MIT wouldn’t [3]. This is the most consequential difference from tools like Activepieces (MIT) or n8n.
  • Small community. 3,185 GitHub stars at time of writing [3]. Airflow has 40K+; n8n has 100K+. Fewer tutorials, fewer Stack Overflow answers, fewer people to ask when you hit an edge case. The Discord exists but is not large.
  • No managed cloud option. No vendor-hosted tier. If self-hosting is a blocker, Dagu is not currently an option [1][2].
  • File-based storage has scale limits. For small teams this is fine. At scale, file-based state storage raises questions about backup, disaster recovery, and concurrent access that a managed database handles more cleanly. The distributed mode addresses some of this but adds setup complexity [1].
  • No independent third-party reviews at time of writing. No Trustpilot-equivalent, no major tech publication reviews. GitHub Issues and Discussions are the best available signal for real-world user experiences.
  • Not for non-technical users. YAML workflows, binary installation, Helm charts — this is a developer tool. The AI workflow-generation features help, but the core requires technical competence [1][2].
  • AI steps require external API keys. Sending data to OpenAI/Anthropic/Google is a dependency. No documented path for fully local LLM integration [1][3].

Who should use this / who shouldn’t

Use Dagu if:

  • You’re a developer or small engineering team running cron jobs and shell scripts, and you need retries, parallel execution, dependency ordering, and monitoring — without standing up a database to get them.
  • Your environment is air-gapped or has strict constraints on external service dependencies.
  • You have existing scripts in multiple languages that work fine and you don’t want to rewrite them in Python for an orchestration framework.
  • You want to add LLM-powered steps to deterministic pipelines — not pure prompt chains, but AI reasoning embedded in workflows with guardrails and human approval gates.
  • SSO and RBAC matter to you and you don’t want to pay an enterprise tier to unlock them.

Skip it (look at Airflow or Prefect instead) if:

  • Your team writes Python and wants typed task definitions with decorator syntax and a rich operator ecosystem for AWS/GCP/Azure services.
  • You need exactly-once execution semantics for transactional or financial workflows.
  • You want a mature platform with 10+ years of production track record at scale.

Skip it (look at n8n or Activepieces instead) if:

  • You’re a non-technical founder who wants drag-and-drop workflow automation without touching YAML or a terminal.
  • You need 300+ pre-built SaaS integrations (Gmail, HubSpot, Notion, Stripe) without writing HTTP request steps manually.

Skip it (look at Temporal instead) if:

  • You’re building distributed business processes across microservices where each step needs durable, exactly-once execution guarantees that survive server restarts and network partitions.

Be cautious if:

  • You’re building a product you plan to ship to customers and want to embed Dagu internally — the GPL-3.0 license means getting legal sign-off before you commit to this architecture.

Alternatives worth considering

  • Apache Airflow — the category benchmark Dagu explicitly targets. Python-native, massive operator ecosystem, 40K+ stars, battle-tested at scale. Operational overhead is real and ongoing [2].
  • Prefect — more ergonomic than Airflow for Python teams. Managed cloud option available. Requires @task/@flow decorators on your code.
  • Temporal — for durable, long-running business workflows with exactly-once guarantees across distributed services. Significant infrastructure and learning curve. Different use case than Dagu.
  • Windmill — script-first, code-native, web UI. Good fit for engineering teams that want version-controlled scripts with scheduling and a cleaner UI than Airflow. Actively developed.
  • n8n — visual workflow builder, 400+ integrations, “Fair-code” Sustainable Use license. Closer to Zapier replacement than orchestrator.
  • Activepieces — MIT-licensed, drag-and-drop automation, aimed at non-technical teams. Not a workflow orchestrator in the Airflow/Dagu sense.
  • Woodpecker CI / Forgejo Actions — if your workflow use case is primarily CI/CD pipelines rather than general-purpose orchestration.

For the specific Dagu use case — developer-operated, YAML-based orchestration of arbitrary scripts without database overhead — the closest real alternative is Windmill. Choose Dagu if zero external dependencies and air-gap support matter; choose Windmill if you want a more polished web UI and a larger community.


Bottom line

Dagu solves a real problem: teams that need proper workflow orchestration — scheduling, retries, dependency graphs, monitoring — but don’t have the appetite to run and maintain Airflow. The single-binary, no-database model isn’t a simplification; it’s the entire architectural bet, and for the target use case it pays off. Getting a workflow scheduler with a web UI, OIDC authentication, RBAC, and distributed execution working in under 20 minutes is not the default expectation in this category, and Dagu delivers it.

The AI additions are further along than most competitors: in-workflow LLM steps, natural-language workflow creation, and a Slack/Telegram operator that actually understands your workflow context are features that most orchestrators haven’t shipped yet. The human approval gate is the kind of practical guardrail teams building AI automation actually need.

The constraints are equally real: GPL-3.0 matters if commercial embedding is on the roadmap, the community is small enough that you’ll hit edge cases the documentation doesn’t cover, and non-technical users should look at n8n or Activepieces instead. But for a developer or small engineering team that wants Airflow’s power without Airflow’s operational weight — and is willing to accept a newer, smaller project in exchange for a dramatically simpler deployment model — Dagu is worth a serious evaluation.


Sources

Primary sources (all citations in this review draw from these):

  1. GitHub Repository and README — Dagu (dagu-org/dagu), GPL-3.0, 3,185 stars. https://github.com/dagu-org/dagu
  2. Official website — dagu.sh, including homepage, feature pages, and comparison table. https://dagu.sh
  3. Merged project profile — Structured metadata including license, star count, feature flags, and category. Compiled from GitHub and website data.
  4. Dagu documentation. https://docs.dagu.sh

No substantial independent third-party reviews of Dagu were available at time of writing. Readers should consult GitHub Issues and the project Discord for real-world edge cases beyond the primary sources.
