Cronicle
Cronicle is a simple, distributed, self-hosted task scheduler and runner with a web-based UI.
A multi-server job scheduler with a clean UI, honestly reviewed. What it does well, what it doesn’t, and the thing you need to know before building on it.
TL;DR
- What it is: A self-hosted, multi-server task scheduler with a web UI — think modern Cron replacement with a dashboard, real-time logs, and a plugin system [README].
- Who it’s for: Solo developers and small teams who need a visual interface for scheduled jobs across one or more servers, without the complexity of Airflow or Argo.
- License: MIT (despite the merged profile showing “NOASSERTION” — the website and source confirm MIT) [website].
- Key strength: Zero database requirement, clean web UI, real-time log viewing, plugin support in any language, and no per-task pricing of any kind [README][1].
- Key weakness: The original author has announced xyOps™ as the spiritual successor to Cronicle. Cronicle’s future is bug fixes and security patches — not new features [README].
- Cost: Self-hosted on a VPS runs $5–20/mo depending on your server. Elestio managed hosting starts at $14/mo [3].
- GitHub stars: 5,557.
What is Cronicle
Cronicle is a Node.js-based task scheduler and runner with a web-based front end. The simplest description from its own documentation: “a fancy Cron replacement written in Node.js” [README]. That’s accurate and more useful than most project marketing.
You set up a primary server that runs the scheduler and a web UI. Worker servers connect to it and receive jobs. You define events — what to run, when to run it, on which server or group of servers — using a visual date/time picker rather than raw cron syntax. When a job runs, Cronicle shows you a real-time log stream, CPU/memory usage graphs, and estimated time remaining. When it finishes, you can browse historical logs and performance metrics.
What separates it from a raw cron setup:
- Multi-server support with automatic failover. If the primary server dies, a designated backup takes over. Worker servers auto-discover each other via UDP broadcast on the LAN [README][website].
- No database required. Everything — jobs, logs, config — lives in JSON files on disk by default. You can plug in S3 or Redis as backends, but you don’t have to [2][website].
- Plugins in any language. A plugin is any executable script. Cronicle communicates with it via JSON over stdin/stdout. Python, Bash, Ruby, Go — anything that can read stdin and write JSON works [README][2].
- Event chaining. One job can trigger another when it finishes, passing custom data between them. This is a light form of workflow orchestration [website].
- REST API. Schedule and trigger jobs programmatically using API keys [README][website].
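The plugin contract above is simple enough to sketch directly. Below is a minimal Python plugin, assuming the protocol as described in the README: Cronicle writes the job as a JSON object on stdin, and the plugin emits JSON lines on stdout for progress and final status. Field names like params, progress, complete, and code should be verified against the plugin docs for your installed version.

```python
#!/usr/bin/env python3
"""Minimal Cronicle plugin sketch: job JSON in on stdin, status JSON out on stdout."""
import json
import sys


def run_job(job: dict) -> dict:
    """Process the job; here the 'work' is just iterating over a params list."""
    items = job.get("params", {}).get("items", [])
    for i, _item in enumerate(items, start=1):
        # Stream fractional progress back to the Cronicle UI as a JSON line.
        print(json.dumps({"progress": i / len(items)}), flush=True)
    # Final status line: complete=1 with code=0 signals success in the plugin protocol.
    return {"complete": 1, "code": 0, "description": f"Processed {len(items)} items"}


if __name__ == "__main__" and not sys.stdin.isatty():
    line = sys.stdin.readline()
    if line.strip():
        print(json.dumps(run_job(json.loads(line))), flush=True)
```

Because the contract is just stdin/stdout, the same shape works in Bash, Ruby, or Go with no SDK involved.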
The project is built by Joseph Huckaby (PixlCore), the same person behind projects like Pixl Server, and it’s been around since 2015. As of this review, it has 5,557 GitHub stars.
There’s one thing you need to know upfront: the README now opens with an announcement for xyOps™, described as “the spiritual successor to Cronicle.” Beta v0.9 is available for testing at a separate GitHub repo. Cronicle will still receive bug fixes and security patches but the creator’s attention has moved on [README]. This isn’t a dead project, but you should factor the maintenance trajectory into any long-term infrastructure decision.
Why people choose it
The reviews converge on the same core reasons. None of them are surprising given what the tool does, but they’re worth stating plainly.
The UI is genuinely good for this category. The mfyz.com review [1] — written by someone with two decades of server management experience — calls the web interface “a breath of fresh air” compared to the complexity of other orchestration tools. The visual date/time picker replaces cron syntax with a multi-select widget for years, months, days, weekdays, hours, and minutes. For non-developers who need to define recurring jobs, that’s a significant usability win.
The zero-database default makes setup fast. The Medium technical overview [2] and the Klutch.sh deployment guide [4] both highlight that JSON-file storage is a real differentiator. You don’t need to provision PostgreSQL or MongoDB before Cronicle can start. Spin up a Node.js server, run the install script, and you have a working scheduler in minutes. The trade-off is that flat-file storage doesn’t scale to thousands of jobs with heavy log retention, but for the target use case it’s fine.
Real-time log viewing solves a specific pain point. The reviewer at mfyz.com [1] mentions log access specifically: “Real-time Monitoring: Keep track of your jobs’ status, progress, performance, and most importantly logs. Cronicle provides all.” For anyone who has had to SSH into a server mid-job to tail a log file, the built-in live log viewer is immediately valuable.
On-demand triggering via API. The mfyz.com reviewer [1] uses it explicitly as a “job runner, not just a scheduler” — meaning they trigger jobs via API call rather than on a schedule. Cronicle handles both, and the REST API with API key authentication is production-grade enough for this use case [README][website].
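To illustrate the job-runner pattern, triggering an event over the REST API looks roughly like the sketch below. The /api/app/run_event/v1 path and the id/api_key parameters follow the Cronicle API docs; the base URL, key, and event ID are placeholders you would substitute.

```python
import json
import urllib.request


def build_run_event_request(base_url: str, api_key: str, event_id: str) -> urllib.request.Request:
    """Build the POST request that triggers a Cronicle event on demand.

    The endpoint path and parameter names come from the Cronicle API docs;
    double-check them against your installed version.
    """
    body = json.dumps({"id": event_id, "api_key": api_key}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/app/run_event/v1",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def run_event(base_url: str, api_key: str, event_id: str) -> dict:
    """Fire the event and return the decoded JSON response."""
    req = build_run_event_request(base_url, api_key, event_id)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

Calling run_event("http://localhost:3012", "your-api-key", "your-event-id") returns the server's JSON response, which reports whether the job launched.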
The self-hosted story is clean. No SaaS subscriptions, no per-execution pricing, no vendor reading your job output. For teams running sensitive data pipelines or internal automation, ownership of the scheduler infrastructure matters [1].
What no reviewer reports — and this is also informative — is picking it for complex multi-step AI workflows, Kubernetes-native scheduling, or anything resembling data pipeline orchestration at scale. Those use cases belong elsewhere.
Features
Based on the README, website, and third-party descriptions:
Scheduling engine:
- Visual multi-selector for years, months, days, weekdays, hours, minutes — similar to cron but graphical [website][README]
- One-time, recurring, and on-demand job execution [1]
- Catch-up mode: run missed events after downtime [website]
- Event chaining: trigger the next job when the previous finishes [website]
- Concurrency controls and queue limits per event [README]
- Timeout settings with automatic job termination [1]
Multi-server:
- Primary + backup + worker server topology [README]
- Automated failover if primary goes down [website]
- UDP-based auto-discovery of nearby servers on the LAN [README][2]
- WebSocket connections between primary and workers for real-time updates [2]
- Target specific servers or pick randomly across server groups [website]
Monitoring:
- Live log viewer — streams stdout from the running job in real time [website]
- Real-time graphical progress bars with estimated time remaining [website]
- CPU and memory tracking per job (including child processes) [website]
- Historical performance graphs [website]
- Job history with success/failure tracking and downloadable log archives [website]
Plugin system:
- Any executable script in any language qualifies as a plugin [README][2]
- Communication via simple JSON over stdin/stdout [2]
- Custom UI controls (text fields, checkboxes, dropdowns) can be defined per plugin [website]
- Plugins receive parameters, emit progress events, and return performance metrics [website]
Integration:
- REST API for scheduling and triggering events externally [README]
- API keys for authentication [README]
- Web hooks (HTTP POST) at job start and end with full JSON payload [website]
- Email notifications per job [website]
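On the receiving end, a web hook consumer only needs to parse the posted JSON. A hedged sketch: the payload field names used here (action, event_title, code, description) are assumptions based on the job record, so inspect a real hook body from your install before relying on them.

```python
def summarize_webhook(payload: dict) -> str:
    """Turn a Cronicle web hook POST body into a one-line summary.

    Field names (action, event_title, code, description) are assumptions;
    verify against an actual payload from your Cronicle instance.
    """
    action = payload.get("action", "unknown")
    title = payload.get("event_title", "unknown event")
    if action == "job_complete":
        # A zero exit code is assumed to mean success, matching shell convention.
        status = "succeeded" if payload.get("code", 1) == 0 else "failed"
        return f"{title} {status}: {payload.get('description', '')}".strip()
    return f"{title}: {action}"
```

A function like this slots into any small HTTP handler (Flask, http.server, a serverless function) that receives the hook.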
Storage backends:
- Default: local filesystem JSON files [README][2]
- Optional: Amazon S3, Couchbase, Redis [2]
Not included:
- No built-in secrets management
- No native container/Kubernetes orchestration (it runs jobs, not containers)
- No visual DAG builder — chaining is linear, not branching
- No SSO or LDAP (basic username/password authentication only)
Pricing: SaaS vs self-hosted math
Cronicle itself has no commercial pricing. It’s MIT-licensed software — you run it on your own server [website].
Self-hosted costs:
- Software: $0
- VPS to run it on: $5–20/mo depending on size and provider
- Your time to set it up: roughly 30–60 minutes if you follow a guide [4]
Managed hosting via Elestio:
- Starts at $14/mo [3]
- Includes automated backups, SSL, OS updates, monitoring, and support
- Runs on dedicated VMs — you’re not on shared infrastructure [3]
- Good option if you want Cronicle’s functionality without touching a Linux server
Managed hosting via Klutch.sh:
- Docker-based deployment guide available [4]
- Pricing not stated in the available data
Comparison to alternatives: Rundeck Community Edition is free and self-hosted like Cronicle, but more complex to configure. Airflow is free but requires Python expertise and a proper database backend. Cloud-based job scheduling services (AWS EventBridge, GCP Cloud Scheduler) charge per invocation — at low volumes this is cheap, but at high volumes it’s not. Cronicle self-hosted charges nothing per execution regardless of volume.
The practical math: if you’re paying $10–30/mo on a cloud scheduling service and running fewer than 50 jobs, self-hosted Cronicle on a $6 Hetzner VPS is cheaper on day one. The break-even happens before you finish the setup.
Deployment reality check
The Klutch.sh guide [4] and the Elestio page [3] both suggest deployment is straightforward, and the website claims “get up and running in 5 minutes” — which is aspirational but not wildly off for a technical user.
The documented install path:
curl -s https://raw.githubusercontent.com/jhuckaby/Cronicle/master/bin/install.js | node
This runs an auto-install script that pulls dependencies via npm. The website describes this as a single command to get started [website].
For Docker (recommended for production):
The Klutch.sh guide [4] provides a working Dockerfile that clones the repo, runs npm install, builds the dist, and starts the server. Configuration goes in a config.json file. The key environment variable is CRONICLE_foreground=1 to keep the process alive inside the container.
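A minimal config.json along those lines might look like the following. The key names (base_app_url, WebServer.http_port, job_data_expire_days) follow Cronicle's sample configuration and the Klutch.sh guide; check the sample config shipped with your version before deploying.

```json
{
  "base_app_url": "http://localhost:3012",
  "job_data_expire_days": 90,
  "WebServer": {
    "http_port": 3012
  }
}
```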
What you actually need:
- A Linux VPS or server with Node.js installed
- Port 3012 open (the default web UI port)
- Optionally a reverse proxy (nginx or Caddy) for HTTPS
- Persistent volume if running via Docker (otherwise your job history disappears on container restart)
What can go sideways:
- The flat-file storage defaults are fine for small setups, but if you're running hundreds of jobs with verbose logs, disk usage grows without active cleanup. The job_data_expire_days config option exists for this reason [4].
- The multi-server setup requires UDP broadcast to work for auto-discovery, which may not function across different subnets or cloud VPCs — in those environments you'll need manual server registration [README].
- The web UI is not designed for mobile. It’s a desktop-first dashboard.
- With the project trending toward maintenance mode [README], if you hit a bug that isn’t a security issue, the timeline for a fix is uncertain.
Realistic time estimate for a developer: 30–60 minutes to a working single-server instance. Multi-server setup with failover: 2–4 hours including testing. For someone without Linux server experience following the Elestio or Klutch.sh guides: budget a few hours or use managed hosting.
Pros and cons
Pros
- Genuinely easy setup. Single-command install for the basic case, Docker support for production. No database provisioning required [website][4].
- Real-time log viewer. Watching stdout stream live from the web UI solves a real problem for anyone who has had to SSH into servers mid-job [website][1].
- Any-language plugins. Bash scripts, Python, Go — if it can read stdin and write JSON, it’s a plugin. No SDK lock-in [README].
- Visual scheduling UI. The date/time picker is more approachable than raw cron syntax, and the overall UI is clean and functional [1][website].
- On-demand triggering via API. Works as a job runner, not just a scheduler. REST API with API keys enables programmatic job management [1][README].
- Multi-server with automatic failover. Primary/backup/worker topology handles server failures without manual intervention [README].
- Zero operational cost. MIT licensed, no per-execution fees, no phone-home telemetry [website].
- Historical performance graphs. Track CPU/memory trends over time — useful for detecting gradual resource growth before it becomes an incident [website].
Cons
- The project is heading into maintenance mode. The original author announced xyOps™ as the spiritual successor in the main README. New features will go there, not to Cronicle [README]. This is a real risk for long-term infrastructure.
- No branching workflows. Event chaining is linear — A triggers B triggers C. No conditional branching, no fan-out, no DAG-style orchestration [README][website].
- Flat-file storage doesn’t scale. For large job volumes with heavy log retention, the default JSON file storage gets unwieldy. Alternative backends (S3, Redis) exist but require additional setup [2].
- No SSO, RBAC, or LDAP. User management is basic username/password. Fine for a solo operator, problematic for a team with compliance requirements [README].
- Single-region auto-discovery. UDP broadcast for server discovery works on a LAN but breaks across subnets or cloud VPCs — multi-region setups need manual configuration [README].
- No container-native scheduling. Cronicle runs shell commands and scripts, not containers. If your jobs are Docker images or Kubernetes workloads, you’ll need Argo Workflows or something similar.
- UI is not mobile-friendly. Desktop-only interface [website].
Who should use this / who shouldn’t
Use Cronicle if:
- You need a clean visual UI for scheduled or on-demand jobs on one or a few Linux servers, and you don’t want to write cron configuration by hand.
- Your jobs are shell scripts, Python scripts, or other executables — and you want real-time log viewing without SSH.
- You want zero ongoing cost for the scheduler itself and you’re comfortable running a Linux server.
- You need basic multi-server distribution with automatic failover without the complexity of a full orchestration platform.
- You’re a solo developer or small team where SSO and RBAC aren’t requirements.
Think carefully before choosing Cronicle if:
- You’re making a 3–5 year infrastructure bet. The maintenance-mode trajectory means you may need to migrate to xyOps™ or another tool eventually [README].
- Your team is larger than 5–10 people and needs audit logging, RBAC, or SSO.
Skip it and pick something else if:
- Your jobs are data pipelines with dependencies, branching logic, and retry policies — use Apache Airflow or Prefect.
- You’re running jobs as containers on Kubernetes — use Argo Workflows or the native CronJob resource.
- You need enterprise-grade workload automation across hundreds of servers — Rundeck Enterprise or commercial WLA tools apply here [5].
- You’re building for a team that won’t touch a Linux server — Elestio managed hosting reduces this barrier, but Cronicle still requires infrastructure ownership.
Alternatives worth considering
From the AIMultiple open-source job scheduler comparison [5] and the broader category:
- xyOps™ — the direct successor from the same author. Currently in beta (v0.9). If you’re evaluating Cronicle for a new deployment, it’s worth waiting to see whether xyOps™ matures before committing [README].
- Rundeck Community Edition — more complex to configure, but more mature multi-node job management, better RBAC, and a larger community. No database-free mode [5].
- Apache Airflow — the standard for data pipeline scheduling. Requires Python, a database, and significantly more infrastructure. Not a simple Cron replacement — it’s an orchestration platform [5].
- Dkron — distributed job scheduler built for cloud environments, uses Raft consensus for HA. More Kubernetes-friendly than Cronicle, actively developed [5].
- n8n — if your scheduled tasks involve integrating SaaS APIs and you want a visual workflow builder, n8n is a closer fit than a raw job scheduler.
- AWS EventBridge / GCP Cloud Scheduler — fully managed, no server to run, pay-per-invocation. Makes sense if you’re already cloud-native and your job volume is low.
For the specific use case of “I want a UI for my cron jobs on a small number of servers,” the practical shortlist is Cronicle vs Dkron vs Rundeck Community Edition. Cronicle wins on setup simplicity and UI polish. Dkron wins on Kubernetes/cloud-native architecture. Rundeck wins on enterprise features and community maturity.
Bottom line
Cronicle is a well-built, honest tool that solves a specific problem: making server-side job scheduling visible and manageable without requiring a database, a PhD in Kubernetes, or a SaaS subscription. The UI is clean, the setup is fast, the plugin system works with any language, and the real-time log viewer alone justifies the migration from raw cron for most people who’ve ever had to debug a failed overnight job without logs.
The thing that complicates a clean recommendation is the maintenance trajectory. The README announcement of xyOps™ is honest and the author deserves credit for signaling it clearly — but it means the risk profile of building on Cronicle has changed. For existing deployments, this is largely fine: bug fixes and security patches continue, and stable infrastructure doesn’t need new features. For a fresh deployment in 2026, it’s worth watching xyOps™ progress before committing.
If Cronicle fits your current requirements and you’re comfortable with the maintenance horizon, self-hosting on a $6–10 VPS is a straightforward win over any per-execution cloud alternative. If the setup is the blocker, that’s exactly what upready.dev deploys for clients: one-time setup, you own the infrastructure going forward.
Sources
- [1] mfyz.com — “Cronicle: My new Go-To Task Scheduler (+ it’s Open Source)” (Sep 3, 2024). https://mfyz.com/cronicle-my-new-go-to-task-scheduler-its-open-source/
- [2] Medium / Manik Somayaji — “Cronicle — a Task Scheduler” (Oct 29, 2023). https://medium.com/@somayajimanik/cronicle-a-task-scheduler-da4f409bf3e9
- [3] Elestio — “Managed Cronicle as a Service”. https://elest.io/open-source/cronicle
- [4] Klutch.sh Docs — “Deploying Cronicle”. https://docs.klutch.sh/guides/open-source-software/cronicle/
- [5] AIMultiple — “Top 12 Open Source Job Schedulers & 5 WLA Tools” (updated Mar 9, 2026). https://aimultiple.com/open-source-job-scheduler
Primary sources:
- GitHub repository and README: https://github.com/jhuckaby/cronicle (5,557 stars, MIT license)
- Official website: http://cronicle.net
- xyOps™ successor project: https://github.com/pixlcore/xyops