Leon
Leon is an open-source personal assistant who can live on your server. He is built on top of Node.js, Python, and artificial intelligence concepts.
Open-source personal assistant, honestly reviewed. What you actually get when you self-host the project with 17K GitHub stars — and what you need to know before you deploy it.
TL;DR
- What it is: An open-source (MIT), self-hosted personal assistant — voice and text commands trigger modular “skills” that live on your own server [1][2].
- Who it’s for: Technical hobbyists and privacy-focused builders who want full data control and don’t mind working with an experimental codebase. Not ready for non-technical founders expecting a polished product today.
- Cost savings: No SaaS tier exists. Leon is purely self-hosted, which means your only costs are a VPS and your time [2].
- Key strength: MIT license, 17,055 GitHub stars, modular skill architecture, and a genuinely ambitious roadmap toward autonomous LLM-based agents [1][2].
- Key weakness: The project is mid-rewrite as of early 2026. The stable legacy version predates modern LLMs. The new agentic core is explicitly marked unstable. This is one person’s project, maintained in spare time — and it shows in the commit cadence [1].
What is Leon
Leon is a self-hosted personal assistant written in Node.js and Python. You install it on a server, give it commands via text or voice, and it executes “skills” — modular scripts that perform tasks like fetching information, controlling media, running workflows, or integrating with third-party APIs [2].
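To make the modular-skill idea concrete, here is a minimal dispatcher sketch. This is a hypothetical illustration of the pattern, not Leon's actual SDK or skill API — trigger phrases, function names, and registration style are all invented for the example:

```python
# Hypothetical skill dispatcher — illustrates the modular-skill idea,
# NOT Leon's real API. Each "skill" is a standalone callable registered
# under a trigger phrase; adding or removing one doesn't touch the core.

SKILLS = {}

def skill(trigger):
    """Register a function as the handler for a trigger phrase."""
    def register(fn):
        SKILLS[trigger] = fn
        return fn
    return register

@skill("what time is it")
def tell_time(_query):
    from datetime import datetime
    return datetime.now().strftime("%H:%M")

@skill("ping")
def ping(_query):
    return "pong"

def dispatch(query):
    # Find the first skill whose trigger appears in the query.
    for trigger, fn in SKILLS.items():
        if trigger in query.lower():
            return fn(query)
    return "No matching skill."

print(dispatch("Leon, ping"))  # → pong
```

The point of the pattern is the isolation: a broken or removed skill only disables its own trigger, which is what makes a one-developer project like this maintainable at all.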
The pitch on the homepage is “Meet your virtual brain” and the GitHub description is simpler: “Leon is your open-source personal assistant.” The tool has been in active development since 2017, which makes it one of the older projects in the self-hosted assistant space [1].
Here’s the thing you need to know immediately, buried inside the README under a banner labeled Important Notice (as of 2026-01-11):
Leon is currently undergoing a massive architectural rewrite. The develop branch is highly experimental and may be unstable as I implement the new agentic core. If you are looking for the legacy, stable version (pre-LLM), please use the master branch. [1]
This is not a minor caveat. The project currently exists in two states simultaneously: a stable but pre-LLM legacy version (the master branch), and an experimental new architecture that hasn’t shipped yet. If you deploy Leon today, you’re either running the old version — which predates modern LLM capabilities — or you’re volunteering as a beta tester for something explicitly described as unstable.
The developer is transparent about why: “I’m taking a lot of time to work on the new core of Leon due to personal reasons. I can only work on it during my spare time. Hence, I’m blocking any contribution as the whole core of Leon is coming with many breaking changes.” [1]
That’s an honest statement, and it deserves to be weighed honestly.
Why people consider it
Leon has 17,055 GitHub stars. That’s not nothing — it puts it roughly in the same visibility tier as many well-maintained self-hosted tools. The stars accumulated over eight years of gradual interest from the privacy-conscious developer community that wants to run assistants locally rather than feeding commands to Amazon, Google, or Apple [1][2].
The core appeal is straightforward: you own the data, you run the server, and no third party sees your queries. The website states it plainly: “You are in control of your data. Leon lives on your server and you decide if you wish to make use of any third party.” [2]
This resonates with a specific kind of user — someone who was burned by Alexa’s always-on microphone, or who works with sensitive business data and doesn’t want their assistant workflow passing through AWS or Google’s servers. For those users, Leon’s MIT license and self-hosted architecture are genuinely attractive properties [1][2].
The ambition in the roadmap is also worth acknowledging. The new architectural direction — workflows built as Skills > Actions > Tools > Functions > Binaries, autonomous skill generation where Leon writes its own skill code on demand, and ReAct-style reasoning loops using local LLMs — is a technically interesting design, not marketing fluff [1]. Whether it ships, and when, is the open question.
Features
What exists in the legacy stable version (master branch):
- Voice and text command interface [2]
- Modular skill system — each skill is a standalone module you can add or remove [2]
- Text-to-Speech support: Google Cloud, AWS, IBM Watson, CMU Flite (offline); Alibaba Cloud and Microsoft Azure listed as coming soon [2]
- Speech-to-Text support: Google Cloud, IBM Watson, Coqui STT (offline); Alibaba Cloud and Microsoft Azure listed as coming soon [2]
- npm-based install via @leon-ai/cli: npm install --global @leon-ai/cli, then leon create birth [2]
- NLP-based command classification [2]
What is planned / in-progress for the new agentic core (develop branch):
- Restructured workflow architecture: Skills → Actions → Tools → Functions → Binaries — atomic components instead of monolithic scripts [1]
- Example given in the README: a “Video Translator” skill would orchestrate vocal isolation, voice cloning, ASR, and audio gender recognition as separate tools rather than running a single script [1]
- Autonomous skill generation — a meta-skill that writes new skill code when a user request doesn’t match an existing skill, then injects it into Leon’s memory for future reuse [1]
- ReAct reasoning loop using local LLMs — Leon reasons, acts, observes results, and iterates until the task is solved [1]
- Context filtering to reduce token usage and hallucinations when running on local hardware [1]
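The ReAct loop described in the roadmap can be sketched as a toy reason-act-observe cycle. Everything below is an illustrative stub — the "LLM" is a hard-coded lookup table standing in for a local model, and the tool names are invented; none of this is Leon's code:

```python
# Toy ReAct-style loop: reason → act → observe → iterate until solved.
# The stub_llm function stands in for a local LLM (e.g. one served by
# Ollama); in a real agent it would produce the next action from the
# task plus the observation history.

TOOLS = {
    "search": lambda q: "Paris" if "capital of France" in q else "unknown",
    "finish": lambda answer: answer,
}

def stub_llm(history):
    """Pretend policy: pick the next (action, argument) pair."""
    if not history:
        return ("search", "capital of France")  # Thought: look this up
    _, observation = history[-1]
    return ("finish", observation)              # Thought: I have the answer

def react_loop(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = stub_llm(history)   # Reason: choose an action
        result = TOOLS[action](arg)       # Act: run the chosen tool
        history.append((action, result))  # Observe: record the outcome
        if action == "finish":
            return result
    return None  # Gave up within the step budget

print(react_loop("What is the capital of France?"))  # → Paris
```

The planned context filtering matters precisely because each iteration of a loop like this feeds the growing history back into the model, which is expensive on local hardware.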
The honest framing: the legacy version is a functional but modest NLP-based command dispatcher. The planned version is ambitious. The gap between the two is where this project lives right now.
Roadmap items shown on the website include: Sound Recognition from Skills, Game Picker, Video Summarizer, Video Translator, offline STT implementation, Connected Memo Skill, Image to CSV/JSON, GitHub Skill, and Hacker Skill [2]. No release dates are attached to these.
Pricing: SaaS vs self-hosted math
There is no SaaS tier. No cloud version. No subscription. Leon is purely self-hosted software [1][2].
Self-hosted costs:
- Software license: $0 (MIT) [1]
- VPS: $4–10/month on Hetzner, Contabo, or DigitalOcean is sufficient for a Node.js + Python process
- Domain (optional): $10–15/year if you want a friendly URL
- Your time to install, configure, and maintain it
What you’re replacing and what you’d pay instead:
If you’re comparing against commercial alternatives:
- Amazon Alexa requires Echo hardware (~$30–$100) plus your data going to Amazon
- Google Assistant is bundled with Android but routes all queries through Google’s servers
- Custom GPT wrapper solutions via OpenAI API cost per-request — $0.01–$0.15 per query at volume
- Managed self-hosted LLM inference on a rented GPU (if you want the new LLM-based Leon) runs $20–$80/month depending on usage
For the legacy version: running a personal assistant that handles local commands with occasional cloud TTS/STT calls costs essentially $5/month in VPS + whatever the cloud speech providers charge (Google Cloud TTS offers 1 million characters free per month; AWS Polly offers 5 million characters free per month in the first year) [2].
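The monthly math above works out roughly as follows. All figures are illustrative assumptions for a light-usage legacy instance, not quotes from any provider:

```python
# Back-of-the-envelope monthly cost for a legacy Leon instance.
# Assumptions: a ~$5/month VPS, an optional $12/year domain, and TTS
# usage that stays inside Google Cloud's 1M-character free tier.
vps_monthly = 5.00
domain_monthly = 12.00 / 12       # optional friendly URL
tts_chars_per_month = 200_000     # well under the 1M free tier
tts_overage_cost = 0.00           # so cloud TTS costs nothing

total = vps_monthly + domain_monthly + tts_overage_cost
print(f"~${total:.2f}/month")     # → ~$6.00/month
```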
If your goal is to run everything offline — no cloud TTS, no cloud STT — Leon supports CMU Flite (TTS) and Coqui STT (STT), both free and local [2]. The tradeoff is quality: CMU Flite sounds robotic compared to cloud-based voices.
Deployment reality check
The official install path is npm-based, which is cleaner than the Docker Compose setup that many comparable tools require [2]:
npm install --global @leon-ai/cli
leon create birth
What you actually need:
- Node.js (LTS) and npm
- Python 3.x for skill scripts
- A Linux VPS or local machine with at least 1–2GB RAM for the legacy version
- If you want local STT (Coqui), budget additional setup time and more RAM
- If you want the experimental LLM-based version, a machine capable of running a local LLM (Ollama, LM Studio) — that means 8GB+ RAM minimum, 16GB+ for comfortable operation
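A quick pre-flight check covers the first two requirements above. This is a generic sketch — the commands are standard POSIX/Linux tooling, and the version expectations in the comments are the usual conventions, not figures from Leon's docs:

```shell
# Pre-flight check before installing the legacy branch.
# Verifies the runtimes exist; version thresholds are illustrative.
for cmd in node npm python3; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok: $cmd found"
  else
    echo "missing: $cmd"
  fi
done
free -m | awk '/^Mem:/ {print $2 " MB RAM total"}'  # 1-2 GB suffices for legacy
```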
What can go sideways:
The biggest deployment risk isn’t the installer — it’s knowing which branch you want. The README explicitly warns that documentation covers the legacy architecture and is outdated regarding the new agentic core [1]. If you follow a tutorial or guide written before 2025, it almost certainly describes the old classification-based Leon, not the LLM-based rewrite.
There’s also the maintenance question. This is a one-developer project maintained in spare time. If a Node.js upgrade breaks a skill, or a Python dependency conflicts, the fix depends on one person’s availability [1]. There’s a Discord community (discord.gg/MNQqqKg) for questions [1][2], but the bus factor is real.
Realistic time estimate for a developer: 30–60 minutes to a working legacy instance. For the new agentic core on the develop branch: budget time for debugging, expect instability, and read the Discord before you start [1].
Pros and Cons
Pros
- MIT license. Genuinely permissive — use it, fork it, embed it in a product, no restrictions [1].
- 17K+ GitHub stars over 8 years indicates real community interest, not a flash-in-the-pan project.
- Full data sovereignty. Nothing leaves your server unless you choose cloud TTS/STT providers. Works entirely offline if you use CMU Flite and Coqui STT [2].
- Honest project communication. The README contains a frank notice about the rewrite, the instability, and the single-developer constraint [1]. That’s rare and worth respecting.
- Ambitious roadmap. The planned agentic architecture — atomic tools, self-coding skills, ReAct reasoning on local LLMs — is technically coherent and genuinely interesting [1].
- npm-based install is simpler than Docker Compose setups in comparable tools [2].
Cons
- Actively mid-rewrite. You’re choosing between a stable but outdated legacy version and an unstable experimental one. There’s no stable modern version to deploy [1].
- Solo developer, spare-time project. The README says this explicitly. Maintenance velocity depends on one person’s availability [1].
- Documentation is outdated. The README itself says the documentation no longer reflects the technical architecture [1]. Following the docs may lead you to the wrong place.
- No third-party reviews available for this product. At time of writing, no substantial independent reviews of Leon-the-personal-assistant surface from technology publications. The 17K stars come from developer discovery, not press coverage.
- Legacy version is pre-LLM. It uses NLP classification, not a language model. It won’t understand natural language queries the way ChatGPT, Ollama, or even Home Assistant + LLM integrations do in 2026 [1].
- The new version’s timeline is unknown. No public release date for the stable agentic core [1].
- Small default skill set. Roadmap items like Video Summarizer and Connected Memo are still listed as planned features, not shipped capabilities [2].
Who should use this / who shouldn’t
Use Leon if:
- You’re a developer who wants a local-first, MIT-licensed assistant framework to build on top of — and you’re comfortable working with experimental code.
- You care deeply about data sovereignty and want zero cloud dependencies, including for the AI layer.
- You want to contribute to an open-source assistant project that has a technically interesting architectural direction.
- You’re patient. The new version will be worth watching, but you’re buying into a project mid-transition.
Skip it for now if:
- You’re a non-technical founder looking for a working Alexa/Siri replacement you can deploy and forget about. The project isn’t there yet.
- You need something production-stable. The legacy version is stable but dated; the new version is explicitly unstable [1].
- You want rich integrations with SaaS tools out of the box. Leon’s skill ecosystem is modest compared to what n8n or Home Assistant offer.
- Your assistant use case is primarily home automation — Home Assistant has a larger ecosystem, better hardware support, and more active development in that specific domain.
Skip it indefinitely if:
- You want vendor support, SLAs, or someone to call when it breaks.
- The idea of depending on a one-developer project for a core workflow makes you nervous.
Alternatives worth considering
- Home Assistant — if your primary use case is home automation with voice control. Much larger community, more integrations, actively maintained, also self-hostable. Not a general-purpose assistant but better-executed in its lane.
- Mycroft AI (now Neon AI / OVOS) — the other major open-source voice assistant project. Mycroft the company shut down in 2023, but the codebase forked into Open Voice OS (OVOS) and Neon AI. More mature voice pipeline than Leon but similarly fragmented.
- n8n — if your “personal assistant” use case is really workflow automation triggered by commands. n8n is actively developed, has hundreds of integrations, and handles complex multi-step logic. Different category but often solves the same underlying problem.
- Activepieces — MIT-licensed workflow automation with a cleaner UI than n8n, good for non-technical users who want Zapier-replacement automation.
- Ollama + Open WebUI — if you want a self-hosted LLM interface rather than a command-dispatcher assistant. Not voice-native but significantly more capable for natural language tasks.
- Flowise / Langflow — if you want to build LLM-powered agentic workflows visually. Closer to what Leon’s new architecture is aiming for, and shipping today.
Bottom line
Leon is an honest project with a long history and a genuine architectural vision. The MIT license, the privacy-first design, and the planned LLM-based agentic core are all real strengths. But deploying it in early 2026 means accepting one of two options: a stable legacy version that predates modern AI capabilities, or an unstable experimental rewrite that isn’t ready for daily use. The project is maintained by one developer in spare time, and it says so plainly. For technical hobbyists who want to watch and contribute to something interesting, Leon is worth following. For non-technical founders who need a working personal assistant on their server today, the timing is wrong — check back when the new core stabilizes.
Sources
- Leon GitHub Repository and README — github.com. https://github.com/leon-ai/leon
- Leon Official Website — getleon.ai. https://getleon.ai