unsubbed.co

Langflow

Visual platform for building AI agents and MCP servers with drag-and-drop components, Python customization, and support for any LLM.

Self-hosted AI workflow builder, honestly reviewed. No marketing fluff, just what you get when you run it yourself.

TL;DR

  • What it is: Open-source (MIT) low-code platform for building AI agents, RAG pipelines, and LLM workflows through a visual node-based editor — with every flow deployable as a REST API or MCP server [2][3].
  • Who it’s for: Developers and technical founders who want to prototype AI pipelines fast, skip boilerplate, and ship flows as APIs without rebuilding everything in raw Python. Also viable for non-engineers who need to understand or modify AI logic visually [2][3].
  • Cost savings: Voiceflow (the closest mainstream SaaS comparator) starts at $50/mo per seat for production features. Langflow OSS self-hosted is free. A $6–10/mo VPS covers a working instance with no per-execution or per-seat pricing.
  • Key strength: The fastest path from “I have an LLM idea” to “I have a running API.” Visual builder + Python escape hatch + built-in MCP server + export-as-JSON means you’re never locked in [3][5].
  • Key weakness: Real production pain: documented 10–15 second pre-call delays, 100% CPU spikes, and a critical unauthenticated RCE vulnerability (CVE-2025-3248) that sat in the codebase until 2025. Not a set-it-and-forget-it self-host [1].

What is Langflow

Langflow is a visual workflow builder for AI pipelines. You place nodes — an LLM, a vector store, a prompt template, a retriever, a code block — connect them, and get a working chain. The canvas approach was explicitly inspired by tools like ComfyUI for Stable Diffusion: take something that normally requires dozens of lines of glue code and make it drag-and-drop [4].

Under the hood it’s Python. Every component in the visual editor is a Python class. If the built-in nodes don’t do what you need, you write a custom one in Python and drop it in. The flows are stored as JSON and can be exported, versioned, imported, or deployed directly [3][5].
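Because flows are plain JSON, they can be inspected, diffed, and version-controlled with standard tooling. A minimal sketch, assuming the `data.nodes`/`data.edges` layout used by recent exports (field names and the exact schema vary by version — check a real export from your instance):

```python
import json

# Hypothetical minimal export; real Langflow exports carry much more
# metadata per node (canvas positions, template fields, version info).
flow = {
    "name": "support-bot",
    "data": {
        "nodes": [
            {"id": "prompt-1", "data": {"type": "PromptTemplate"}},
            {"id": "llm-1", "data": {"type": "OpenAIModel"}},
        ],
        "edges": [{"source": "prompt-1", "target": "llm-1"}],
    },
}

# Round-trip through JSON exactly as an export/import would
exported = json.dumps(flow, indent=2)
reimported = json.loads(exported)

# List the component types in the flow -- handy for diffing in code review
types = [n["data"]["type"] for n in reimported["data"]["nodes"]]
print(types)  # ['PromptTemplate', 'OpenAIModel']
```

This is what makes the "no lock-in" claim concrete: the artifact you ship is a text file your existing git workflow already understands.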

The project was created in 2023 by Rodrigo Nader and Gabriel Luiz Freitas Almeida and has grown to 145,778 GitHub stars — a number that puts it in the top tier of the entire AI tooling category. The company behind it, Langflow AI, now offers a managed cloud alongside the OSS core [2][README].

Two things make the pitch concrete in 2025: MCP and API deployment. Every Langflow flow can be exposed as an MCP server, making your custom pipeline callable from Claude Desktop, Cursor, or any MCP client. And every flow is simultaneously a REST API with SSE streaming — so a flow you built visually can serve production traffic without rewriting it in another framework [3][5]. One Medium author described it as “Figma for LLM workflows + an API server built in” [3].
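Calling a deployed flow looks like any other REST endpoint. A hedged sketch using only the standard library — the `/api/v1/run/{flow_id}` path, the payload keys, and the `x-api-key` header follow the commonly documented shape, but verify against the API panel of your own instance:

```python
import json
import urllib.request

BASE_URL = "http://localhost:7860"  # Langflow's default port


def build_request(flow_id: str, message: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a run request for a deployed flow."""
    payload = {
        "input_value": message,  # user input fed to the flow's entry node
        "input_type": "chat",
        "output_type": "chat",
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/run/{flow_id}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )


req = build_request("my-flow-id", "Hello", "sk-placeholder")
print(req.full_url)  # http://localhost:7860/api/v1/run/my-flow-id

# Sending it requires a running instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The point is the shape, not the specifics: one POST per flow run, with streaming available via SSE for chat-style UIs.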

The README description is more blunt than the homepage: “a powerful tool for building and deploying AI-powered agents and workflows.” That’s a more useful summary than the homepage’s “Stop fighting your tools.”


Why people choose it over LangChain, n8n, and Dify

The reviews converge on a consistent picture: Langflow wins on visual iteration speed and Python flexibility, and shows its limits under production load and security scrutiny.

Versus raw LangChain. This is the original pitch and still the strongest one. LangChain gives you fine-grained control but demands boilerplate for every chain, every callback, every debug pass. Langflow wraps that complexity in a visual layer while keeping the escape hatch: if you need to drop into code, you write a Python node and it slots into the visual graph [2][4]. One reviewer notes the platform “accelerates prototyping from weeks to hours while lowering technical barriers” [2].

Versus n8n. n8n is the general-purpose automation platform that added AI nodes. Langflow is AI-native from the ground up. The comparison from Langflow’s own blog [5] is actually balanced: n8n wins on general-purpose automation and breadth of SaaS connectors; Langflow wins when your core use case is agent orchestration, RAG pipelines, and LLM composition. For a team building an AI customer support pipeline, Langflow’s node vocabulary is purpose-built for that. For a team automating invoice approvals and Slack notifications that occasionally touch an LLM, n8n is the better fit [5].

Versus Dify. Dify is all-in-one — knowledge base management, deployment, a chat UI, basic agents — but less flexible on the engineering side. Langflow gives up the managed knowledge-base workflow in exchange for Python-level customization and cleaner MCP/API integration [1][5]. If your team includes engineers who want to write custom retrieval logic, Langflow is the better tool. If your team is non-technical and needs a product with guardrails, Dify is more approachable.

Versus LangGraph. LangGraph (also from the LangChain ecosystem) is the code-first state-machine approach for complex, traceable agent workflows. The ZenML review [1] positions them as complements: LangGraph for deterministic, auditable state machines; Langflow for rapid visual iteration. The practical split: Langflow for prototyping and API exposure, LangGraph when you need fine-grained control of agent state transitions in production.

On developer experience. The Medium first-impressions review [4] is useful because it documents the friction honestly: Docker setup had version conflicts, a Langfuse integration was broken in the then-current release, and getting a working version required manually downgrading images. That was 2024 — the installation path has improved since — but it signals that this is not a one-click appliance. You need to know which version you’re running [4].


Features

Based on the README, website, and article descriptions:

Core visual builder:

  • Node-based canvas: LLMs, prompts, retrievers, vector stores, code blocks, agents [README][3]
  • Every node input/output is typed — connections are validated, not just visual [4]
  • Interactive playground with step-by-step execution inspection [README]
  • Flows exported as JSON — importable, shareable, version-controlled [3][5]
  • Pre-built templates and community-contributed flows [website]

Python customization:

  • Any node can be replaced or extended with a Python class [3][5]
  • Source code access to every built-in component [README]
  • Custom components integrate with the same typed input/output contract [5]
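The typed input/output contract is what makes connections validated rather than merely visual. This toy model (plain Python — deliberately *not* Langflow's actual component API, whose class names and import paths differ by version) shows why type-tagged ports let a canvas reject an invalid wire before anything runs:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Port:
    name: str
    type: str  # e.g. "Message", "Data", "Embeddings"


@dataclass
class Node:
    name: str
    inputs: list
    outputs: list


def connect(src: Node, out_name: str, dst: Node, in_name: str) -> None:
    """Refuse the connection unless the port types match."""
    out = next(p for p in src.outputs if p.name == out_name)
    inp = next(p for p in dst.inputs if p.name == in_name)
    if out.type != inp.type:
        raise TypeError(f"{out.type} output cannot feed {inp.type} input")


prompt = Node("Prompt", inputs=[], outputs=[Port("text", "Message")])
llm = Node("LLM", inputs=[Port("prompt", "Message")], outputs=[Port("reply", "Message")])
store = Node("VectorStore", inputs=[Port("vectors", "Embeddings")], outputs=[])

connect(prompt, "text", llm, "prompt")  # OK: Message -> Message
try:
    connect(prompt, "text", store, "vectors")  # rejected: Message -> Embeddings
except TypeError as e:
    print(e)
```

A custom Python component plugs into the same contract: declare typed ports, and the editor treats it like any built-in node.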

Agents and orchestration:

  • Multi-agent orchestration with conversation history and retrieval [README]
  • Agents-as-tools pattern: one agent can call other flows as tools [README][5]
  • Human-in-the-loop support (implied by “step-by-step control” in README)
  • Compatible with OpenAI, Anthropic, Google, Meta, HuggingFace, Ollama, and others [README][2]

Deployment:

  • Deploy as REST API with SSE streaming [README][3][5]
  • Deploy as MCP server — every flow becomes an MCP tool callable from Claude Desktop, Cursor, Windsurf [README][5]
  • Langflow is also an MCP client, meaning it can consume external MCP servers [5]
  • Docker single-container and Docker Compose options [README]
  • Langflow Desktop for macOS and Windows — all dependencies bundled, no Python environment needed [README]
  • Export as JSON for embedding in other Python apps [README][3]
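Wiring a Langflow MCP server into Claude Desktop is a few lines of client config. A sketch of `claude_desktop_config.json` — the `uvx mcp-proxy` bridge (stdio-to-SSE) and the endpoint path shown here are assumptions based on common setups; newer Langflow versions scope the MCP endpoint per project, so copy the exact URL from your instance's MCP tab:

```json
{
  "mcpServers": {
    "langflow": {
      "command": "uvx",
      "args": ["mcp-proxy", "http://127.0.0.1:7860/api/v1/mcp/sse"]
    }
  }
}
```

Once registered, every flow on that instance appears to the MCP client as a callable tool.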

Observability:

  • Native LangSmith and Langfuse integrations [README][4]
  • Enterprise observability described as production-ready [README][3]

Integrations shown on website: OpenAI, Anthropic, Azure, AWS Bedrock, Google Cloud, HuggingFace, Ollama, Pinecone, Weaviate, Milvus, Qdrant, MongoDB, Supabase, Composio, CrewAI, Tavily, and many others [website].

What’s missing or limited: The ZenML review [1] flags a 100MB maximum file upload size with documented memory leaks during repeated file operations — a real constraint for RAG pipelines processing large document sets. Governance features (SSO, RBAC, audit logs) exist for the enterprise tier, but the community edition’s auth story is basic: user management is disabled by default and must be enabled manually via environment variables [4].
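Enabling auth on the community edition is an environment-variable exercise. A hedged `.env` sketch — these variable names follow Langflow's documented pattern, but confirm the exact names and defaults against the docs for your version:

```
LANGFLOW_AUTO_LOGIN=false                 # require login instead of auto-admin
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=change-me
LANGFLOW_SECRET_KEY=<random-32-byte-key>  # encrypts stored credentials
LANGFLOW_NEW_USER_IS_ACTIVE=false         # new signups need admin approval
```

None of this is on by default — which is exactly why an unconfigured public instance is a liability.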


Pricing: SaaS vs self-hosted math

Langflow Cloud (their managed SaaS):

  • Free tier available
  • Paid tiers for production — exact pricing not published publicly; contact sales for enterprise [website]

Self-hosted (OSS):

  • Software: $0 (MIT license) [README]
  • Infrastructure: $6–15/mo on a VPS with 2–4GB RAM
  • Python 3.10–3.13 required; uv recommended as package manager [README]

Voiceflow for comparison (the closest mainstream SaaS comparator):

  • Starter: free tier with limited usage
  • Pro: starts around $50/mo per editor seat
  • Enterprise: custom pricing

Concrete savings math: A small team of three using Voiceflow Pro for AI agent development pays roughly $150/mo. Self-hosting Langflow on a $10/mo VPS brings that to $10/mo — or $1,680/year saved — before accounting for the development overhead of managing the instance yourself.
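The arithmetic behind that figure, spelled out — seat count and prices are this review's working assumptions, not quotes:

```python
SEATS = 3
VOICEFLOW_PER_SEAT = 50  # $/mo, Pro-tier estimate from the comparison above
VPS = 10                 # $/mo, self-hosted Langflow instance

saas_monthly = SEATS * VOICEFLOW_PER_SEAT        # 150
savings_yearly = (saas_monthly - VPS) * 12
print(f"${savings_yearly}/year saved")  # $1680/year saved
```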

The honest qualifier: Langflow’s managed cloud pricing isn’t transparent. If you’re evaluating for a budget-sensitive team and want predictable SaaS pricing, the self-hosted path is the clear cost play. The managed cloud is aimed at enterprise accounts where the pricing is negotiated, not listed.


Deployment reality check

The install path in 2025 has two clean options: installing the Python package via uv (recommended), or Docker. The Desktop app is the genuinely zero-friction option — packaged for macOS and Windows with all dependencies bundled [README].

What a real self-hosted setup requires:

  • Linux VPS with 2GB RAM minimum (4GB+ if running multiple flows with LLM calls)
  • Python 3.10–3.13 or Docker
  • A reverse proxy (Caddy or nginx) if you want HTTPS
  • External LLM API keys or a local Ollama instance
  • Optional but recommended: LangSmith or Langfuse for observability
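A minimal Docker Compose sketch for the VPS setup above — the `langflowai/langflow` image name matches the public Docker Hub image, but treat the tag, port, and volume path as assumptions to adjust, and always pin a specific patched version rather than `latest`:

```yaml
services:
  langflow:
    image: langflowai/langflow:latest  # pin a patched version in production
    restart: unless-stopped
    ports:
      - "127.0.0.1:7860:7860"  # loopback only; expose via reverse proxy
    environment:
      - LANGFLOW_AUTO_LOGIN=false
    volumes:
      - langflow-data:/app/langflow  # persist flows across upgrades
volumes:
  langflow-data:
```

Binding to 127.0.0.1 keeps the instance off the public interface until Caddy or nginx terminates HTTPS in front of it — worth the extra step given the security history covered below.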

What can go wrong — and this section matters:

The ZenML review [1] documents three production-grade concerns that are more serious than typical “setup was hard” complaints:

  1. Latency spikes. Documented delays of 10–15 seconds before an LLM call begins in some configurations. For interactive applications, that’s not a slow response — that’s a broken one. CPU hitting 100% under load suggests the Python execution model doesn’t scale horizontally without careful infrastructure work [1].

  2. File handling. 100MB maximum upload with memory leaks on repeated file operations. If you’re building a RAG pipeline that processes large documents repeatedly (common use case), this is a real constraint that requires workarounds [1].

  3. Critical security vulnerability. CVE-2025-3248 — an unauthenticated remote code execution vulnerability in the /api/v1/validate/code endpoint — was active until Langflow version 1.3. If you ran a publicly-accessible self-hosted instance before upgrading, arbitrary code execution was possible without authentication. Two additional CVEs (CVE-2025-68477 and CVE-2025-68478) were patched in version 1.7.1, alongside a data-loss bug in 1.7.0 that made all persisted state inaccessible after upgrade [README].

The README’s own cautions list is unusually long for an open-source tool: four separate “do not upgrade to version X” warnings, each tied to a different critical issue. This is not a sign of a poorly maintained project — the issues are being fixed — but it means that running a production Langflow instance requires active version management, not passive updates.
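Active version management reduces to a version comparison against known advisory floors. A self-contained sketch using only the floor versions named in this review (1.3 for CVE-2025-3248, 1.7.1 for the later pair) — how you fetch the running version from your instance is left as an exercise, since the endpoint for that varies by release:

```python
def parse(version: str) -> tuple:
    """'1.7.1' -> (1, 7, 1); pads short versions so '1.3' compares as (1, 3, 0)."""
    parts = [int(p) for p in version.split(".")]
    return tuple(parts + [0] * (3 - len(parts)))


# Advisory floors taken from the text above
FLOORS = {
    "CVE-2025-3248 (unauthenticated RCE)": "1.3",
    "CVE-2025-68477 / CVE-2025-68478": "1.7.1",
}


def unpatched_advisories(running: str) -> list:
    """Return the advisories a given running version is still exposed to."""
    return [cve for cve, floor in FLOORS.items() if parse(running) < parse(floor)]


print(unpatched_advisories("1.2.0"))  # exposed to both advisories
print(unpatched_advisories("1.7.1"))  # []
```

Dropping a check like this into a cron job or CI step is cheap insurance for any public-facing deployment.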

Realistic time estimates:

  • Technical user with Docker experience: 30–60 minutes to a working instance
  • Developer unfamiliar with Python environment management: 1–2 hours
  • Non-technical founder following a guide: this is not the right install path; use the Desktop app for local exploration, or pay for the managed cloud or a deployment service

Pros and Cons

Pros

  • 145,778 GitHub stars. This is not a niche experiment. It has more stars than most enterprise tools and an active community [README].
  • MIT license. Genuinely permissive — you can self-host, embed in your own SaaS, fork and resell without a commercial agreement [README][4].
  • Fastest visual prototyping in the AI-native category. Multiple reviewers confirm the step from idea to working flow is measured in hours, not days [2][3].
  • Python escape hatch. Every node is a Python class. You’re never trapped by the built-in node vocabulary — you extend it [3][5].
  • MCP native — both server and client. Every flow is automatically an MCP tool. And Langflow can consume external MCP servers as tools within flows. For teams already using Claude Desktop or Cursor, this is an immediate integration win [README][5].
  • Desktop app. Zero-dependency install for macOS and Windows means non-technical collaborators can run it locally without touching the command line [README].
  • Observability-first. Native LangSmith and Langfuse support makes debugging and evaluation straightforward for teams that care about production quality [README][4].
  • Flow-as-JSON. Flows are plain JSON — version-controlled, importable, shareable, embeddable [3][5].

Cons

  • Production latency problems. 10–15 second pre-call delays and 100% CPU spikes documented under load. Not suitable for latency-sensitive production traffic without infrastructure mitigation [1].
  • 100MB file upload ceiling with memory leaks. Real constraint for document-heavy RAG pipelines [1].
  • Security track record requires vigilance. Three significant CVEs in recent history, one of which was a critical unauthenticated RCE. Staying on current versions is non-negotiable for any public-facing deployment [README][1].
  • Version 1.7.0 was yanked. A data-loss bug — persisted flows and state became inaccessible after upgrade — required pulling the release entirely. Running a production instance means following the changelog closely [README].
  • Governance is DIY. Auth, RBAC, SSO, and audit logs require assembly or the enterprise tier. The community edition ships with user management off by default [4].
  • Not a cloud solution. Langflow’s own blog acknowledges the hosting complexity: “often presents complexity around hosting decisions around where to host it and how” [5]. The managed cloud exists to address this, but at enterprise pricing.
  • Enterprise features gated. Exact tiers not published; contact sales for anything beyond the free tier [website].

Who should use this / who shouldn’t

Use Langflow if:

  • You’re a developer or technical founder who needs to prototype AI agents and RAG pipelines quickly and wants to ship them as APIs without rewriting in a different framework.
  • Your team includes both engineers and non-engineers who need to inspect or modify AI logic — the visual canvas makes the pipeline legible to people who don’t read Python.
  • You want MIT-licensed infrastructure you can embed in your product or resell.
  • You’re already using Claude Desktop or Cursor and want your custom flows accessible as MCP tools.
  • You want to avoid per-execution or per-seat pricing — the self-hosted path has none.

Skip it (pick LangGraph instead) if:

  • Your use case is complex, stateful agent workflows where deterministic state management and auditability are non-negotiable.
  • You need fine-grained control over agent memory, episodic state, and execution tracing in production [1].

Skip it (pick n8n instead) if:

  • You’re primarily automating business workflows (email, CRM, spreadsheets) that happen to include an occasional LLM step.
  • You need hundreds of pre-built SaaS connectors out of the box [5].

Skip it (pick Dify instead) if:

  • Your team is non-technical and needs a fully managed product with a built-in knowledge base UI and minimal infrastructure decisions.
  • You want opinionated guardrails rather than Python-level flexibility [1].

Skip self-hosting entirely if:

  • You don’t have someone who can track CVEs and manage version upgrades on a running service. The security history makes unmanaged self-hosting a real risk for public-facing deployments.

Alternatives worth considering

  • LangGraph — Code-first state machines for complex, production-grade agents with full execution tracing. The power-user choice for engineering teams [1][5].
  • n8n — General-purpose workflow automation with AI nodes. Better for automation-heavy use cases with broad SaaS connectors [5].
  • FlowiseAI — Visual AI builder similar to Langflow but focused more on rapid prototyping and RAG templates [1].
  • Dify — All-in-one platform: RAG, agents, deployment, knowledge management. More opinionated, less flexible [1][5].
  • CrewAI — Role-based agent teams with sequential task execution. Better than Langflow for structured multi-agent pipelines where each agent has a defined role [1][5].
  • Voiceflow — The closest mainstream SaaS comparator. Managed platform with a strong no-code focus; significantly more expensive at scale but lower operational burden.
  • Haystack (deepset) — RAG-first framework for enterprise document intelligence. Better for pure RAG pipelines that need rigorous evaluation and retrieval optimization.

For a developer building AI-powered features and wanting visual iteration alongside API deployment: Langflow vs Dify is the real decision. Langflow wins on flexibility and MCP integration. Dify wins on ease for non-technical teams.


Bottom line

Langflow earns its 145,000 stars on the strength of a genuine proposition: take the most tedious part of AI development — wiring together LLMs, retrievers, tools, and memory — and make it visual and iterative, while keeping Python as a full escape hatch. The MCP-native deployment story is a real differentiator in 2025; a flow you built in an afternoon is automatically callable from Claude Desktop. For developers who prototype frequently, that’s valuable.

The honest caveat is that the production story has gaps. The CVE history and version-yanking incidents signal that self-hosting requires active maintenance, not passive deployment. The latency and CPU issues under load mean Langflow is closer to a powerful development environment than a hardened production runtime — at least without infrastructure work. For non-technical founders, the Desktop app is the right entry point; public-facing self-hosting requires someone technical watching the security advisories. If the setup and maintenance overhead is the blocker, that’s exactly what upready.dev handles for clients — one-time deployment, properly secured, you own it.


Sources

  1. ZenML Blog, “We Tried and Tested 8 Langflow Alternatives for Production-Ready AI Workflows”. https://www.zenml.io/blog/langflow-alternatives

  2. Silent Infotech Blog, “Langflow AI Workflow Automation: Complete Guide to Building LLM Apps”. https://silentinfotech.com/blog/ai-automation-10/why-is-langflow-quietly-becoming-the-go-to-tool-345

  3. Ravindra Satyanarayana, Medium, “LangFlow: from prompt experiments to production agentic workflows” (Dec 11, 2025). https://medium.com/@ravisat/langflow-from-prompt-experiments-to-production-agentic-workflows-a46405210a22

  4. Merlin Becker, Medium, “First Impressions: Evaluating Langflow’s Graph-Based UI” (Feb 18, 2024). https://merlinbecker.de/first-impressions-evaluating-langflows-graph-based-ui-c28594331739

  5. Langflow Blog, “The Complete Guide to Choosing an AI Agent Framework in 2025”. https://www.langflow.org/blog/the-complete-guide-to-choosing-an-ai-agent-framework-in-2025
