unsubbed.co

Kong Gateway

Kong Gateway is the most widely deployed open-source API gateway, providing traffic management, authentication, rate limiting, and observability for APIs and microservices.

Open-source API infrastructure, honestly reviewed. No marketing fluff, just what you get when you put Kong in front of your services.

TL;DR

  • What it is: Open-source (Apache 2.0) API gateway — a reverse proxy that handles routing, authentication, rate limiting, and traffic control for your APIs, LLMs, and MCP servers [2].
  • Who it’s for: Engineering teams and platform engineers who need a production-grade API gateway with serious plugin extensibility, Kubernetes integration, and increasingly, teams routing traffic to multiple LLM providers [2][5].
  • Cost savings: Managed API gateway services (AWS API Gateway, Azure API Management) bill per million API calls and add up fast at volume. Kong’s open-source edition runs on your own infrastructure with no per-request fees.
  • Key strength: 43,011 GitHub stars, NGINX-based performance benchmarked at 50,000+ transactions per second per node, and a plugin architecture that covers virtually every API management use case [website scrape].
  • Key weakness: Configuration complexity is the most-cited friction point — Kong is not a “spin it up in twenty minutes” tool for non-technical founders, and its managed cloud pricing (per-service per-month model) gets expensive fast for organizations with many services [5][3].

What is Kong Gateway

Kong Gateway is a cloud-native reverse proxy and API gateway built on NGINX. You put it in front of your APIs — or LLM endpoints, or MCP servers — and it handles the common infrastructure layer: authentication, rate limiting, SSL termination, load balancing, logging, and traffic routing [2].

The project has been around since 2015 and describes itself in its README as a “cloud-native, platform-agnostic, scalable API / LLM / MCP Gateway distinguished for its high performance and extensibility via plugins.” The LLM and MCP additions are recent — the README now leads with all three traffic types as first-class concerns rather than treating AI as an afterthought bolted on via plugins [README].

The gateway is built around two core concepts. Services are the upstream APIs you’re proxying. Routes define which incoming requests get sent to which service. Everything else — authentication, rate limiting, transformations, logging — is a plugin you attach globally, to a service, or to a specific route. This architecture means you can add behavior incrementally without rewriting anything [2].
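To make the service/route/plugin model concrete, here is a toy sketch of the hierarchy and how scoping resolves — an illustration of the configuration model only, not Kong’s implementation (Kong applies the most specific instance of a given plugin: route-level beats service-level beats global):

```python
# Toy model of Kong's service/route/plugin hierarchy.
# Entity names ("orders", paths, upstream URLs) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Plugin:
    name: str                 # e.g. "rate-limiting", "key-auth"
    config: dict = field(default_factory=dict)

@dataclass
class Route:
    name: str
    paths: list               # path prefixes this route matches
    plugins: list = field(default_factory=list)

@dataclass
class Service:
    name: str
    url: str                  # upstream the gateway proxies to
    routes: list = field(default_factory=list)
    plugins: list = field(default_factory=list)

def effective_plugins(global_plugins, service, route):
    """Resolve which plugin instance applies for a request on `route`.

    The most specific configuration of a given plugin wins:
    route-level overrides service-level overrides global.
    """
    resolved = {p.name: p for p in global_plugins}
    resolved.update({p.name: p for p in service.plugins})
    resolved.update({p.name: p for p in route.plugins})
    return resolved

# One upstream, one route; rate limiting tightened at the route level.
route = Route("orders-route", ["/orders"],
              plugins=[Plugin("rate-limiting", {"minute": 10})])
svc = Service("orders", "http://orders.internal:8080", routes=[route],
              plugins=[Plugin("rate-limiting", {"minute": 100})])
plugins = effective_plugins([Plugin("key-auth")], svc, route)
# key-auth applies from the global scope; the route's 10/minute limit
# overrides the service's 100/minute limit.
```

The point of the sketch is the incremental composition: attaching `key-auth` globally or loosening a rate limit for one route is a config change at one scope, not a rewrite.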

Kong is backed by Kong Inc. (the company), which maintains the open-source project and sells commercial tiers on top of it via its Konnect platform. The open-source edition is genuinely Apache 2.0 — no “fair-code” or commercial use restrictions [merged profile]. The company also acquired Insomnia (the REST client) in 2019, expanding its footprint across the full API development lifecycle, not just the runtime layer [4].


Why people choose it over alternatives

The evaluations that appear across reviews land in roughly the same place: Kong wins on performance, extensibility, and open-source legitimacy, and loses on complexity and the learning curve for initial configuration.

Performance. The NGINX foundation is not marketing — Kong’s own benchmarks cite 50,000+ transactions per second per node, and this claim survives third-party scrutiny because NGINX itself is battle-hardened at those load levels [website scrape]. For teams that have outgrown AWS API Gateway’s throughput or want to stop paying per-call fees at high volume, this matters.

Plugin architecture. This is Kong’s defining technical bet. Rather than baking every capability into the core, Kong exposes a Plugin Development Kit that lets you write custom plugins in Lua or Go. The result is a hub of plugins covering rate limiting, JWT/OAuth/basic auth, request transformation, response caching, OpenTelemetry tracing, Kafka forwarding, and many others — all composable per route or service [2][5]. DreamFactory’s comparative review [5] cites “extensive customization” as Kong’s clearest advantage over competitors like MuleSoft and WSO2, which lock more behavior into proprietary configuration.

Kubernetes integration. Kong ships an official Kubernetes Ingress Controller, which means you configure routing the same way you configure everything else in Kubernetes — via CRDs and declarative YAML. For teams already running Kubernetes, this removes the mental-model mismatch between your app infrastructure and your gateway [website scrape][2].
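For illustration, routing through the Ingress Controller looks like any other Ingress resource. The names below are hypothetical; the `konghq.com/strip-path` annotation is one of the Kong-specific extensions, and `ingressClassName: kong` is what hands the resource to Kong rather than another controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api                    # hypothetical names throughout
  annotations:
    konghq.com/strip-path: "true"     # strip /orders before proxying upstream
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-svc
                port:
                  number: 8080
```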

AI gateway capabilities. The 2024-2025 additions are substantial: Kong now provides a “Universal LLM API” that routes requests across OpenAI, Anthropic, GCP Gemini, AWS Bedrock, Azure AI, Databricks, Mistral, Huggingface, and others through a single gateway layer. On top of that it adds semantic caching, semantic routing, semantic security (prompt injection detection), MCP traffic governance, and analytics for AI workloads — 60+ AI-specific features by Kong’s own count [README]. Whether you need all of this depends on your situation, but if you’re already running Kong and adding LLM integrations, having the AI layer in the same place is a real operational benefit.
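Semantic caching is the easiest of these features to build intuition for: instead of keying the cache on the exact request, the gateway embeds the prompt and serves a cached response when a new prompt lands close enough in embedding space. The sketch below is a toy, not Kong’s implementation — the `embed()` stub is a fake bag-of-words stand-in where a real gateway would call an embedding model:

```python
# Toy semantic cache: serve a cached LLM response when a new prompt is
# "close enough" in embedding space. The embed() stub is deliberately fake.
import math
import re
from collections import Counter

def embed(text):
    # Stand-in "embedding": a bag-of-words count, so the example is
    # self-contained. A real gateway calls an embedding model here.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []           # list of (embedding, response)

    def get(self, prompt):
        query = embed(prompt)
        for vec, response in self.entries:
            if cosine(query, vec) >= self.threshold:
                return response     # cache hit: skip the LLM call entirely
        return None

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("What is an API gateway?", "A reverse proxy that ...")
hit = cache.get("What is an API gateway")       # near-identical prompt: hit
miss = cache.get("How much does Konnect cost")  # unrelated prompt: miss
```

The operational appeal is cost: every cache hit is an LLM call you didn’t pay for. The tuning risk is the threshold — set it too loose and semantically different prompts get someone else’s answer.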

Against Azure API Management and AWS API Gateway. The dedicated cloud gateways article [1] frames the argument cleanly: managed CSP gateways tie you to one cloud vendor, add extra hops between the gateway and your backends, and limit configuration flexibility. Kong’s Dedicated Cloud Gateways now run across all three major CSPs (AWS, Azure, GCP) with a 99.95% SLA and 25+ regions, which is the product-level answer to the vendor lock-in concern [1].

Against Tyk. Both are open-source API gateways with commercial tiers. Tyk is generally described as simpler to set up for smaller teams. Kong is described as more powerful and battle-tested at enterprise scale, with a larger plugin ecosystem [5].

Against MuleSoft and WSO2. These are enterprise API management platforms, not just gateways — they include developer portals, analytics, lifecycle management, and integration design tools baked in. They’re also significantly more expensive and complex. Kong competes at the gateway layer specifically; Konnect (Kong’s managed platform) extends into some of the same management territory [5].


Features

Core gateway:

  • Advanced routing, load balancing, health checking — all configurable via Admin API or declarative config [README]
  • Authentication: JWT, basic auth, OAuth 2.0, API keys, ACLs, mTLS [2]
  • Rate limiting, request/response transformation, header manipulation [2][README]
  • SSL/TLS termination, proxy support for L4 and L7 traffic [README]
  • DB-less deployment mode (declarative YAML, no database required) [2]
  • Hybrid deployment: control plane/data plane separation [README]
  • Plugin ordering: declaratively configure plugin execution order [website scrape]

AI gateway:

  • Universal LLM API routing across 10+ providers (OpenAI, Anthropic, Bedrock, Gemini, Azure AI, Mistral, Huggingface, others) [README]
  • Semantic caching — cache semantically similar LLM responses to reduce API costs [README]
  • Semantic routing — route LLM requests by semantic content, not just headers or URLs [README]
  • Semantic security — detect prompt injection attempts before they hit your models [README]
  • MCP traffic governance, MCP security, MCP observability [README]
  • MCP autogeneration from any RESTful API [README]
  • AI observability and analytics across all provider traffic [README]

Operations and Kubernetes:

  • Native Kubernetes Ingress Controller [website scrape][2]
  • OpenTelemetry tracing [website scrape]
  • Declarative configuration via decK (GitOps-compatible) [2]
  • Terraform provider for infrastructure-as-code [2]
  • Consumer groups with tiered rate limiting [website scrape]
  • Gateway event hooks (webhooks on config changes) [website scrape]

Commercial/enterprise features (Konnect-gated):

  • Developer portal with app auto-linking [website scrape]
  • Service Hub (global service catalog) [website scrape]
  • API analytics with data retention up to 1 year [website scrape]
  • Audit logging [website scrape]
  • Hosted control plane and database (removes operational overhead) [website scrape]
  • 99.9% uptime SLA [website scrape]

Pricing: SaaS vs self-hosted math

Kong open-source (self-hosted):

  • Software license: $0 (Apache 2.0)
  • Infrastructure: your own servers or cloud VMs
  • No per-request fees, no per-service fees — you run it, you pay only for compute

Kong Konnect (managed platform) — from SaaSworthy data [3]:

  • Serverless tier: Free base, then $20 per first 1M API requests. Good for development and prototyping, not production volume.
  • Self-Hosted/K8s: $105/month per gateway service + $34.25 per 1M API requests. Includes custom plugins, private networking.
  • Dedicated Cloud: Same per-service and per-request pricing, plus cloud infrastructure costs ($1/hour for network, $0.15/GB bandwidth). Fully managed, multi-cloud, auto-scaling.

Where the math gets uncomfortable:

Say you have 20 services and 500M API requests per month on the Self-Hosted/K8s tier. That’s $2,100/month in service fees plus ~$17,000/month in request fees — a number that makes enterprise procurement people reach for the phone. The pricing model is designed for organizations that can negotiate volume contracts, not for startups doing math on a spreadsheet.
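The arithmetic behind that number, using the Self-Hosted/K8s rates above (from SaaSworthy’s listing [3] — treat them as indicative, not a quote):

```python
# Back-of-envelope Konnect Self-Hosted/K8s tier math.
PER_SERVICE_MONTHLY = 105.00    # $ per gateway service per month
PER_MILLION_REQUESTS = 34.25    # $ per 1M API requests

def konnect_monthly_cost(services, monthly_requests_millions):
    service_fees = services * PER_SERVICE_MONTHLY
    request_fees = monthly_requests_millions * PER_MILLION_REQUESTS
    return service_fees, request_fees

# The scenario from the paragraph above: 20 services, 500M requests/month.
svc_fees, req_fees = konnect_monthly_cost(20, 500)
# svc_fees = 2100.0, req_fees = 17125.0 -> roughly $19,225/month total
```

Note where the weight sits: at this volume the per-request fees are roughly eight times the per-service fees, which is why modeling call volume upfront matters more than counting services.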

The self-hosted alternative:

A reasonably provisioned VPS or Kubernetes cluster can run Kong open-source with no licensing cost. For teams at 10-50 services and moderate traffic, the difference between $0 (open-source on your own infra) and $1,050+/month (Konnect Self-Hosted tier) is significant. The tradeoff is operational ownership: you manage the control plane, the database (PostgreSQL), and upgrades yourself.

Comparison to cloud-native alternatives:

AWS API Gateway charges $3.50/million API calls for REST APIs, $1.00/million for HTTP APIs. At 500M calls/month that’s $1,750/month just in call fees, before any data transfer. Kong open-source self-hosted beats this at scale once your infrastructure cost is amortized. Azure API Management starts around $0.35/million calls on the Consumption tier but charges separately for gateway units on Standard and Premium.
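The same paragraph as arithmetic, using the published list prices cited above (real bills add data transfer, caching, and tiered discounts):

```python
# AWS API Gateway call fees at the volume discussed above.
calls_millions = 500
aws_rest_fees = calls_millions * 3.50   # REST APIs: $3.50 per 1M calls
aws_http_fees = calls_millions * 1.00   # HTTP APIs: $1.00 per 1M calls
# aws_rest_fees = 1750.0 $/month, aws_http_fees = 500.0 $/month,
# before data transfer. Kong OSS self-hosted: $0 in call fees; your cost
# is whatever the nodes running it cost.
```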

The self-hosted cost math is favorable for high-traffic teams. The setup cost is real.


Deployment reality check

Kong is not a one-afternoon project for someone new to infrastructure. DreamFactory’s comparison [5] explicitly flags “complexity in configuration” and “resource intensity” as Kong’s primary cons, and this tracks with what the documentation describes.

What you actually need for a basic deployment:

  • A server or Kubernetes cluster (recommended: 2+ cores, 4GB+ RAM for production)
  • PostgreSQL (for traditional mode) or nothing (for DB-less declarative mode)
  • Docker or Kubernetes
  • A reverse proxy or load balancer in front of Kong for TLS termination
  • Familiarity with Kong’s Admin API or decK CLI for configuration

DB-less mode is worth knowing about [2]. You define all your services, routes, and plugins in a YAML file and Kong reads it at startup — no database required. This is operationally simpler and more GitOps-friendly, but it means no runtime configuration changes (you update the file and reload). For teams already comfortable with declarative infrastructure, this is actually the better mental model. For teams expecting a UI to click around in, it’s friction.
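A minimal DB-less config, to make the model concrete. The service name and upstream URL are placeholders; the overall shape (`_format_version`, nested services/routes/plugins) follows Kong 3.x declarative config — check the docs for the exact schema of the version you run:

```yaml
_format_version: "3.0"
services:
  - name: orders                          # placeholder upstream
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting
        config:
          minute: 60
          policy: local
```

This one file is the entire gateway state, which is exactly what makes it version-controllable: a pull request that adds a route is reviewable like any other infrastructure change.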

Kong Manager (the web UI) is included in the open-source edition for local management [README]. It runs on port 8002 in the default Docker setup and lets you configure services, routes, and plugins without touching the Admin API directly. It’s not as polished as commercial alternatives, but it exists.

What can go sideways:

  • Plugin ordering bugs: plugins execute in a defined order, and if you misconfigure it, auth checks can run after rate limiting in ways that create subtle security gaps. The plugin ordering feature in newer versions addresses this, but it requires understanding Kong’s plugin execution model [website scrape].
  • Hybrid deployment (control plane/data plane separation) is powerful for multi-region setups but introduces its own operational complexity — certificate management, network configuration between planes [README].
  • Upgrading Kong versions in production requires care. Konnect’s managed tier handles this; self-hosted teams own it [website scrape].

Realistic time estimate for a technical team: 2–4 hours to a working gateway with basic auth and rate limiting on Docker Compose. Days to weeks to a production-grade deployment with Kubernetes, observability, proper TLS, and all your services configured.
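A sketch of the kind of Docker Compose starting point that estimate assumes, running DB-less with the declarative file from the docs. The `KONG_*` environment variables follow Kong’s documented convention (`KONG_DATABASE=off` enables DB-less mode), but pin a version and verify settings against the docs for that release:

```yaml
services:
  kong:
    image: kong:3.9                            # pin whichever version you standardize on
    environment:
      KONG_DATABASE: "off"                     # DB-less mode
      KONG_DECLARATIVE_CONFIG: /kong/kong.yml  # the declarative config file
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
    ports:
      - "8000:8000"   # proxy traffic
      - "8001:8001"   # Admin API
    volumes:
      - ./kong.yml:/kong/kong.yml:ro
```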


Pros and Cons

Pros

  • Apache 2.0 license. No commercial use restrictions, no “fair-code” limitations. You can embed it, resell it, fork it — no legal conversation needed [merged profile].
  • NGINX performance baseline. 50,000+ transactions per second per node is not a marketing claim — it’s a consequence of building on NGINX, which runs a significant fraction of the world’s web traffic [website scrape].
  • Genuine plugin ecosystem. The Plugin Development Kit means you can write custom plugins in Lua or Go, and the community has done so extensively. This is real extensibility, not “open an issue and wait” extensibility [2][5].
  • Kubernetes-native with official Ingress Controller. For Kubernetes shops, this is table stakes — and Kong delivers it properly rather than as an afterthought [website scrape].
  • AI gateway built in. Multi-LLM routing, semantic caching, prompt injection detection, MCP governance — all in the same tool as your API gateway, if you need it [README].
  • 43,000+ GitHub stars. Not a niche project. A large community means answers exist on Stack Overflow, Discord, and the official forum before you have to file a support ticket.
  • DB-less declarative mode. GitOps-friendly deployment where your entire gateway config is version-controlled YAML [2].
  • Multi-cloud dedicated gateway. Kong’s Dedicated Cloud Gateways now run on AWS, Azure, and GCP with 99.95% SLA — the only API management vendor to support all three CSPs in this model [1].

Cons

  • Configuration complexity. This is the most consistent criticism across the comparison article [5] and implied throughout the documentation. Kong rewards engineers who invest time to understand it and punishes everyone else.
  • Konnect pricing is expensive at scale. The per-service per-month model ($105/service) adds up quickly for organizations with many microservices. Not startup-friendly in the managed tier [3].
  • Not built for non-technical founders. If you don’t know what a reverse proxy is, Kong is not the right starting point. This is developer infrastructure, not a no-code tool.
  • The AI features are new. The LLM and MCP capabilities were added recently and are evolving fast. Production stability for the AI gateway layer is less battle-tested than the core HTTP routing functionality [README].
  • No per-request pricing visibility. The Konnect pricing model can surprise organizations that don’t model their API call volume upfront. At high scale, request fees dominate over service fees [3].
  • Learning curve for upgrades. Kong releases are frequent. Self-hosted teams own their own upgrade path, including potential breaking changes in plugin APIs and configuration schemas.

Who should use this / who shouldn’t

Use Kong Gateway if:

  • You’re an engineering team building microservices architecture and need a serious API gateway with real plugin extensibility.
  • You’re running Kubernetes and want your gateway configured the same way as everything else — via CRDs and declarative config.
  • You’re routing traffic to multiple LLM providers and want centralized rate limiting, semantic caching, and observability for AI calls.
  • You need Apache 2.0 licensing — specifically if you’re building a product that will embed or redistribute the gateway.
  • You have the technical capacity to deploy and maintain infrastructure (or you’re evaluating Konnect’s managed tier with eyes open about the pricing model).

Skip it (use Traefik or Caddy) if:

  • You want a simpler reverse proxy with automatic HTTPS and basic routing — Kong’s overhead is unnecessary for small deployments with straightforward needs.

Skip it (use AWS/Azure API Gateway) if:

  • You’re all-in on one cloud vendor, don’t need multi-cloud, and want zero operational overhead. The managed CSP gateways are simpler if you accept the lock-in.

Skip it (use Tyk) if:

  • You want a simpler open-source API gateway for a smaller team and don’t need Kong’s enterprise scale or plugin depth.

Skip it entirely if:

  • You’re a non-technical founder who wants to connect apps without writing infrastructure. You want n8n or Activepieces, not an API gateway.
  • Your team has never managed a database or Linux server. Kong has a learning curve that will eat weeks of productive time if you’re starting from zero.

Alternatives worth considering

From the DreamFactory comparison [5] and broader context:

  • Traefik — simpler open-source reverse proxy with auto-discovery for Docker and Kubernetes. Better for small teams; lacks Kong’s plugin depth.
  • Tyk — open-source API gateway with commercial tier. Simpler to configure than Kong, smaller plugin ecosystem, competitive on features for small-to-mid deployments.
  • AWS API Gateway — zero operational overhead if you’re on AWS. Per-call pricing at scale; vendor lock-in is the trade-off.
  • Azure API Management — strong for Azure-native shops and .NET teams. Built-in developer portal and policy editor. Expensive in the higher tiers.
  • MuleSoft — full API lifecycle platform (design, portal, analytics, gateway). Enterprise pricing to match. The right tool when you need more than a gateway [5].
  • WSO2 API Manager — open-source API management platform with a built-in developer portal. More complex than Kong, targets similar enterprise use cases [5].
  • Envoy — the underlying proxy that many service meshes (Istio, etc.) use. More raw power, much steeper learning curve, not a drop-in replacement for Kong but relevant if you’re going deep on service mesh.
  • DreamFactory — positions itself as an API generation and management platform rather than a gateway. Different use case: auto-generating REST APIs from databases, not proxying existing ones [5].

For non-technical founders looking for API infrastructure, this list is mostly irrelevant — what you likely need is documentation and a deployed service, not a gateway. For engineering teams evaluating production API infrastructure, the realistic shortlist is Kong vs Tyk vs managed CSP gateway, with the choice turning on plugin requirements, multi-cloud needs, and budget for managed services.


Bottom line

Kong Gateway is professional-grade API infrastructure — the kind of thing platform engineering teams deploy, not the kind of thing a solo founder sets up in an afternoon. The NGINX performance foundation, Apache 2.0 license, and deep plugin ecosystem are genuine strengths, and the recent expansion into LLM routing and MCP governance gives it a credible story for teams already building AI-native applications. The friction is real: configuration complexity, expensive managed tier pricing at scale, and a learning curve that assumes engineering proficiency.

For the right audience — engineering teams running microservices who need serious routing, authentication, and observability without per-request billing — the self-hosted edition on their own infrastructure is one of the best options available at no licensing cost. For everyone else, the tooling probably overshoots the need.


Sources

  1. Michael Field, Kong Inc., “Kong’s Dedicated Cloud Gateways: A Deep Dive” (June 19, 2025). https://konghq.com/blog/product-releases/dedicated-cloud-gateways-deep-dive
  2. Kong Documentation, “Kong Gateway”. https://developer.konghq.com/gateway/
  3. SaaSworthy, “Kong — Features, Reviews & Pricing (April 2026)”. https://www.saasworthy.com/product/kong
  4. Jakub Lewkowicz, SD Times, “Kong acquires open-source REST client provider Insomnia” (October 2, 2019). https://sdtimes.com/api/kong-acquires-open-source-rest-client-provider-insomnia/
  5. Spencer Nguyen, DreamFactory Blog, “Best Kong Alternatives for 2024” (January 2, 2024). https://blog.dreamfactory.com/best-kong-alternatives
