Perplexica
An AI-powered search engine that is an open-source alternative to Perplexity AI.
Open-source AI search with cited sources, honestly reviewed. No marketing fluff, just what you get when you run it yourself.
TL;DR
- What it is: Open-source (MIT) AI-powered answering engine — think Perplexity AI, but the search runs on your own hardware through SearxNG, and your queries never leave your server [1].
- Who it’s for: Privacy-conscious founders and researchers who want Perplexity-style cited answers without paying $20/month or handing their search history to a cloud provider [1][3].
- Cost savings: Perplexity Pro runs $20/month. Perplexica self-hosted runs on a $5–10/month VPS with unlimited queries [1].
- Key strength: Clean Perplexity-like UI, genuine source citations, and the ability to plug in local LLMs via Ollama — meaning the entire stack, including inference, can run air-gapped [1][3].
- Key weakness: Setup requires combining Perplexica with SearxNG (and optionally Ollama), which means three moving parts instead of one. Search quality depends heavily on SearxNG configuration and which LLM you choose. Not a plug-and-play product for non-technical users [1].
- One notable detail: As of early 2026, the project has been rebranded from “Perplexica” to “Vane” in its README and Docker images, while the GitHub repository URL (github.com/itzcrazykns/perplexica) remains unchanged. If you’re reading documentation and seeing “Vane,” that’s the same project.
What is Perplexica
Perplexica is a self-hosted AI search engine. You type a question, it searches the web through SearxNG (an open-source meta-search engine that aggregates up to 245 search services), optionally reranks results using embedding similarity, and then hands the top sources to an LLM — which synthesizes an answer with numbered citations pointing back to the original pages [1].
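The rerank-then-synthesize pipeline described above can be sketched in a few lines. This is a toy illustration, not Perplexica's actual code: the embeddings and result list are stand-ins for what SearxNG and an embedding model would return.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rerank(query_embedding, results, top_k=3):
    """Order search results by embedding similarity to the query,
    keeping only the top_k to hand to the LLM for synthesis."""
    scored = sorted(
        results,
        key=lambda r: cosine_similarity(query_embedding, r["embedding"]),
        reverse=True,
    )
    return scored[:top_k]

# Toy data: pretend these came from SearxNG plus an embedding model.
query_emb = [0.9, 0.1, 0.0]
results = [
    {"url": "https://example.com/a", "embedding": [0.8, 0.2, 0.1]},
    {"url": "https://example.com/b", "embedding": [0.1, 0.9, 0.3]},
    {"url": "https://example.com/c", "embedding": [0.9, 0.0, 0.1]},
]
top = rerank(query_emb, results, top_k=2)
# The LLM prompt then cites these top sources as [1], [2], ...
```

The key design point is that the LLM never sees the full result set, only the handful of sources that survive the rerank, which is also what makes the numbered citations traceable.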
The UX is a deliberate clone of Perplexity AI. That’s not a criticism — Perplexity’s interface is genuinely good, and Perplexica’s take on it is clean enough that the comparison lands. The difference is where the data flows. On Perplexity, your query goes to their servers, their search index, their LLM. On Perplexica, your query goes to SearxNG running on your VPS, with inference via whatever model you configure.
The project sits at 33,237 GitHub stars with an MIT license [merged profile]. It supports Ollama (local), OpenAI, Anthropic Claude, Google Gemini, and Groq as LLM backends — you pick one or swap between them through the settings UI [1].
The recent rebrand to “Vane” appears to be in progress: the README, Docker image (itzcrazykns1337/vane), and Docker Hub page all use “Vane,” but the GitHub repo slug, original name recognition, and most third-party coverage still say “Perplexica.” Both names refer to the same codebase.
Why people choose it
The case for Perplexica is simple: Perplexity AI’s UI is excellent, but at $20/month you’re paying for convenience, not capability. The underlying mechanism — search results fed into an LLM for synthesis — is reproducible at home. Perplexica is that reproduction, with MIT source code you can inspect and modify [1][3].
The XDA Developers roundup of local LLM apps [3] includes Perplexica specifically because it solves a friction point that comes up constantly in self-hosted AI setups: you can run Ollama locally, but Ollama alone doesn’t know what’s on the internet today. Perplexica adds the search layer that makes your local model actually useful for current-events queries, research, or anything that requires real-world grounding [3].
The Agent Native review [1] frames it as a “privacy-respecting Perplexity alternative” with emphasis on the citation chain: because every answer shows the sources it drew from, you can verify claims rather than trusting the LLM to summarize accurately. That matters when you’re doing actual research rather than just chatting.
Where reviewers hedge: the setup is not trivial, and search quality varies based on how well you’ve configured SearxNG and which LLM you’ve pointed at it. A poorly tuned SearxNG instance (bad proxies, blocked search engines) produces worse results than Perplexity. A well-tuned one with a capable model (GPT-4o, Claude Sonnet) gets surprisingly close [1].
Features
Based on the README and third-party coverage:
Search and AI core:
- Web search via SearxNG — aggregates multiple engines, no direct identity exposure [1]
- Results reranked via embedding similarity before LLM synthesis [1]
- Three search modes: Speed (faster, cheaper), Balanced, and Quality (deeper, slower) [1]
- Source citations in every answer with links back to originals [1]
- Configurable LLM backend: Ollama (local), OpenAI, Claude, Gemini, Groq [1]
Source type selection:
- Web (general results)
- Discussions (Reddit-style forums)
- Academic papers
- Domain-specific search — restrict results to a single site [1]
UI features:
- Widgets for quick lookups: weather, calculations, stock prices [1][README]
- Image and video search alongside text results [1][README]
- Smart query suggestions as you type [1][README]
- Discover tab for browsing trending content without an explicit search [README]
- Full local search history (stored on your server, not in the cloud) [1][README]
File handling:
- Document uploads — PDFs, text files, images — with Q&A against the content [1][README]
Infrastructure:
- Docker and Docker Compose deployment [merged profile]
- Single-container option that bundles SearxNG (docker run -d -p 3000:3000 -v vane-data:/home/vane/data --name vane itzcrazykns1337/vane:latest) [README]
- Settings UI for API keys and model configuration — no manual config file editing required [README]
What it doesn’t do:
- No multi-user support or access control (it’s designed as a personal or team-internal tool)
- No Slack or calendar integrations — it’s a search tool, not an automation platform
- No memory across sessions beyond saved history
Pricing: SaaS vs self-hosted math
Perplexity AI (the product Perplexica replaces):
- Free tier: limited Pro searches per day (roughly 5)
- Pro: $20/month — 300+ Pro searches/day, GPT-4o and Claude access, file uploads, image generation
- Enterprise: custom pricing
Perplexica self-hosted:
- Software: $0 (MIT) [merged profile]
- VPS to run it: $5–10/month on Hetzner, Contabo, or DigitalOcean
- LLM costs: $0 if you use Ollama locally; pay-per-token if you route through OpenAI or Claude
The math for a heavy research user:
If you’re doing 50+ serious research queries a day — the kind where you actually read the sources — Perplexity Pro at $20/month is the minimum. If you add OpenAI API costs for custom workflows, you’re past $30/month easily. Perplexica on a $6 Hetzner VPS with Ollama running a local model (Mistral 7B or Llama 3.1 8B) costs $6/month total, with unlimited queries. That’s $168/year saved at the low end.
If you use a cloud LLM backend (OpenAI or Claude) through Perplexica instead of Ollama, add API costs — but at 50 research queries per day with reasonable answer lengths, you’re likely still under $10/month in API costs, keeping you under Perplexity’s price [1].
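To make that estimate concrete, here is a back-of-envelope sketch. The token counts and per-million-token rates are assumed mid-tier values for illustration, not any provider's quoted prices; check your provider's pricing page before relying on them.

```python
# Back-of-envelope API cost estimate for routing Perplexica queries
# through a cloud LLM. All inputs below are illustrative assumptions.
QUERIES_PER_DAY = 50
INPUT_TOKENS_PER_QUERY = 4_000   # search snippets + question (assumed)
OUTPUT_TOKENS_PER_QUERY = 800    # synthesized answer (assumed)
PRICE_IN_PER_M = 0.50            # $/1M input tokens (assumed mid-tier rate)
PRICE_OUT_PER_M = 2.00           # $/1M output tokens (assumed mid-tier rate)

daily_cost = (
    QUERIES_PER_DAY * INPUT_TOKENS_PER_QUERY / 1e6 * PRICE_IN_PER_M
    + QUERIES_PER_DAY * OUTPUT_TOKENS_PER_QUERY / 1e6 * PRICE_OUT_PER_M
)
monthly_cost = daily_cost * 30
perplexity_pro = 20.00
print(f"Estimated API cost: ${monthly_cost:.2f}/month "
      f"vs Perplexity Pro at ${perplexity_pro:.2f}/month")
```

Under these assumptions the API bill lands around $5/month; a pricier frontier model or longer answers can push it higher, which is where the Ollama-only path keeps the total fixed.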
The caveat: Perplexica doesn’t include Perplexity’s proprietary search index, which is tuned specifically for AI-assisted research. SearxNG is a meta-search engine that’s good but not identical. Data on comparative search quality is not available from the reviewed sources.
Deployment reality check
Perplexica is one of the more approachable self-hosted AI tools, but it’s not a single-binary install. You’re combining at minimum two services: Perplexica itself and SearxNG.
The Docker route (recommended):
The bundled Docker image now ships with SearxNG included:
docker run -d -p 3000:3000 -v vane-data:/home/vane/data --name vane itzcrazykns1337/vane:latest
This gets you a running instance at http://localhost:3000 where you configure API keys and model settings through the UI — no manual config file editing [README]. For a technical user, this is a 15-minute setup.
For local LLM support (Ollama):
Connecting Perplexica to a local Ollama instance requires that Ollama is accessible from within Docker, which means setting OLLAMA_HOST=0.0.0.0 in the Ollama systemd service config (Linux) or equivalent on Mac/Windows [README]. It’s not complex, but it’s a step that trips up first-timers who assume Ollama and Perplexica will find each other automatically.
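For the Linux case, a minimal sketch of that step is a systemd drop-in for the Ollama service (paths assume Ollama's standard systemd install; adjust if yours differs):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Create with: sudo systemctl edit ollama
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After `sudo systemctl daemon-reload && sudo systemctl restart ollama`, the Perplexica container can reach Ollama on port 11434 via the host's IP (or via host.docker.internal on Docker Desktop).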
What you actually need:
- A Linux VPS with 2GB+ RAM (4GB+ recommended if running Ollama on the same machine)
- Docker installed
- A domain and reverse proxy (Caddy or nginx) for HTTPS if you’re exposing it beyond localhost
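For the reverse-proxy step, a minimal Caddy config looks like this (search.example.com is a placeholder for your own domain; Caddy provisions and renews the TLS certificate automatically):

```
# Caddyfile
search.example.com {
    reverse_proxy localhost:3000
}
```

Since Perplexica ships with no login wall, adding Caddy's basic_auth directive to the same site block is a common way to gate access before exposing the instance publicly.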
What can go sideways:
- SearxNG search quality depends on which engines are configured and whether your server’s IP is flagged. Residential IPs or VPNs typically produce better results than datacenter IPs. This is a real limitation if you’re deploying on a Hetzner or DigitalOcean node [1].
- Ollama on the same VPS as Perplexica means you need a GPU-equipped server or tolerance for slow CPU inference. For small models (7B, 8B), CPU inference is usable but not fast.
- The project is maintained by a single primary developer — it’s a community project with 33K stars but not backed by a company or large organization [merged profile]. That means update cadence and issue response times vary.
Realistic time estimate: 30–60 minutes for a technical user to reach a working HTTPS instance. 2–4 hours for someone who’s set up Docker before but hasn’t done reverse proxies. For a completely non-technical founder: not self-service without a technical resource.
Pros and Cons
Pros
- MIT-licensed. Full rights to self-host, modify, and redistribute without any commercial restrictions [merged profile]. Not “open core,” not “fair-code” — actual MIT.
- Full privacy stack possible. Ollama (local LLM) + SearxNG (private search) + Perplexica on your VPS = zero queries leaving your infrastructure [1][3]. This is the strongest argument versus Perplexity AI.
- Cited sources on every answer. Unlike a plain chatbot, every response links back to the pages it drew from — you can verify rather than trust [1].
- 33K+ GitHub stars. Not a weekend project — this has real community validation [merged profile].
- Multi-provider flexibility. Same UI, swap between Ollama/OpenAI/Claude/Gemini/Groq depending on the task [1].
- Multiple search modes. Speed vs. Quality tradeoff is explicit and user-controlled [1].
- File upload Q&A. Ask questions about PDFs and documents — useful for research workflows [1][README].
- No per-query pricing. Self-hosted, so query volume is limited by hardware, not a billing counter.
Cons
- SearxNG dependency is a genuine setup hurdle. It’s not optional — without SearxNG, there’s no web search. SearxNG itself requires configuration to produce good results, and search quality on datacenter IPs is often worse than on residential connections [1].
- No user management. No login wall, no multi-user accounts, no access controls. Fine for personal use, awkward for team deployment without a reverse-proxy auth layer.
- Single-maintainer project risk. The project is maintained primarily by one developer. Commit cadence is active but there’s no company behind it, no SLA, no guaranteed long-term support [merged profile].
- Search quality is not Perplexity-level. Perplexity’s proprietary index is specifically tuned for AI-assisted research. SearxNG is general-purpose — it’s good, not identical. The LLM backend has a larger effect on answer quality than the search layer.
- Local LLM + VPS combo requires hardware thought. Running Ollama on the same budget VPS that hosts Perplexica produces slow inference. You either need a separate Ollama instance, a GPU-equipped server, or you accept cloud API costs [1][3].
- Active rebrand in progress. The “Perplexica” to “Vane” rename creates confusion in documentation, Docker images, and community discussion. Third-party guides may reference old image names.
- No REST API documented. No programmatic access — you use it through the UI or not at all.
Who should use this / who shouldn’t
Use Perplexica if:
- You’re paying for Perplexity Pro ($20/month) and you do enough searches that self-hosting makes the math work.
- Privacy is a genuine requirement — research queries contain sensitive business information you don’t want indexed by a third-party service.
- You already run Ollama locally and want to add a web-search layer that uses your local model for synthesis.
- You’re comfortable with Docker and basic Linux administration, or you have someone technical who can do the initial setup.
Skip it (stay on Perplexity) if:
- You value search quality over privacy and cost. Perplexity’s proprietary index is better for fast, accurate research answers.
- You’re doing fewer than ~15 searches per day — at that volume, $20/month is reasonable and the setup investment doesn’t pay off.
- You need reliable uptime without maintenance effort. Self-hosted means you handle updates, SearxNG configuration drift, and the occasional 2am SearxNG outage.
Skip it (use a plain LLM + web search plugin) if:
- You already use ChatGPT Plus or Claude Pro and just want occasional web search — those products include web search natively without running your own infrastructure.
Alternatives worth considering
- Perplexity AI — the product Perplexica clones. Better proprietary search index, no setup required, $20/month. The default choice for non-technical users who don’t have strong privacy requirements.
- SearxNG alone — if you just want private search without AI synthesis, SearxNG itself is the right tool. It’s simpler to deploy and has no LLM dependency.
- Open WebUI + Ollama — if what you want is a ChatGPT-style interface to local models with optional web search, Open WebUI is more mature, has multi-user support, and a broader plugin ecosystem.
- AnythingLLM — better fit if you want document Q&A as the primary use case. Supports local LLMs, vector stores, and has a more polished multi-user experience.
- Khoj — another AI search/assistant hybrid for personal use. Stronger on personal knowledge management (your notes, emails, documents) than on real-time web search.
- Phind — developer-focused AI search with web access, cloud-based. No self-hosted option but much better coding-specific search quality.
The practical shortlist for someone escaping Perplexity’s bill: Perplexica vs Open WebUI. Pick Perplexica if you specifically want the Perplexity UX pattern with citations. Pick Open WebUI if you want a more general-purpose local LLM interface with web search as one of many tools.
Bottom line
Perplexica is a genuine, working open-source replacement for Perplexity’s core value proposition: ask a question, get a synthesized answer with numbered citations linking to real sources. The MIT license, 33K stars, and multiple LLM backend options make it a credible long-term choice for privacy-first research setups [merged profile][1]. The setup is not trivial — you’re assembling SearxNG plus an LLM provider plus Perplexica itself — but the Docker bundled image has reduced that friction significantly. The honest limitation is that SearxNG’s search quality on typical datacenter VPS IPs is below Perplexity’s proprietary index, and the project’s single-maintainer structure is a real longevity consideration. For a privacy-conscious founder or researcher who does serious daily research and currently pays $20/month for Perplexity Pro, the math and the privacy upside are clear. For someone who does occasional searching and doesn’t care where their queries land, Perplexity itself is the better experience.
If the Docker setup is the blocker, that’s exactly what upready.dev deploys for clients. One-time fee, done, you own the infrastructure.
Sources
1. Agent Native, Medium — “Open Source Perplexity: Perplexica is Self-Hosted AI Search With Citations” (January 28, 2026). https://agentnativedev.medium.com/open-source-perplexity-perplexica-is-self-hosted-ai-search-with-citations-8eed0fb2f092
2. Ernie Smith, Tedium — “Self-Hosting Tools: Still Worth Trying In 2026?” (March 28, 2026). https://tedium.co/2026/03/28/self-hosting-platform-tools-guide/
3. Ayush Pande, XDA Developers — “I use my local LLMs with these 6 obscure self-hosted apps” (March 5, 2026). https://www.xda-developers.com/i-use-my-local-llms-with-these-6-obscure-self-hosted-apps/
Primary sources:
- GitHub repository and README: https://github.com/itzcrazykns/perplexica (33,237 stars, MIT license)
- Docker Hub: https://hub.docker.com/r/itzcrazykns1337/vane