unsubbed.co

Amical

Amical is a self-hosted AI dictation app.

Open-source AI dictation, honestly reviewed. Built on Whisper and Ollama, runs on your machine, costs nothing.

TL;DR

  • What it is: Open-source (MIT) AI dictation and note-taking desktop app that runs entirely on your machine — no cloud, no subscription, no data leaving your computer [README][4].
  • Who it’s for: Founders, writers, and professionals who want voice-to-text without paying $10–20/month to Wispr Flow or Superwhisper, and who care about where their words go [4][README].
  • Cost savings: Wispr Flow costs ~$10–20/month, Superwhisper ~$8–15/month. Amical is MIT-licensed and free. If you’re dictating regularly, that’s $100–240/year back in your pocket [4].
  • Key strength: Context-aware formatting — it detects the active app and adjusts tone automatically. Drafting an email gets formal output; chatting on Discord gets casual output. This is the feature that sets it apart from dumb transcription tools [README][website].
  • Key weakness: A lot of the most interesting features — meeting transcription, MCP integration for voice-controlled app actions — are still planned or in progress, not shipped. You’re buying into a roadmap as much as a product [README].
  • Stars: ~1,100 on GitHub (young project, ~1 year old) [4].

What is Amical

Amical is a desktop application for macOS and Windows that turns your voice into text using OpenAI’s Whisper model for speech recognition and Ollama for local LLM processing. Everything runs on your machine. Your words don’t touch a third-party server unless you explicitly choose a cloud provider [README].

The pitch is simple: dictate anywhere on your computer, and Amical formats the result appropriately for wherever you’re typing. An email draft comes out professional. A Slack message comes out casual. A code comment comes out terse. The app watches the active window and adjusts accordingly [website][README].
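The mechanism behind this can be sketched roughly: detect the foreground app, pick a matching tone instruction, and hand it to the LLM post-processing step. A minimal illustration follows — the app names and prompt strings are hypothetical, not Amical's actual internals:

```python
# Hypothetical sketch of context-aware formatting: map the detected
# foreground app to a tone instruction for LLM post-processing.
# App names and prompt text are illustrative assumptions.

TONE_PROMPTS = {
    "Mail": "Rewrite the transcript as a professional email draft.",
    "Slack": "Rewrite the transcript as a casual chat message.",
    "Code": "Rewrite the transcript as a terse code comment.",
}

DEFAULT_PROMPT = "Clean up the transcript; fix grammar and punctuation."

def formatting_prompt(active_app: str, transcript: str) -> str:
    """Build the LLM instruction for the detected foreground app."""
    instruction = TONE_PROMPTS.get(active_app, DEFAULT_PROMPT)
    return f"{instruction}\n\nTranscript: {transcript}"

print(formatting_prompt("Slack", "hey can we push the launch to friday"))
```

The interesting engineering is in the detection and the prompt tuning, but the shape of the feature is a lookup plus an LLM call.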

The project is built on Electron and Next.js, licensed MIT, and available via Homebrew (brew install --cask amical) or direct download from the GitHub releases page [README]. As of this review it sits at roughly 1,100 GitHub stars with about 95 forks — a small but active project less than a year old with commits landing as recently as hours ago [4].

The closest proprietary analogues are Wispr Flow and Superwhisper, both paid macOS dictation tools. Openalternative.co lists Amical as a direct open-source alternative to those two, plus Granola and Otter.ai [4].


Why people choose it

There isn’t a large body of third-party reviews for Amical yet — the project is too young and too niche for the major review sites. What does exist is the openalternative.co listing [4] and the product’s own positioning against the competition.

The core argument is straightforward: the paid dictation tools charge recurring fees for something that open models can do locally for free. Whisper is accurate, fast, and free. Ollama runs local LLMs without API costs. Amical wraps them into a polished desktop experience with a floating widget and hotkey control [README][4].

Versus Wispr Flow. Wispr Flow is the current leader in the “smart dictation for Mac” category — polished, fast, context-aware. It also costs ~$10–20/month and routes your voice through their servers. For a founder watching SaaS spend, that’s a subscription that’s easy to cut if the open-source alternative is good enough [4].

Versus Superwhisper. Superwhisper is more technical and customizable than Wispr Flow, also macOS-only, also paid. Amical targets the same use case with a local-first philosophy and no recurring cost [4].

Versus Otter.ai and Granola. These are meeting-focused transcription tools. Amical’s current strength is in-the-moment dictation across apps, not post-meeting summaries. Meeting transcription is on the roadmap but not shipped [README][4].

Versus native OS dictation. The website puts it plainly: native Mac and Windows dictation is “basic speech recognition with limited accuracy and no context awareness.” Amical’s Whisper-based engine plus LLM post-processing is meaningfully better for professional use [website].

The privacy angle is the other draw. If you’re dictating product strategy, client notes, or anything sensitive, routing that through a third-party SaaS is a trust decision you’re making by default. Local-first removes that decision entirely [README][4].


Features

Based on the README’s feature status markers (✔ = done, ◑ = in progress, ◯ = planned):

Shipped:

  • Whisper-based speech-to-text with AI accuracy enhancement [README]
  • Context-aware formatting — detects active app, adjusts tone automatically [README][website]
  • Floating widget with hotkey-triggered start/stop [README]
  • Custom vocabulary — teach it your terminology, jargon, proper nouns [website]
  • Custom hotkeys and voice macros for workflow shortcuts [README][website]
  • Multi-language support — 100+ languages per the website [website]
  • Local model support via Ollama — runs entirely offline [README]
  • Cloud model support (OpenAI, OpenRouter) for users who want cloud accuracy [website]
  • Smart formatting and autocorrect — grammar fixes, pronoun corrections [website]

In progress:

  • Smart voice notes — summaries, task lists, structured notes from voice [README]

Planned (not yet available):

  • MCP integration — voice commands that control your apps (“send a message to Jane on WhatsApp”) [README]
  • Real-time meeting transcription including system audio capture [README]

The MCP feature is the most ambitious item on the roadmap. If it ships, Amical becomes a voice-controlled automation layer on top of any app that exposes an MCP server — which is an interesting future. Right now it doesn’t exist [README].

One meaningful technical detail: the tech stack is Electron + Next.js + TypeScript + Whisper + Ollama. This means the app has a larger memory footprint than a native app, which is worth knowing if you’re on a machine with limited RAM [README].


Pricing: SaaS vs self-hosted math

Amical is free. MIT license, no subscription, no usage caps, no cloud dependency [README][4].

What you pay for instead:

  • Hardware to run it (your existing Mac or Windows machine)
  • Ollama running locally (free)
  • Whisper running locally (free)
  • Optionally: an OpenAI API key if you want cloud models instead of local ones

Competitor pricing for comparison:

  • Wispr Flow: ~$10–20/month (subscription, cloud-based)
  • Superwhisper: ~$8–15/month (subscription or one-time)
  • Otter.ai: free tier (limited) + $10–17/month for Pro
  • Granola: ~$10/month
  • Amical: $0

Over a year, replacing Wispr Flow with Amical saves roughly $120–240. That’s not life-changing money, but it’s also the kind of recurring SaaS line item that compounds across the 15 other subscriptions a typical founder is running [4].
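The back-of-envelope math, using the subscription ranges quoted above (approximate; check current pricing):

```python
# Yearly cost of the paid alternatives, from the monthly estimates
# quoted above. Prices are approximations, not official figures.
wispr_monthly = (10, 20)        # USD/month, low and high estimates
superwhisper_monthly = (8, 15)  # USD/month

def yearly_range(monthly):
    """Annualize a (low, high) monthly price pair."""
    low, high = monthly
    return low * 12, high * 12

print("Wispr Flow:", yearly_range(wispr_monthly))          # (120, 240)
print("Superwhisper:", yearly_range(superwhisper_monthly))  # (96, 180)
```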

The catch: you need a machine capable of running Whisper and an Ollama model locally without grinding to a halt. On Apple Silicon (M1/M2/M3 Macs), this is a non-issue — Whisper runs fast, and even a 7B parameter Ollama model runs comfortably. On older Intel machines or lower-end Windows hardware, local model performance may degrade. Fallback to cloud models (OpenAI, OpenRouter) remains available but reintroduces cost [README][website].

Cloud model pricing isn’t listed anywhere in the product documentation — costs would depend on your OpenAI or OpenRouter API usage, which varies by dictation volume.
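For a rough sense of scale, here is a sketch of that estimate. The per-minute rate below is an assumption based on OpenAI's published Whisper API pricing at the time of writing — verify current rates before relying on it, and note that LLM post-processing tokens would add to this:

```python
# Rough cloud-cost estimator for the API fallback path.
# ASSUMPTION: $0.006/min, OpenAI's published Whisper API rate at
# the time of writing; check current pricing before relying on it.
WHISPER_RATE_PER_MIN = 0.006  # USD per audio minute

def monthly_cloud_cost(minutes_per_day: float, workdays: int = 22) -> float:
    """Estimate monthly transcription spend for a given dictation volume."""
    return round(minutes_per_day * workdays * WHISPER_RATE_PER_MIN, 2)

# Someone dictating ~30 minutes per workday:
print(monthly_cloud_cost(30))  # 3.96 — under $4/month for transcription alone
```

Even heavy cloud use is cheap relative to the subscriptions above, though it reintroduces the privacy tradeoff.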


Deployment reality check

This is a desktop app, not a server. “Deployment” means downloading and running it on your Mac or Windows machine. There’s no Docker, no VPS, no port configuration.

Install on macOS:

brew install --cask amical

Or download directly from GitHub releases. The app handles Ollama model setup in-app — the README explicitly calls out “one click setup of local models in-app” as a feature [README].

What you actually need:

  • macOS or Windows (Linux is not mentioned anywhere)
  • Enough RAM to run a Whisper model and optionally a local LLM — 8GB minimum, 16GB comfortable on Apple Silicon
  • A microphone
  • Optionally: Ollama installed for local LLM features (context-aware formatting)
  • Optionally: an OpenAI or OpenRouter API key for cloud model fallback

What can go sideways:

  • The smart formatting features (context-awareness, tone adaptation) require a local LLM via Ollama. Whisper alone handles transcription, but LLM post-processing is what makes the “context-aware” claim true. If you skip Ollama setup, you get accurate transcription but not intelligent formatting.
  • Electron apps are notoriously RAM-hungry. Running Amical alongside Ollama and your normal browser/app stack will consume meaningful memory on constrained machines.
  • The project is young (~1 year old, ~1,100 stars). Bugs exist. The issue tracker on GitHub is the primary support channel, and response speed depends on a small team [4][README].
  • Linux is absent from all documentation and download pages. If you’re a Linux user, this tool doesn’t exist for you yet.
  • Mobile is beta-access only (iOS and Android) — not generally available [README].
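The transcription-then-formatting split in the first bullet above can be sketched concretely. Ollama exposes a local HTTP API at port 11434; the request below targets its /api/generate endpoint, with the model name and prompt as illustrative assumptions rather than Amical's actual internals:

```python
import json

# Sketch of the two-stage pipeline: Whisper yields a raw transcript,
# then a local LLM (here via Ollama's HTTP API) reformats it.
# Model name and prompt are illustrative assumptions.

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_format_request(transcript: str, model: str = "llama3.2") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": (
            "Fix grammar and punctuation in this dictated text, "
            "keeping the meaning unchanged:\n\n" + transcript
        ),
        "stream": False,
    }

payload = build_format_request("um so lets ship the uh new build tomorrow")
print(json.dumps(payload)[:60])
# Sending it requires a running Ollama instance, e.g.:
#   requests.post(OLLAMA_URL, json=payload).json()["response"]
```

Skip the second stage and you still get Whisper's transcript — which is exactly the degraded-but-functional mode you land in without Ollama installed.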

For a technical user on Apple Silicon: setup time is under 15 minutes. For a non-technical user: the Homebrew install is the hardest part, which isn’t that hard. The bigger friction is understanding that you need Ollama running separately if you want the full context-aware experience.


Pros and cons

Pros

  • MIT license, genuinely free. No subscription, no usage caps, no vendor lock-in. Own your tooling [README][4].
  • Fully local by default. Whisper runs on your machine. Your dictation never leaves unless you opt into cloud models. For anyone dictating sensitive content — strategy, client discussions, medical notes — this matters [README][4].
  • Context-aware formatting is real. Detecting the active app and adjusting tone is a genuine differentiator from dumb transcription tools. Native OS dictation doesn’t do this [website][README].
  • Fast install on macOS. Homebrew one-liner, in-app model setup. Not a painful onboarding [README].
  • Local model flexibility. Ollama means you choose which LLM does the post-processing — swap models based on speed vs. quality tradeoff [website][README].
  • Active development. Recent commits, open issue tracker, Discord community [4][README].

Cons

  • Critical features are still planned. Meeting transcription and MCP/voice-action integration — arguably the two most compelling future features — don’t exist yet [README]. You’re partly buying a roadmap.
  • Windows support is secondary. The project presentation, Homebrew install, and most screenshots skew macOS. Windows support exists but may lag [README].
  • No Linux. Simply absent [README].
  • Mobile is vaporware for most users. “Apply for Mobile Beta” is not the same as a shipping app [README].
  • Small community. ~1,100 stars, ~1 year old. Support depends on a small team and Discord. If you hit an obscure bug, you may wait [4].
  • Electron overhead. Not a native app. Expect higher memory usage than system-native dictation tools [README].
  • No pricing transparency for cloud models. If you fall back to OpenAI or OpenRouter, you’re responsible for monitoring and capping API spend yourself — nothing in the app guards against runaway costs.
  • Custom vocabulary and project-level context not documented clearly. The website advertises vocabulary customization prominently, but setup details are thin [website][4].

Who should use this / who shouldn’t

Use Amical if:

  • You’re paying for Wispr Flow or Superwhisper and want to stop. The core dictation experience is comparable, the price difference is 100% in your favor, and the local-first privacy is a free upgrade.
  • You have a modern Mac (M1 or newer) where local Whisper and Ollama run without friction.
  • Your dictation involves sensitive content — client calls, product strategy, medical or legal notes — and you don’t want it routed through someone else’s server.
  • You’re willing to tolerate the rough edges of a ~1-year-old open-source project in exchange for zero recurring cost.

Wait on Amical (revisit in 6 months) if:

  • Your main use case is meeting transcription. That feature isn’t shipped. Meetily or Otter.ai are better choices today [4].
  • You want voice-controlled app actions (say a phrase, trigger a workflow). MCP integration is on the roadmap but doesn’t exist yet [README].
  • You’re on Windows and want a polished experience comparable to macOS — the Windows build exists but it’s clearly not the primary target.
  • You’re on Linux. There’s no path forward here currently.

Skip it (use a paid tool) if:

  • You need guaranteed reliability for professional transcription — court reporting, medical dictation, client-facing use. A 1,100-star project with no SLA isn’t the right choice.
  • You genuinely can’t or won’t install Homebrew or manage Ollama. The setup is light, but it’s not zero.
  • Your machine is older Intel hardware struggling with local AI workloads. Cloud transcription tools will outperform local Whisper on constrained hardware.

Alternatives worth considering

From the openalternative.co listing and the product’s own competitor positioning [4]:

Proprietary (paid):

  • Wispr Flow — the polished benchmark. Excellent context-awareness, macOS-first, ~$10–20/month, cloud-based.
  • Superwhisper — more customizable than Wispr Flow, also macOS, also paid. Whisper-powered but proprietary.
  • Otter.ai — meeting-focused transcription with team features. Free tier + paid plans. Not dictation-focused.
  • Granola — AI meeting notes, not general dictation. Paid.

Open source:

  • Meetily — 11,169 stars, focused on meeting transcription with local Whisper and summaries. If meeting notes are your primary need, Meetily is more mature for that specific use case [4].
  • Hyprnote — 8,274 stars, AI notepad combining quick notes with meeting transcripts. Broader note-taking scope [4].
  • Minutes — 1,089 stars, local transcription with structured markdown output and Claude integration. Simpler, more focused [4].

The realistic shortlist for most users is Amical vs. Wispr Flow. Pick Amical if privacy and cost matter more than polish and reliability guarantees. Pick Wispr Flow if you need something that just works on day one, everywhere, without thinking about local model setup.


Bottom line

Amical is an honest bet on local AI dictation. It does what it says: turns your voice into well-formatted text using Whisper and a local LLM, detects what app you’re in, and adjusts the output accordingly — all on your machine, no subscription. For anyone paying $10–20/month to Wispr Flow or Superwhisper and running a modern Mac, the value case is obvious.

The honest caveat is that Amical is young. The most ambitious features — meeting transcription, voice-controlled app actions via MCP — exist on a roadmap, not in the app. What’s shipped is solid for daily dictation. What’s planned is exciting. Buy the present version with eyes open about the gap between “done” and “planned.”

If the local setup feels like friction you don’t want, that’s exactly the kind of deployment unsubbed.co’s parent studio upready.dev handles — one-time, done.


Sources

  1. OpenAlternative — Amical listing (stars, forks, alternatives, feature overview). https://openalternative.co/amical
  2. Amical GitHub README (features, tech stack, install instructions, license). https://github.com/amicalhq/amical
  3. Amical official website (homepage, feature descriptions, competitor comparison). https://amical.ai