The power of Whisper — without the cloud.
Whisper Talk wraps OpenAI's Whisper speech recognition model in a native macOS app. No Python, no API keys, no cloud. Just hold a hotkey and speak: Whisper transcribes locally, and the text appears in your app.
Get Whisper Talk · $19
Uses whisper.cpp via Rust N-API bindings. The same OpenAI Whisper models, optimized for real-time local inference on Mac hardware.
On Apple Silicon Macs, Whisper Talk uses the Metal GPU for faster transcription; Intel Macs fall back to optimized CPU inference.
Four model sizes: Tiny (75MB), Base (142MB), Small (466MB), Medium (1.5GB). Start with Base for great accuracy; upgrade to Medium for near-human performance.
Everything runs locally. No OpenAI API key, no usage fees, no rate limits. Your Mac is the server.
Whisper artifacts like [BLANK_AUDIO] and [MUSIC] are automatically stripped. Optional LLM correction cleans up grammar and punctuation.
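As a rough sketch of what that cleanup step looks like (the exact token list and implementation are assumptions, not Whisper Talk's actual code), stripping Whisper's non-speech markers is a simple pattern replacement:

```typescript
// Non-speech tokens Whisper is known to emit; the exact list an app
// filters is an implementation choice.
const ARTIFACT_PATTERN = /\[(?:BLANK_AUDIO|MUSIC|NOISE|INAUDIBLE)\]/g;

function stripArtifacts(transcript: string): string {
  return transcript
    .replace(ARTIFACT_PATTERN, "")   // drop bracketed artifact tokens
    .replace(/\s{2,}/g, " ")         // collapse whitespace left behind
    .trim();
}

console.log(stripArtifacts("[BLANK_AUDIO] Hello world [MUSIC]"));
// prints "Hello world"
```

The optional LLM pass runs after this, so the model only ever sees clean speech text.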
Transcribed text lands directly in your active app via macOS accessibility APIs. Works in any text field.
Is this the same Whisper that powers OpenAI's cloud transcription?
Yes. Whisper Talk uses the same open-source Whisper models (ggml format) that OpenAI released, running locally via whisper.cpp. Same accuracy, no cloud dependency.
Do I need an OpenAI API key?
No. The Whisper model runs locally on your Mac. No API key or OpenAI account needed. The optional cloud AI correction can use OpenAI, but it's not required.
Which model should I use?
The Base model (142MB) is the sweet spot for most users — great accuracy with fast transcription. Upgrade to Medium (1.5GB) if you need near-human accuracy and have 16GB+ RAM.
Why not just run whisper.cpp myself?
Whisper Talk handles audio capture, model loading, transcription, text cleanup, and auto-pasting — all in a native Mac app with menu bar integration. No Python setup, no terminal commands.
No API keys · No Python · Just a Mac app · $19
Buy Whisper Talk for $19