Show HN: Moonshine Open-Weights STT models – higher accuracy than WhisperLargev3

github.com

316 points by petewarden 11 days ago · 91 comments

I wanted to share our new speech-to-text models, and the library to use them effectively. We're a small startup (six people, sub-$100k monthly GPU budget), so I'm proud of the work the team has done to create streaming STT models with lower word-error rates than OpenAI's largest Whisper model. Admittedly Large v3 is a couple of years old, but we're near the top of the HF OpenASR leaderboard, even up against Nvidia's Parakeet family. Anyway, I'd love to get feedback on the models and software, and hear about what people might build with it.

Karrot_Kream 11 days ago

According to the OpenASR Leaderboard [1], looks like Parakeet V2/V3 and Canary-Qwen (a Qwen finetune) handily beat Moonshine. All 3 models are open, but Parakeet is the smallest of the 3. I use Parakeet V3 with Handy and it works great locally for me.

[1]: https://huggingface.co/spaces/hf-audio/open_asr_leaderboard

  • reitzensteinm 11 days ago

    Parakeet V3 is over twice the parameter count of Moonshine Medium (600m vs 245m), so it's not an apples to apples comparison.

    I'm actually a little surprised they haven't added model size to that chart.

    • bytesandbits 10 days ago

      Parakeet V3 has a much better RTFx than Moonshine; it's not just about parameter count. It runs faster.

      https://huggingface.co/spaces/hf-audio/open_asr_leaderboard

      • SyneRyder 10 days ago

        That was my experience when I tried Moonshine against Parakeet v3 via Handy. Moonshine was noticeably slower on my 2018-era Intel i7 PC, and didn't seem as accurate either. I'm glad it exists, and I like the smaller size on disk (and presumably RAM too). But for my purposes with Handy I think I need the extra speed and accuracy Parakeet v3 is giving me.

      • regularfry 10 days ago

        It is about the parameter numbers if what you care about is edge devices with limited RAM. Beyond a certain size your model just doesn't fit, it doesn't matter how good it is - you still can't run it.

        • bytesandbits 6 days ago

          I am not sure what "edge" device you want to run this on, but you can compress Parakeet to under 500MB of RAM/disk with dynamic quants and on-the-fly dequantization (GGUF or CoreML centroid-palettization style), and retain essentially all accuracy.

          And just to be clear, 500MB is even enough for a Raspberry Pi. Then your problem isn't memory, it's FLOPS. It might run in real time on a RPi 5, since it has around 50 GFLOPS of FP32, i.e. 100 GFLOPS of FP16, so about 20-50 times less than a modern iPhone. I don't think it will quite keep real time, TBF, but close.

          Regardless, this model with such a quantization strategy runs at a 10x+ real-time factor even on 6-year-old iPhones (which you can acquire for under $200), and offline at a reasonable speed essentially anywhere.

          You get the best of both worlds: the accuracy of a whisper transformer at the speed and footprint of a small model.
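          The FLOPS figures in the thread above can be sanity-checked with quick arithmetic; note these are the commenter's estimates, not benchmarks:

```python
# Rough FLOPS arithmetic behind the RPi 5 estimate above.
rpi_fp32_gflops = 50                       # claimed RPi 5 fp32 throughput
rpi_fp16_gflops = rpi_fp32_gflops * 2      # fp16 roughly doubles throughput
# "about 20-50 times less than a modern iPhone" implies this range:
iphone_gflops_range = (rpi_fp16_gflops * 20, rpi_fp16_gflops * 50)
print(rpi_fp16_gflops, iphone_gflops_range)  # 100 (2000, 5000)
```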

    • agentifysh 11 days ago

      So I'm kinda new to this whole parakeet and moonshine stuff, and I'm able to run parakeet on a low end CPU without issues, so I'm curious as to how much that extra savings on parameters is actually gonna translate.

      Oh and I type this in handy with just my voice and parakeet version three, which is absolutely crazy.

  • Imustaskforhelp 10 days ago

    To this comment and all the other comments talking about handy below this comment. I tried handy right now and it's super amazing. I'm speaking this from Handy. This is so cool, man.

    And handy even takes care of all the punctuation, which is really nice.

    Thanks a lot for suggesting it to me. I actually wanted something like this, and I was using something like Google Docs, and it required me to use Chrome to get the speech to text version, and I actually ended up using Orion for that because Orion can actually work as a Chrome for some reason while still having both Firefox and Chrome extension support. So and I had it installed, but yeah.

    This is really amazing and actually a sort of lifesaver actually, so thanks a lot, man.

    Now I can actually just speak and this can convert this to text without having to go through any non-local model or Google Docs or whatever anything else.

    Why is this so good man? It's so good

    man, I actually now am thinking that I had fully maxed out my typing speed at around 100-120 WPM. But this can actually write faster. You know, it's pretty amazing actually.

    Have a nice day, or as I abbreviate it, HAND, smiley face. :D

  • d4rkp4ttern 10 days ago

    Was a big fan of Handy until I found Hex, which, incredibly, has even faster transcription (with Parakeet V3), it’s MacOS only:

    https://github.com/kitlangton/Hex

    • Imustaskforhelp 10 days ago

      I tried this out but the brew command errors out saying it only works on macOS versions older than Sequoia.

      That's unfortunate. I think I can update my version but I have heard some bad things about performance from the newer update from my elder brother.

      • ValentineC 10 days ago

        > I tried this out but the brew command errors out saying it only works on macOS versions older than Sequoia.

        Newer than Sequoia, you mean?

        The brew recipe [1] says macOS >= 15.

        Anyway, I'm on Sequoia — it's mostly better than Ventura, which was what my M2 MacBook Pro came with. I'm holding off upgrading to Tahoe (macOS 26), hoping they fix liquid glAss.

        [1] https://formulae.brew.sh/cask/kitlangton-hex

      • d4rkp4ttern 10 days ago

        works fine on my MacOS w Tahoe

  • theologic 11 days ago

    By the way, I've been using a Whisper model, specifically WhisperX, to do all my work, and for whatever reason I just simply was not familiar with the Handy app. I've now downloaded and used it, and what a great suggestion. Thank you for putting it here, along with the direct link to the leaderboard.

    I can tell that this is now definitely going to be my go-to model and app on all my clients.

    • jasonjmcghee 11 days ago

      I have to ask: I see this Handy app running on Mac, where you hold a key down, and the text doesn't show up until seemingly a while later.

      The one built in is much faster, and you only have to toggle it on.

      Are these so much more accurate? I definitely have to correct stuff, but pretty good experience.

      Also use speech to text on my iphone which seems to be the same accuracy.

  • kardaj 10 days ago

    I'm building a local-first transcription iOS app and have been on Whisper Medium, switching to Parakeet V3 based on this.

    One note for anyone using Handy with codex-cli on macOS: the default "Option + Space" shortcut inserts spaces mid-speech. "Left Ctrl + Fn" works cleanly instead. I'm curious to know which shortcuts you're using.

    • bn-usd-mistake 10 days ago

      I am looking for such an app. Main use case is transcribing voice notes received on Signal while preserving privacy. Please post when you launch :)

  • tuananh 11 days ago

    Handy is amazing. Super quality app.

  • tomr75 11 days ago

    why V3 over V2 (assuming English only)?

  • agentifysh 11 days ago

    hmmm looks like AssemblyAI is still unbeatable here in terms of cost/performance, unless I'm mistaken

    edit: holy shit parakeet is good.... Moonshine impressive too, and it's half the parameter count

    Now if only there was something just as quick as Parakeet v3 for TTS! Then I can talk to codex all day long!!!

    • fittingopposite 10 days ago

      Also running parakeet on my phone with https://github.com/notune/android_transcribe_app

      Very lightweight and good quality

      • agentifysh 10 days ago

        This is actually pretty impressive. What kind of phone are you using? Are you noticing any battery drain or heat? Do you think it's possible to get this working with Flutter on iOS?

        • fittingopposite 10 days ago

          A 2-3 year old Android flagship phone with 8 GB RAM. When I looked for an app for Parakeet, I think I also came across iOS apps; I don't recall the details, since I use Android. It seems light on the phone/battery. I don't observe any drain, but I also only record shorter transcripts at once. Side note: Parakeet is actually pretty nice for doing meetings with oneself. I did that on a computer while driving for an hour (split into several transcript chunks), then processed the raw meeting notes afterwards with an LLM. Effective use of the time in the car...

          • agentifysh 10 days ago

            Thank you for sharing! What about the quality of the transcripts? Is it able to do live streaming?

            • fittingopposite 10 days ago

              Unfortunately, Parakeet doesn't support streaming like Moonshine does (as far as I know). It would be perfect to have something the size of Parakeet that supports streaming. Still hope Nvidia releases a V4 with that feature :) Otherwise, I think STT is basically a solved problem running locally on edge devices.

              • Leftium 8 days ago

                I think there is a streaming version of Parakeet. It is often referred to as Nemotron, though.

                I tried comparing Parakeet streaming with Moonshine streaming. Moonshine is smaller, and I felt it was subjectively faster with about the same level of accuracy.

    • remuskaos 11 days ago

      Parakeet doesn't require a GPU. I'm handily running it on my Ubuntu Linux laptop.

      • namibj 10 days ago

        I'm looking to switch from feeding the default android "recorder" app's .WAV into Gemini 3 Pro (via the app) with (usually just) a `Transcribe this please:` prompt; content is usually German voice instructions/explanation for how to do/approach some sysadmin stuff; there does tend to be some amount of interjecting (primarily for clarifications(-posing/-requesting)) by me to resolve ambiguity as early as possible/practical.

        If e.g. parakeet can be run on my phone in real time showing the transcript live:

        - with latency low enough to be "comfortable enough" for the instructor to keep an eye on and approve the transcribed instructions

        [not necessarily every word of the transcript, i.e., a commanded "edit" doesn't need to be applied in the outcome as long as it's nature is otherwise clear enough to not add meaningful amounts of ambiguity to the final "written" instructions]

        by glancing at the screen while dictating the explanation (and blurting out any transcription complaints as soon as that's possible without breaking one's own string-of-thought or spoken grammar too much)

        , I'd very happily switch to that approach instead of what I was doing.

        Bonus if there's a no-bulky-or-expensive-hardware way to accommodate us both speaking over each other, so I won't have to _interrupt_ his speaking just to put a clarifying comment (on what he just said) in the transcript for him to see and sign off. That way it only briefly interrupts his thoughts, right while he actually reads my transcribed words (he doesn't have to hear them, and it's better if he doesn't; I can probably get him to put on earmuffs so he doesn't hear me louder than he hears his own thoughts, and a sufficiently smoothed SNR meter for specifically his voice should take care of him regulating his volume while the earmuffs mute it and I occasionally talk over him)...

      • agentifysh 11 days ago

        you are right, i just downloaded it on handy and it's working, i can't believe it

        i was using AssemblyAI but this is fast and accurate and offline, wtf!

        • remuskaos 5 days ago

          Parakeet is amazing; it has completely ousted Whisper for me. On Linux, both handy.computer and epicenter Whispering (using Parakeet, of course) work incredibly well for set-and-forget STT. I use it constantly to write messages on Slack/Teams, debate with Claude Code, etc. Both have minor bugs, but I can easily accept those, these apps being FOSS and all.

          On Mac, I've been using VoiceInk and it's even better. VoiceInk (and MacWhisper too, IIRC) use the neural engine and the delay between dictation and appearance of the typed text is almost imperceptible.

    • Dayshine 10 days ago

      What's wrong with piper?

  • syntaxing 11 days ago

    How much VRAM does Parakeet take for you? For some reason it takes 4GB+ for me using the ONNX version, even though it's 600M parameters

    • Leftium 8 days ago

      There are different versions of the Parakeet model. The 8-bit quantized version uses one byte per weight instead of four, so it saves space (only using about 600MB) while maintaining about the same level of accuracy.

      I think most apps that use Parakeet tend to use this version of the model?

      See if Parakeet (Nemotron) still uses 4GB+ with my implementation: https://rift-transcription.vercel.app/local-setup
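      A weights-only estimate shows why an un-quantized 600M-parameter model can exceed 4GB (fp32 weights plus runtime buffers) while the 8-bit version fits in roughly 600MB:

```python
# Weights-only memory estimate for a 600M-parameter model.
params = 600_000_000
fp32_gb = params * 4 / 1e9   # 4 bytes per fp32 weight
int8_mb = params * 1 / 1e6   # 1 byte per int8 weight
print(fp32_gb, int8_mb)      # runtime activations/buffers add more on top
```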

T0mSIlver 10 days ago

Congrats on the results. The streaming aspect is what I find most exciting here.

I built a macOS dictation app (https://github.com/T0mSIlver/localvoxtral) on top of Voxtral Realtime, and the UX difference between streaming and offline STT is night and day. Words appearing while you're still talking completely changes the feedback loop. You catch errors in real time, you can adjust what you're saying mid-sentence, and the whole thing feels more natural. Going back to "record then wait" feels broken after that.

Curious how Moonshine's streaming latency compares in practice. Do you have numbers on time-to-first-token for the streaming mode? And on the serving side, do any of the integration options expose an OpenAI Realtime-compatible WebSocket endpoint?

  • Leftium 8 days ago

    My app uses the moonshine-voice Python package, so you can experience it yourself here: https://rift-transcription.vercel.app/local-setup

    I made moonshine the default because it has the best accuracy/latency (aside from Web Speech API, but that is not fully local)

    I plan to add objective benchmarks in the future, so multiple models can be compared against the same audio data...

    ---

    I made a custom WebSocket server for my project. It defines its own API (modeled on the Sherpa-onnx API), but you could adjust it to output the OpenAI Realtime API: https://github.com/Leftium/rift-local

    (note rift-local is optimized for single connections, or rather not optimized to handle multiple WS connections)

francislavoie 11 days ago

I've helped many Twitch streamers set up https://github.com/royshil/obs-localvocal to plug transcription & translation into their streams, mainly for German audio to English subtitles.

I'd love a faster and more accurate option than Whisper, but streamers need something off-the-shelf they can install in their pipeline, like an OBS plugin which can just grab the audio from their OBS audio sources.

I see a couple of obvious problems: this doesn't seem to support translation, which is unfortunate since that's pretty key for this use case. It also only supports one language at a time, which is problematic given how frequently streamers code-switch while talking to their chat in different languages, or on Discord with their gameplay partners. Maybe such a plugin would be able to detect which language is spoken and route to one or the other model as needed?
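The routing idea could be sketched like this. Everything here is illustrative: `detect_language` and the model classes are stand-ins, not LocalVocal or OBS APIs, and a real language-ID pass would run a model over short audio windows.

```python
# Sketch: route each audio chunk to a per-language STT model based on a
# language-ID pass. All names here are hypothetical stand-ins.

def detect_language(chunk: bytes) -> str:
    """Stand-in for a real language-ID model over a short audio window."""
    return "de" if chunk.startswith(b"DE") else "en"

class StubModel:
    """Stand-in for any STT backend with a transcribe() method."""
    def __init__(self, name: str):
        self.name = name
    def transcribe(self, chunk: bytes) -> str:
        return f"[{self.name}] ..."

MODELS = {"en": StubModel("english"), "de": StubModel("german")}
FALLBACK = "en"

def route(chunk: bytes) -> str:
    lang = detect_language(chunk)
    return MODELS.get(lang, MODELS[FALLBACK]).transcribe(chunk)

print(route(b"DE..."))  # → "[german] ..."
```

In practice you would hold a short buffer so the language decision lags the audio slightly, and add hysteresis so a single misdetected window doesn't flip models mid-sentence.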

  • mattmcegg 8 days ago

    I released an OBS plugin (and optional RTMP relay) that does exactly this. It can do real-time translated captions and voice cloning/dubbing. The plugin lets you choose an audio source, then creates each language's captions and dub as new Sources. Use them however you'd like! Check it out! https://streamfluent.ai

heftykoo 11 days ago

Claiming higher accuracy than Whisper Large v3 is a bold opening move. Does your evaluation account for Whisper's notorious hallucination loops during silences (the classic 'Thank you for watching!'), or is this purely based on WER on clean datasets? Also, what's the VRAM footprint for edge deployments? If it fits on a standard 8GB Mac without quantization tricks, this is huge.

guerython 11 days ago

Nice work. One metric I’d really like to see for streaming use cases is partial stability, not just final WER.

For voice agents, the painful failure mode is partials getting rewritten every few hundred ms. If you can share it, metrics like median first-token latency, real-time factor, and "% partial tokens revised after 1s / 3s" on noisy far-field audio would make comparisons much more actionable.

If those numbers look good, this seems very promising for local assistant pipelines.
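The "% partial tokens revised" idea can be made concrete with a small scorer. This is an illustrative sketch, not an established implementation: it compares tokens by position, whereas a real scorer would align partial and final hypotheses first.

```python
# Sketch of a partial-stability metric: what fraction of tokens emitted in
# partial hypotheses did not survive unchanged into the final transcript?

def partial_revision_rate(partials: list[str], final: str) -> float:
    final_tokens = final.split()
    emitted = revised = 0
    for p in partials:
        toks = p.split()
        emitted += len(toks)
        for i, tok in enumerate(toks):
            # Positional comparison; real scorers would align tokens instead.
            if i >= len(final_tokens) or final_tokens[i] != tok:
                revised += 1
    return revised / emitted if emitted else 0.0

rate = partial_revision_rate(
    ["the cat", "the cat sat", "the cat sat down"],
    "the cat sat down",
)
print(rate)  # → 0.0 (no partial token was later rewritten)
```

Bucketing revisions by how long after emission they happen (the "after 1s / 3s" cut) would just mean timestamping each partial and grouping the `revised` counts by age.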

  • regularfry 10 days ago

    Tangentially, have you got any idea what the equivalent "partial tokens revised" rate for humans is? I know I've consciously experienced backtracking and re-interpreting words before, and presumably it happens subconsciously all the time. But that means there's a bound on how low it's reasonable to expect that rate to be, and I don't have an intuition for what it is.

asqueella 11 days ago

For those wondering about the language support, currently English, Arabic, Japanese, Korean, Mandarin, Spanish, Ukrainian, Vietnamese are available (most in Base size = 58M params)

ac29 11 days ago

No idea why 'sudo pip install --break-system-packages moonshine-voice' is the recommended way to install on a Raspberry Pi.

The authors do acknowledge this, though, and give a slightly too complex way to do it with uv in an example project (FYI, you don't need to source anything if you use uv run).
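For reference, the uv pattern being described needs no sudo and no venv activation at all; `transcribe.py` here is a placeholder script name:

```shell
# No sudo, no --break-system-packages, no `source .venv/bin/activate`:
# uv resolves moonshine-voice into an ephemeral environment for this run.
uv run --with moonshine-voice python transcribe.py
```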

fareesh 11 days ago

Accuracy is often presumed to mean English, which is fine, but it's vague to claim "higher": does it mean higher in English only? Higher in some subset of languages? Which ones?

The minimum useful data for this stuff is a small table of language | WER for dataset

nmstoker 11 days ago

Any plans regarding JavaScript support in the browser?

There was an issue with a demo but it's missing now. I can't recall for sure but I think I got it working locally myself too but then found it broke unexpectedly and I didn't manage to find out why.

RobotToaster 11 days ago

> Models for other languages are released under the Moonshine Community License, which is a non-commercial license.

Weird to only release English as open weights.

  • riedel 10 days ago

    I find it an even weirder practice that anyone working with speech or text models doesn't name, in the first paragraph, the language it is meant for (and I don't mean the programming-language bindings). How many native English speakers are there, 5% of the world population?

    • RobotToaster 10 days ago

      Approximately yes, although another 15% are non-native English speakers. Chinese is a close second for total speakers.

dagss 11 days ago

Very exciting stuff!

    hear about what people might build with it
My startup is making software for firefighters to use during missions on tablets, and I'm excited to see (when I get the time) if we can use this as a keyboard alternative on the device. It's a scenario where avoiding "clunky" is important, and a perfect use case for speech-to-text.

Due to the sector being increasingly worried about "hybrid threats" we try to rely on the cloud as little as possible and run things either on device or with the possibility of being self-hosted/on-premise. I really like the direction your company is going in in this respect.

We'd probably need custom training -- we need Norwegian, and there's some lingo, e.g., "bravo one two" should become "B-1.2". While that can perhaps also be done with simple post-processing rules, we would also probably want such examples in training for improved recognition? Have no VC funding, but looking forward to getting some income so that we can send some of it in your direction :)
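The simple post-processing rules mentioned could look like this sketch; the phrase tables and the `normalize_callsigns` name are invented from the single "bravo one two" → "B-1.2" example, and real radio lingo would need a much larger grammar:

```python
import re

# Map spoken NATO callsigns like "bravo one two" to "B-1.2".
NATO = {"alpha": "A", "bravo": "B", "charlie": "C", "delta": "D"}
DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
          "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

PATTERN = re.compile(
    r"\b(%s) (%s) (%s)\b" % ("|".join(NATO), "|".join(DIGITS), "|".join(DIGITS)),
    re.IGNORECASE,
)

def normalize_callsigns(text: str) -> str:
    def repl(m: re.Match) -> str:
        letter = NATO[m.group(1).lower()]
        return f"{letter}-{DIGITS[m.group(2).lower()]}.{DIGITS[m.group(3).lower()]}"
    return PATTERN.sub(repl, text)

print(normalize_callsigns("send bravo one two to the north entrance"))
# → "send B-1.2 to the north entrance"
```

As the comment suggests, rules like these fix the written form, but putting such examples in training data is what would improve recognition of the spoken phrases in the first place.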

  • steinvakt2 10 days ago

    Interesting. Can we get in touch? I just sold my webapp/saas where I used NB-Whisper to transcribe Norwegian media (podcast, radio, TV) and offer alerts and search by indexing it using elasticsearch.

    Edit: It was https://muninai.eu (I shut down the backend server yesterday so the functionality is disabled).

    • dagss 10 days ago

      Sure! I didn't find your contact info but drop me an email at dag@syncmap.no.

sourcetms 10 days ago

I'm offering support for this in Resonant - Already set up and running this week.

It's incredible for a live transcription stream - the latency is WOW.

https://www.onresonant.com/

For the open source folks, that's also set up in handy, I think.

armcat 11 days ago

This is awesome, well done guys, I’m gonna try it as my ASR component on the local voice assistant I’ve been building https://github.com/acatovic/ova. The tiny streaming latencies you show look insane

Leftium 8 days ago

Try Moonshine with a browser GUI:

    uv tool install rift-local && rift-local serve --open
This opens RIFT[1], my web frontend for local transcription with a copy button. You can also compare against Web Speech API and other models (including cloud API's).

https://github.com/Leftium/rift-local

[1]: https://rift-transcription.vercel.app/local-setup

binome 10 days ago

I vibe-trained moonshine-tiny on amateur-radio Morse code last weekend, and was surprised at the ~2% CER I was seeing in evals; over-the-air performance was pretty acceptable for a couple-hour run on a 4090.

999900000999 11 days ago

Very cool. Any way to run this in WebAssembly? I have a project in mind

lostmsu 11 days ago

How does it compare to Microsoft VibeVoice ASR https://news.ycombinator.com/item?id=46732776 ?

g-mork 11 days ago

How does this compare to Parakeet, which runs wonderfully on CPU?

Ross00781 10 days ago

Open-weight STT models hitting production-grade accuracy is huge for privacy-sensitive deployments. Whisper was already impressive, but having competitive alternatives means we're not locked into a single model family. The real test will be multilingual performance and edge device efficiency—has anyone benchmarked this on M-series or Jetson?

pzo 11 days ago

haven't tested yet, but I'm wondering how it behaves with a lot of IT jargon and tech acronyms. For that reason I mostly had to run an LLM after STT, but that slowed down Parakeet inference. Otherwise it sometimes had problems detecting things properly when talking about e.g. CoreML, int8, fp16, half float, ARKit, AVFoundation, ONNX, etc.
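A lookup-table cleanup pass is a much cheaper alternative to a full LLM rewrite for this. A minimal sketch; the mis-hearing variants below are invented examples, not measured STT output, and lowercasing the whole string is a simplification a real pass would avoid:

```python
# Map common STT mis-hearings of tech jargon back to canonical spellings.
CANONICAL = {
    "core ml": "CoreML",
    "a r kit": "ARKit",
    "f p sixteen": "fp16",
    "int eight": "int8",
    "onyx": "ONNX",   # careless: also matches the gemstone; real code needs word boundaries
}

def fix_jargon(text: str) -> str:
    out = text.lower()  # simplification; destroys other casing
    for wrong, right in CANONICAL.items():
        out = out.replace(wrong, right)
    return out

print(fix_jargon("export the Core ML model to Onyx"))
# → "export the CoreML model to ONNX"
```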

Ross00781 10 days ago

The streaming architecture looks really promising for edge deployments. One thing I'm curious about: how does the caching mechanism handle multiple concurrent audio streams? For example, in a meeting transcription scenario with 4-5 speakers, would each stream maintain its own cache, or is there shared state that could create bottlenecks?

saltwounds 11 days ago

Streaming transcription is crazy fast on an M1. Would be great to use this as a local option versus Wispr Flow.

oezi 11 days ago

Do you also support timestamps for the detected words, or even down to characters?

fittingopposite 10 days ago

Which program supports it in a way that allows streaming? Currently using Spokenly and Parakeet, but I'd like to transition to a model that streams instead of transcribing chunk-wise.

regularfry 10 days ago

Oh this is fantastic. I'm most interested to see if this reaches down to the raspberry pi zero 2, because that's a whole new ballgame if it does.

dSebastien 10 days ago

I've been using Moonshine since V1 and the results are really great. I'd say on par with Parakeet V3 while working really well with CPU only.

sroussey 11 days ago

onnx models for browser possible?

starkparker 11 days ago

Implemented this to transcribe voice chat in a project and the streaming accuracy in English on this was unusable, even with the medium streaming model.

fudged71 10 days ago

If it's using ONNX, can this be ported to Transformers.js?

alexnewman 11 days ago

If only it did Doric

raybb 11 days ago

fyi the typepad link in your bio is broken

cyanydeez 11 days ago

No LICENSE no go

  • bangaladore 11 days ago

    There is a license blurb in the readme.

    > This code, apart from the source in core/third-party, is licensed under the MIT License, see LICENSE in this repository.

    > The English-language models are also released under the MIT License. Models for other languages are released under the Moonshine Community License, which is a non-commercial license.

    > The code in core/third-party is licensed according to the terms of the open source projects it originates from, with details in a LICENSE file in each subfolder.

    • mkl 11 days ago

      The LICENSE file that text refers to is missing. There's one in the python folder, but not for the rest of the code.

      • namibj 10 days ago

        IANAL.

        Presuming the git author information supports this (I haven't checked myself), it should be fine to treat this as licensing the code it specifies under MIT. The license name is, to my understanding, unambiguous, and license application is based on contract law, which has "meeting of the minds" at its very core. Wilful infringement would also be really, really hard to even argue for when the only thing separating this from being 100% clearly licensed in all proper ways is not copying in an MIT `LICENSE` template with the date and author name pasted into it.

  • altruios 11 days ago

    Reading through readme.md:

    > License: This code, apart from the source in core/third-party, is licensed under the MIT License, see LICENSE in this repository.

    > The English-language models are also released under the MIT License. Models for other languages are released under the Moonshine Community License, which is a non-commercial license.

    > The code in core/third-party is licensed according to the terms of the open source projects it originates from, with details in a LICENSE file in each subfolder.
