Show HN: Open-source voice typing for desktop
(github.com)

Does speech recognition happen locally in Chrome, or is it sent to Google?
The Web Speech API is web-based; I think it goes to Google's servers if you're using Chrome or Firefox. It also has fairly long lag (well over a second?) and is mostly suited to plain text dictation.
In that case I think the headline “open source voice typing” is not a good description. With a headline like that, I would expect it to work locally, not ship all of my voice data off to Google and have them run the speech-to-text.
In the Chrome extension I'm using the Web Speech API, which sends voice data to a Google server and gets text back, but you don't have to log in to any account to use it. So in a way Google gets your voice data, but that data is pseudo-anonymous.
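Roughly, the extension side looks like the snippet below. This is just a minimal sketch of typical webkitSpeechRecognition usage in Chrome, not the project's exact code; when Chrome backs this API with its cloud recognizer, the audio is processed on Google's servers:

    // Minimal sketch of the Web Speech API as exposed in Chrome.
    // Chrome prefixes it as webkitSpeechRecognition, and the actual
    // recognition runs remotely rather than locally.
    const recognition = new (window as any).webkitSpeechRecognition();
    recognition.continuous = true;      // keep listening across utterances
    recognition.interimResults = true;  // stream partial transcripts
    recognition.lang = "en-US";

    recognition.onresult = (event: any) => {
      const result = event.results[event.results.length - 1];
      if (result.isFinal) {
        // Hand the final transcript to whatever inserts text or runs commands.
        console.log(result[0].transcript);
      }
    };

    recognition.start();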
Will it allow me to dictate and edit code?
At the moment, no, but it's not that far off. One would have to write macros that replace spoken words with special characters, which simply means introducing a new command in the codebase; it already has a good structure and is organized to support multiple languages (rough sketch below). I started this project so I could type in Julia (in a Pluto notebook). Currently you can type text, insert emoji, and do a few other things, all defined in terms of commands; see the full list of commands in the readme at https://github.com/fxnoob/voice-typing-for-desktop/tree/mast...
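As a rough illustration only (the names here are hypothetical, not the repo's actual structure), code dictation would mostly mean registering a command that maps spoken words to symbols:

    // Hypothetical command shape, just to illustrate mapping spoken
    // words to special characters; the real command definitions live
    // in the repo linked above and will differ.
    const symbolMap: Record<string, string> = {
      "open paren": "(",
      "close paren": ")",
      "equals": "=",
      "arrow": "=>",
    };

    const symbolsCommand = {
      name: "symbols",
      matches: (phrase: string) => phrase in symbolMap,
      run: (phrase: string, insertText: (s: string) => void) => {
        insertText(symbolMap[phrase]);
      },
    };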
It might be possible to write code with some work, but it will feel awful due to the high latency of cloud speech recognition and the odd behavior of a model tuned for natural language.
You should use Talon or Dragonfly if you want to edit code. Both have fast offline recognition available.
Nice idea, I will look into offline SR support. Contributions to the project are also heartily welcome.