Test TTS & STT models in your browser. No server. No data collection. Powered by WebGPU.
Try it yourself
Stay in the loop
Subscribe for the latest news on voice AI and text-to-speech — new models, features, and benchmarks.
Featured Models
View all
Talk to an AI assistant running entirely in your browser.
Text-to-Speech in the Browser
TTSLab lets you run text-to-speech models directly in your browser using WebGPU and WASM. No server-side processing, no API keys, no queue times. Models like Kokoro 82M, SpeechT5, and Piper generate natural-sounding speech in real time, right on your device.
Each model is downloaded once and cached locally for instant reuse. Whether you are a developer evaluating TTS options, a researcher benchmarking model quality, or a product team comparing voices, TTSLab provides a fast, standardized environment.
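To make the flow concrete, here is a minimal sketch of calling such an engine from page code. The `synthesize` wrapper, the stub engine, and the `af_bella` voice name are illustrative assumptions, not TTSLab's actual API; in practice the engine object would come from a library such as kokoro-js or transformers.js.

```javascript
// Sketch: a thin wrapper over a browser TTS engine. The engine is
// assumed to expose an async generate(text, { voice }) method that
// resolves to audio data.
async function synthesize(engine, text, voice = "af_bella") {
  if (!text || !text.trim()) {
    throw new Error("nothing to synthesize");
  }
  return engine.generate(text.trim(), { voice });
}

// Stub engine so the sketch runs anywhere, even without WebGPU.
const stubEngine = {
  async generate(text, opts) {
    return { samples: new Float32Array(8), voice: opts.voice, text };
  },
};

synthesize(stubEngine, "Hello from the browser!").then((audio) => {
  console.log(audio.voice); // "af_bella"
});
```

Swapping the stub for a real engine leaves the calling code unchanged, which is the point of keeping the model behind a single `generate` call.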
Why On-Device Speech AI Matters
Cloud-based TTS services require sending your text to external servers. For sensitive content — medical notes, legal documents, personal messages — that creates privacy and compliance risks. On-device inference eliminates those risks entirely: your text and audio never leave your browser.
WebGPU-accelerated inference also removes latency from network round trips, making real-time applications like voice agents and live captioning practical without a backend.
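Backend selection along these lines can be sketched with a small feature check. The `pickBackend` helper is hypothetical, but the underlying test is real: only WebGPU-capable browsers define `navigator.gpu`.

```javascript
// Sketch: choosing an inference backend the way a WebGPU-first app
// might. `nav` stands in for the browser's `navigator` object so the
// function can also run outside a browser.
function pickBackend(nav) {
  // navigator.gpu is only defined where WebGPU is available.
  if (nav && "gpu" in nav && nav.gpu) return "webgpu";
  // Fall back to WASM, which runs everywhere but is slower.
  return "wasm";
}

console.log(pickBackend({ gpu: {} })); // "webgpu"
console.log(pickBackend({}));          // "wasm"
```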
How It Works
1. Pick a Model: Browse our directory of TTS and STT models, or compare two side by side.
2. Downloads Once: The model weights are downloaded to your browser and cached locally for instant reuse.
3. Runs Locally: Inference runs entirely in your browser using WebGPU or WASM. No server required.
4. Data Stays Private: Your text and audio never leave your device. Zero data collection, zero tracking.
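The download-once behavior in step 2 can be sketched as a memoized loader. The names here are illustrative; a production version would persist weights with the browser Cache API or IndexedDB rather than an in-memory Map.

```javascript
// Sketch of the download-once pattern: the fetcher runs at most once
// per model id, and later calls reuse the cached result.
const modelCache = new Map();

function loadModel(id, fetchWeights) {
  if (!modelCache.has(id)) {
    // Cache the promise itself so concurrent calls share one download.
    modelCache.set(id, fetchWeights(id));
  }
  return modelCache.get(id);
}

// Demo with a counting fetcher instead of a real network download.
let downloads = 0;
const fakeFetch = async (id) => { downloads += 1; return `weights:${id}`; };

(async () => {
  await loadModel("kokoro-82m", fakeFetch);
  await loadModel("kokoro-82m", fakeFetch);
  console.log(downloads); // 1
})();
```

Caching the promise rather than the resolved weights means two tabs of the same page, or two rapid clicks, never trigger a duplicate download.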
Open Source
TTSLab is MIT licensed and fully open source. Contribute models, report bugs, or build on top of the project.