📚 ebook2audiobook
CPU/GPU converter from eBooks to audiobooks with chapters and metadata,
using XTTSv2, Bark, VITS, Fairseq, YourTTS, Tacotron2, and more. Supports voice cloning and 1158 languages!
Important
This tool is intended for use with non-DRM, legally acquired eBooks only.
The authors are not responsible for any misuse of this software or any resulting legal consequences.
Use this tool responsibly and in accordance with all applicable laws.
Thanks for supporting the ebook2audiobook developers!
Run locally
Run Remotely
GUI Interface
Demos
New Default Voice Demo
Sherlock.mp4
More Demos
ASMR Voice
WhisperASMR-Demo.mp4
Rainy Day Voice
Rainy_Day_voice_Demo.mp4
Scarlett Voice
ScarlettJohansson-Demo.mp4
David Attenborough Voice
shortStory.mp4
Example
README.md
Table of Contents
Features
- 📚 Splits eBook into chapters for organized audio.
- 🎙️ High-quality text-to-speech with Coqui XTTSv2 and Fairseq (and more).
- 🗣️ Optional voice cloning with your own voice file.
- 🌍 Supports 1110+ languages (English by default). List of Supported Languages
- 🖥️ Designed to run on 4GB RAM.
Supported Languages
| Arabic (ar) | Chinese (zh) | English (en) | Spanish (es) |
|---|---|---|---|
| French (fr) | German (de) | Italian (it) | Portuguese (pt) |
| Polish (pl) | Turkish (tr) | Russian (ru) | Dutch (nl) |
| Czech (cs) | Japanese (ja) | Hindi (hi) | Bengali (bn) |
| Hungarian (hu) | Korean (ko) | Vietnamese (vi) | Swedish (sv) |
| Persian (fa) | Yoruba (yo) | Swahili (sw) | Indonesian (id) |
| Slovak (sk) | Croatian (hr) | Tamil (ta) | Danish (da) |
Hardware Requirements
- 2GB RAM minimum, 8GB recommended
- Virtualization enabled if running on Windows (Docker only)
- CPU (Intel, AMD, ARM); GPU (NVIDIA, AMD*, Intel*) recommended; MPS (Apple Silicon). *Available very soon
Important
Before posting an install or bug issue, search carefully through the open and closed issues tab
to make sure your issue does not already exist.
Note
Because eBooks lack any standard structure defining what counts as a chapter, paragraph, preface, etc.,
you should first manually remove any text you don't want converted to audio.
Instructions
- Clone the repo:
git clone https://github.com/DrewThomasson/ebook2audiobook.git
cd ebook2audiobook
Install / Run ebook2audiobook:
- Linux/MacOS
./ebook2audiobook.sh # Run launch script
Note for macOS users: Homebrew is installed to provide any missing programs.
- Mac Launcher
Double click Mac Ebook2Audiobook Launcher.command
- Windows
ebook2audiobook.cmd # Run launch script, or double click it
Note for Windows users: Scoop is installed to provide missing programs without administrator privileges.
- Windows Launcher
Double click ebook2audiobook.cmd
- Open the Web App: click the URL provided in the terminal to access the web app and convert eBooks.
http://localhost:7860/
- For a public link:
./ebook2audiobook.sh --share (Linux/MacOS)
ebook2audiobook.cmd --share (Windows)
python app.py --share (all OS)
Important
If the script is stopped and run again, refresh your Gradio GUI page
so the web page can reconnect to the new connection socket.
Basic Usage
- Linux/MacOS:
./ebook2audiobook.sh --headless --ebook <path_to_ebook_file> \
  --voice [path_to_voice_file] --language [language_code]
- Windows
ebook2audiobook.cmd --headless --ebook <path_to_ebook_file> --voice [path_to_voice_file] --language [language_code]
- [--ebook]: Path to your eBook file
- [--voice]: Voice cloning file path (optional)
- [--language]: Language code in ISO-639-3 (e.g. ita for Italian, eng for English, deu for German...).
The default language is eng, and --language is optional for the default language set in ./lib/lang.py.
Two-letter ISO-639-1 codes are also supported.
Example of Custom Model Zip Upload
(must be a .zip file containing the mandatory model files. Example for XTTSv2: config.json, model.pth, vocab.json and ref.wav)
- Linux/MacOS
./ebook2audiobook.sh --headless --ebook <ebook_file_path> \
  --language <language> --custom_model <custom_model_path>
- Windows
ebook2audiobook.cmd --headless --ebook <ebook_file_path> --language <language> --custom_model <custom_model_path>
Note: the ref.wav in your custom model is always used as the voice for the conversion
- <custom_model_path>: Path to the model_name.zip file, which must contain (depending on the TTS engine) all the mandatory files
(see ./lib/models.py).
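As a minimal sketch, such a zip can be assembled like this (the folder name and placeholder files are illustrative; `python3 -m zipfile` is just one way to build the archive, and the authoritative per-engine file list lives in ./lib/models.py):

```shell
# Sketch: packaging a custom XTTSv2 model for --custom_model.
# These four file names are the mandatory XTTSv2 set named above;
# other TTS engines expect different files (see ./lib/models.py).
mkdir -p my_xtts_model
touch my_xtts_model/config.json my_xtts_model/model.pth \
      my_xtts_model/vocab.json my_xtts_model/ref.wav   # placeholders for your real files
# The files must sit at the root of the archive, not inside a folder:
cd my_xtts_model
python3 -m zipfile -c ../my_xtts_model.zip config.json model.pth vocab.json ref.wav
cd ..
python3 -m zipfile -l my_xtts_model.zip   # sanity check: list the contents
```

The resulting my_xtts_model.zip is what you would pass as <custom_model_path>.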
For Detailed Guide with list of all Parameters to use
- Linux/MacOS
./ebook2audiobook.sh --help
- Windows
ebook2audiobook.cmd --help
- Or for all OS
python app.py --help
usage: app.py [-h] [--session SESSION] [--share] [--headless] [--ebook EBOOK] [--ebooks_dir EBOOKS_DIR]
[--language LANGUAGE] [--voice VOICE]
[--device {{'proc': 'cpu', 'found': True},{'proc': 'cuda', 'found': False},{'proc': 'mps', 'found': False},{'proc': 'rocm', 'found': False},{'proc': 'xpu', 'found': False}}]
[--tts_engine {XTTSv2,BARK,VITS,FAIRSEQ,TACOTRON2,YOURTTS,xtts,bark,vits,fairseq,tacotron,yourtts}]
[--custom_model CUSTOM_MODEL] [--fine_tuned FINE_TUNED] [--output_format OUTPUT_FORMAT]
[--output_channel OUTPUT_CHANNEL] [--temperature TEMPERATURE] [--length_penalty LENGTH_PENALTY]
[--num_beams NUM_BEAMS] [--repetition_penalty REPETITION_PENALTY] [--top_k TOP_K] [--top_p TOP_P]
[--speed SPEED] [--enable_text_splitting] [--text_temp TEXT_TEMP] [--waveform_temp WAVEFORM_TEMP]
[--output_dir OUTPUT_DIR] [--version]
Convert eBooks to Audiobooks using a Text-to-Speech model. You can either launch the Gradio interface or run the script in headless mode for direct conversion.
options:
-h, --help show this help message and exit
--session SESSION Session to resume the conversion in case of interruption, crash,
or reuse of custom models and custom cloning voices.
**** The following options are for all modes:
Optional
**** The following option are for gradio/gui mode only:
Optional
--share Enable a public shareable Gradio link.
**** The following options are for --headless mode only:
--headless Run the script in headless mode
--ebook EBOOK Path to the ebook file for conversion. Cannot be used when --ebooks_dir is present.
--ebooks_dir EBOOKS_DIR
Relative or absolute path of the directory containing the files to convert.
Cannot be used when --ebook is present.
--language LANGUAGE Language of the e-book. The default language set
in ./lib/lang.py is used if not present. All compatible language codes are in ./lib/lang.py
optional parameters:
--voice VOICE (Optional) Path to the voice cloning file for TTS engine.
Uses the default voice if not present.
--device {{'proc': 'cpu', 'found': True},{'proc': 'cuda', 'found': False},{'proc': 'mps', 'found': False},{'proc': 'rocm', 'found': False},{'proc': 'xpu', 'found': False}}
(Optional) Processor unit type for the conversion.
Default is set in ./lib/conf.py if not present. Falls back to CPU if CUDA or MPS is not available.
--tts_engine {XTTSv2,BARK,VITS,FAIRSEQ,TACOTRON2,YOURTTS,xtts,bark,vits,fairseq,tacotron,yourtts}
(Optional) Preferred TTS engine (available are: ['XTTSv2', 'BARK', 'VITS', 'FAIRSEQ', 'TACOTRON2', 'YOURTTS', 'xtts', 'bark', 'vits', 'fairseq', 'tacotron', 'yourtts'].
Default depends on the selected language. The tts engine should be compatible with the chosen language
--custom_model CUSTOM_MODEL
(Optional) Path to the custom model zip file containing the mandatory model files.
Please refer to ./lib/models.py
--fine_tuned FINE_TUNED
(Optional) Fine tuned model path. Default is builtin model.
--output_format OUTPUT_FORMAT
(Optional) Output audio format. Default is m4b set in ./lib/conf.py
--output_channel OUTPUT_CHANNEL
(Optional) Output audio channel. Default is mono set in ./lib/conf.py
--temperature TEMPERATURE
(xtts only, optional) Temperature for the model.
Default to config.json model. Higher temperatures lead to more creative outputs.
--length_penalty LENGTH_PENALTY
(xtts only, optional) A length penalty applied to the autoregressive decoder.
Default to config.json model. Not applied to custom models.
--num_beams NUM_BEAMS
(xtts only, optional) Controls how many alternative sequences the model explores. Must be equal to or greater than the length penalty.
Default to config.json model.
--repetition_penalty REPETITION_PENALTY
(xtts only, optional) A penalty that prevents the autoregressive decoder from repeating itself.
Default to config.json model.
--top_k TOP_K (xtts only, optional) Top-k sampling.
Lower values mean more likely outputs and increased audio generation speed.
Default to config.json model.
--top_p TOP_P (xtts only, optional) Top-p sampling.
Lower values mean more likely outputs and increased audio generation speed. Default to config.json model.
--speed SPEED (xtts only, optional) Speed factor for the speech generation.
Default to config.json model.
--enable_text_splitting
(xtts only, optional) Enable TTS text splitting. This option is known to not be very efficient.
Default to config.json model.
--text_temp TEXT_TEMP
(bark only, optional) Text Temperature for the model.
Default to config.json model.
--waveform_temp WAVEFORM_TEMP
(bark only, optional) Waveform Temperature for the model.
Default to config.json model.
--output_dir OUTPUT_DIR
(Optional) Path to the output directory. Default is set in ./lib/conf.py
--version Show the version of the script and exit
Example usage:
Windows:
Gradio/GUI:
ebook2audiobook.cmd
Headless mode:
ebook2audiobook.cmd --headless --ebook '/path/to/file' --language eng
Linux/Mac:
Gradio/GUI:
./ebook2audiobook.sh
Headless mode:
./ebook2audiobook.sh --headless --ebook '/path/to/file' --language eng
Docker build image:
Windows:
ebook2audiobook.cmd --script_mode build_docker
Linux/Mac
./ebook2audiobook.sh --script_mode build_docker
Docker run image:
Gradio/GUI:
CPU:
docker run --rm -it -p 7860:7860 ebook2audiobook:cpu
CUDA:
docker run --gpus all --rm -it -p 7860:7860 ebook2audiobook:cu[118/121/128 etc..]
ROCM:
docker run --device=/dev/kfd --device=/dev/dri --rm -it -p 7860:7860 ebook2audiobook:rocm[5.5/6.1/6.4 etc..]
XPU:
docker run --device=/dev/dri --rm -it -p 7860:7860 ebook2audiobook:xpu
JETSON:
docker run --runtime nvidia --rm -it -p 7860:7860 ebook2audiobook:jetson[51/60/61 etc...]
Headless mode:
CPU:
docker run --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:cpu --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
CUDA:
docker run --gpus all --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:cu[118/121/128 etc..] --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
ROCM:
docker run --device=/dev/kfd --device=/dev/dri --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:rocm[5.5/6.1/6.4 etc..] --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
XPU:
docker run --device=/dev/dri --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:xpu --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
JETSON:
docker run --runtime nvidia --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:jetson[51/60/61 etc...] --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
Docker Compose (e.g. for CUDA 11.8; add --build to rebuild):
DEVICE_TAG=cu118 docker compose up -d
Podman Compose (e.g. for CUDA 12.4; add --build to rebuild):
DEVICE_TAG=cu124 podman-compose up -d
* MPS is not exposed in Docker, so CPU must be used.
Tip: to insert silence into your text, add "###" or "[pause]" between the words where you want a pause; each marker becomes a random silence of roughly 0.8 to 1.8 seconds.
NOTE: in Gradio/GUI mode, to cancel a running conversion, just click the [X] on the ebook upload component.
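A small sketch of the pause markers in practice (the file name and text are illustrative; the conversion command is commented out since it only runs from an ebook2audiobook checkout):

```shell
# Sketch: adding extra pauses to a plain-text source before conversion.
# "###" and "[pause]" are the markers described above; each one is
# rendered as a short random silence in the generated audio.
cat > story.txt <<'EOF'
It was a dark and stormy night. [pause]
Somewhere, a dog barked. ###
Then, silence.
EOF
# Then convert as usual, e.g.:
# ./ebook2audiobook.sh --headless --ebook story.txt --language eng
```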
Docker
Steps to Run
- Clone the Repository:
git clone https://github.com/DrewThomasson/ebook2audiobook.git
cd ebook2audiobook
- Build the container:
# Windows
ebook2audiobook.cmd --script_mode build_docker
# Linux/MacOS
./ebook2audiobook.sh --script_mode build_docker
- Run the Container:
# Gradio/GUI:
# CPU:
docker run --rm -it -p 7860:7860 ebook2audiobook:cpu
# CUDA:
docker run --gpus all --rm -it -p 7860:7860 ebook2audiobook:cu[118/121/128 etc..]
# ROCM:
docker run --device=/dev/kfd --device=/dev/dri --rm -it -p 7860:7860 ebook2audiobook:rocm[5.5/6.1/6.4 etc..]
# XPU:
docker run --device=/dev/dri --rm -it -p 7860:7860 ebook2audiobook:xpu
# JETSON:
docker run --runtime nvidia --rm -it -p 7860:7860 ebook2audiobook:jetson[51/60/61 etc...]
# Headless mode examples:
# CPU:
docker run --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:cpu --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
# CUDA:
docker run --gpus all --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:cu[118/121/128 etc..] --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
# ROCM:
docker run --device=/dev/kfd --device=/dev/dri --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:rocm[5.5/6.1/6.4 etc..] --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
# XPU:
docker run --device=/dev/dri --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:xpu --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
# JETSON:
docker run --runtime nvidia --rm -it -v "/my/real/ebooks/folder/absolute/path:/app/ebooks" -v "/my/real/output/folder/absolute/path:/app/audiobooks" -p 7860:7860 ebook2audiobook:jetson[51/60/61 etc...] --headless --ebook "/app/ebooks/myfile.pdf" [--voice /app/my/voicepath/voice.mp3 etc..]
# Docker Compose (example for CUDA 12.8; add --build if needed)
DEVICE_TAG=cu128 docker compose up -d
# To stop -> docker compose down
# Podman Compose (example for CUDA 12.8; add --build if needed)
DEVICE_TAG=cu128 podman-compose up -d
# To stop -> podman compose -f podman-compose.yml down
- NOTE: MPS is not exposed in Docker, so CPU must be used.
Common Docker Issues
- My NVIDIA GPU isn't being detected?? -> GPU ISSUES Wiki Page
Fine Tuned TTS models
Fine Tune your own XTTSv2 model
De-noise training data
Fine Tuned TTS Collection
For an XTTSv2 custom model, a reference audio clip of the voice (ref.wav) is mandatory.
Supported eBook Formats
.epub, .pdf, .mobi, .txt, .html, .rtf, .chm, .lit, .pdb, .fb2, .odt, .cbr, .cbz, .prc, .lrf, .pml, .snb, .cbc, .rb, .tcr
- Best results: .epub or .mobi for automatic chapter detection
Output Formats
- Creates a file in one of ['m4b', 'm4a', 'mp4', 'webm', 'mov', 'mp3', 'flac', 'wav', 'ogg', 'aac'] (set in ./lib/conf.py) with metadata and chapters.
Updating to Latest Version
git pull # Locally/Compose
docker pull athomasson2/ebook2audiobook:latest # For pre-built docker images
Your own Ebook2Audiobook customization
You are free to modify ./lib/conf.py to add or remove the settings you wish. If you plan to do so, make a copy of the original conf.py first: on each ebook2audiobook update, back up your modified conf.py and restore the original before updating. Plan the same process for models.py. If you would like your own custom model to become an official ebook2audiobook fine-tuned model, please contact us and we'll add it to the models.py list.
Reverting to older Versions
Releases can be found -> here
git checkout tags/VERSION_NUM # Locally/Compose. Example: git checkout tags/v25.7.7
athomasson2/ebook2audiobook:VERSION_NUM # For pre-built docker images. Example: athomasson2/ebook2audiobook:v25.7.7
Common Issues:
- My NVIDIA GPU isn't being detected?? -> GPU ISSUES Wiki Page
- CPU conversion is slow (better on server SMP CPUs), while an NVIDIA GPU can achieve almost real-time conversion. Discussion about this. For faster multilingual generation on CPU, I would suggest my other project, which uses piper-tts instead (it doesn't have zero-shot voice cloning and the voices are Siri-quality, but it is much faster on CPU).
- "I'm having dependency issues" - just use the Docker image; it's fully self-contained and has a headless mode. Add the --help parameter at the end of the docker run command for more information.
- "I'm getting a truncated audio issue!" - PLEASE OPEN AN ISSUE FOR THIS; we don't speak every language and need advice from users to fine-tune the sentence-splitting logic. 😊
What we need help with! 🙌
Full list of things can be found here
- Help from speakers of any of the supported languages to improve the models
Special Thanks
- Coqui TTS: Coqui TTS GitHub
- Calibre: Calibre Website
- FFmpeg: FFmpeg Website
- @shakenbake15 for better chapter saving method




