Hebrew Text-to-Speech inference using ONNX and TensorRT
Blue
Text-to-Speech inference using ONNX Runtime with optional TensorRT acceleration.
✨ Demo
🎙️ Human-sounding TTS in Hebrew, English, Spanish, Italian & German — try samples and the live demo on the site.
Installation
From PyPI:
pip install blue-onnx
Install the core dependencies (from this repo):
uv sync
For CUDA (GPU) support:
uv sync --extra gpu
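After installing the GPU extra, you can check whether ONNX Runtime actually sees a CUDA device. This is a standalone sanity check using the public onnxruntime API, not part of blue-onnx itself:

```python
def cuda_available():
    """Return True if ONNX Runtime reports a CUDA provider, False if not,
    or None if onnxruntime is not installed at all."""
    try:
        import onnxruntime as ort
    except ImportError:
        return None
    return "CUDAExecutionProvider" in ort.get_available_providers()

print(cuda_available())
```

If this prints False after installing the GPU extra, check your CUDA and cuDNN installation before debugging the models themselves.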
Download Models
uv run hf download notmax123/blue-onnx --repo-type model --local-dir ./onnx_models \
--exclude "voices/all_voices/**"
Optional:
- Hebrew G2P:
wget -O model.onnx https://huggingface.co/thewh1teagle/renikud/resolve/main/model.onnx
- 2000+ voice JSONs:
uv run hf download notmax123/blue-onnx voices/all_voices/ --repo-type model --local-dir ./onnx_models
- PyTorch weights (export new voices):
uv sync --extra export
uv run hf download notmax123/blue --repo-type model --local-dir ./pt_models
Usage
Examples use voices/female1.json from this repo. After the optional voice download, use paths under onnx_models/voices/all_voices/ (manifest.tsv lists them).
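If you download the optional voice pack, a small helper like the one below can enumerate the available style JSONs. This is a convenience sketch, not part of the blue_onnx API; it only assumes the directory layout described above:

```python
from pathlib import Path

def list_voices(voices_dir="onnx_models/voices/all_voices"):
    """Return sorted paths of all voice style JSONs under voices_dir.
    Returns an empty list if the optional voice pack was not downloaded."""
    return sorted(str(p) for p in Path(voices_dir).glob("*.json"))

for voice in list_voices()[:5]:
    print(voice)
```

Any of the returned paths can then be passed as style_json when constructing BlueTTS.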
Here is a basic example of how to use BlueTTS in Python:
import soundfile as sf
from blue_onnx import BlueTTS

tts = BlueTTS(
    onnx_dir="onnx_models",            # directory with the downloaded ONNX models
    style_json="voices/female1.json",  # voice style from this repo
    renikud_path="model.onnx",         # optional Hebrew G2P model
)
# Single language
samples, sr = tts.synthesize("שלום, זהו מודל דיבור בעברית.", lang="he")
sf.write("output.wav", samples, sr)
# Mixed languages
mixed = "שלום לכולם, <en>welcome to the presentation</en>, <es>espero que lo disfruten</es>."
samples, sr = tts.synthesize(mixed, lang="he")
sf.write("mixed_output.wav", samples, sr)
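For longer mixed-language prompts, building the tagged string by hand gets error-prone. Here is a tiny helper for the <en>...</en>-style tag format shown above; it is plain string formatting, not part of the blue_onnx API:

```python
def tag(text, lang=None):
    """Wrap text in <lang>...</lang> tags; untagged text uses the default language."""
    return text if lang is None else f"<{lang}>{text}</{lang}>"

mixed = "".join([
    tag("שלום לכולם, "),
    tag("welcome to the presentation", "en"),
    tag(", "),
    tag("espero que lo disfruten", "es"),
    tag("."),
])
print(mixed)
```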
Running Examples
You can run the provided example scripts to test the model. Outputs will be saved in the examples/out/ directory.
# Generate samples for individual languages
uv run python examples/hebrew.py
uv run python examples/english.py
uv run python examples/spanish.py
uv run python examples/italian.py
uv run python examples/german.py
# Generate a mixed-language sample
uv run python examples/mixed.py
# Run the CLI app
uv run python examples/app.py --lang en --text "Hello world."
TensorRT (NVIDIA GPUs Only)
For faster inference on NVIDIA GPUs, you can build TensorRT engines.
- Install TensorRT dependencies:
uv sync --extra tensorrt
uv pip install tensorrt-cu12 # installed separately due to astral-sh/uv#14313
- Build the engines (see scripts/README.md for details):
uv run python scripts/create_tensorrt.py \
--onnx_dir onnx_models --engine_dir trt_engines --precision fp16 --config config/tts.json
(Note: the --tensorrt flag of examples/all_langs_and_mix.py is currently broken.)
Papers
@ARTICLE{2025arXiv250323108K,
author = {{Kim}, Hyeongju and {Yang}, Jinhyeok and {Yu}, Yechan and {Ji}, Seunghun and {Morton}, Jacob and {Bous}, Frederik and {Byun}, Joon and {Lee}, Juheon},
title = "{SupertonicTTS: Towards Highly Efficient and Streamlined Text-to-Speech System}",
journal = {arXiv e-prints},
keywords = {Audio and Speech Processing, Machine Learning, Sound},
pages = {arXiv:2503.23108},
year = {2025},
}
@article{kim2025training,
title={Training Flow Matching Models with Reliable Labels via Self-Purification},
author={Kim, Hyeongju and Yu, Yechan and Yi, June Young and Lee, Juheon},
journal={arXiv preprint arXiv:2509.19091},
year={2025}
}
@misc{yi2025robustttstrainingselfpurifying,
title={Robust TTS Training via Self-Purifying Flow Matching for the WildSpoof 2026 TTS Track},
author={June Young Yi and Hyeongju Kim and Juheon Lee},
year={2025},
eprint={2512.17293},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2512.17293},
}
Acknowledgments
This project uses renikud for Hebrew G2P. Special thanks to thewh1teagle for his work on Hebrew phonemization.
License
MIT
File details
Details for the file blue_onnx-0.1.1.tar.gz.
File metadata
- Download URL: blue_onnx-0.1.1.tar.gz
- Size: 8.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | cec398d27ea35a6b36a917b603b1a71f8247e63d1f76c58342150e18fdadb46e |
| MD5 | 159862d0ebb006cd4ca04ecf82a0adb9 |
| BLAKE2b-256 | 3dcb9b8e0b8705e381398dd049280b4fa68b331d19280fabc804f0ceb2683f0c |
File details
Details for the file blue_onnx-0.1.1-py3-none-any.whl.
File metadata
- Download URL: blue_onnx-0.1.1-py3-none-any.whl
- Size: 8.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5965c93a5ba2d451f8f2b4111dc4ae40938c5129e5264d71a2b700407a0f572e |
| MD5 | cd7039b7a1df3061d8479b3f5da2ff07 |
| BLAKE2b-256 | ec4efd1415d81bca3ba854605773cb25cb153d1cdd58b01e6625351b175f2d67 |
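To confirm a downloaded distribution is intact, you can compare its SHA256 digest against the values in the tables above. A standard-library-only sketch:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Example: sha256_of("blue_onnx-0.1.1.tar.gz") should match the
# SHA256 value listed under "File hashes" above.
```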