Tools for building wake-word and speech-command datasets and models.
Project description
wakewords (python)
Build custom wake-word detection datasets and models from synthetic (TTS-generated) data.
This Python library is for training wake-word models. To use models trained with this tool, see the
wakewords javascript and swift libraries.
Install
# For pip
pip install wakewords
# For uv
uv add wakewords
How to train a wakewords model for your custom words
1. Create a project
cd into any directory and run the init command to create the project scaffolding and a config.json.
uv run wakewords init
2. Define your wake words
Edit config.json and put your wake words in custom_words. Define as many as you like.
`tts_input` is what is sent to the TTS to generate the audio. `label` is the label used in the dataset.
{
  "custom_words": [
    {"tts_input": "Atlas", "label": "atlas"}
  ]
}
The default config also includes words from the Google Speech Commands dataset. Feel free to review them and retain the ones you like.
3. Set up TTS provider credentials
The default TTS provider is Cartesia. Set your API key to generate audio:
export CARTESIA_API_KEY=your-api-key
Custom TTS providers can be registered from config.json. See ../docs/custom-providers.md.
4. Generate audio samples
Generate clean samples for the custom_words in the project config.json using every available voice:
uv run wakewords generate --lang en --all-voices
The `--lang` option restricts generation to a specific language. If you skip it, the voices you specify are used. Check the docs for more options, such as selecting voices by gender or language, or a config that stays selective, like "3 voices per gender per language".
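A selection rule like "3 voices per gender per language" can be sketched in plain Python. The voice fields (`id`, `language`, `gender`) here are illustrative assumptions, not the library's actual data model:

```python
from collections import defaultdict

def pick_voices(voices, per_group=3):
    """Keep at most `per_group` voices per (language, gender) pair."""
    groups = defaultdict(list)
    for v in voices:
        groups[(v["language"], v["gender"])].append(v)
    return [v for group in groups.values() for v in group[:per_group]]

# Five hypothetical English female voices; only 3 survive the cap.
voices = [{"id": f"v{i}", "language": "en", "gender": "f"} for i in range(5)]
selected = pick_voices(voices, per_group=3)
print(len(selected))  # 3
```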
Generated audio and metadata are written to the project's data/custom_words.parquet.
5. Augment the dataset
This command augments the dataset by creating variations of the generated samples with different background noise, tempo, and SNR. The target size post-augmentation is around 4,000 samples per word.
uv run wakewords augment
The default target of 4000 samples per word (approximate) matches the per-word sample count in the Google Speech Commands dataset, which helps balance the samples available for each word if you also retain words from that dataset. The command computes `targetVariations = 4000 - generatedSamples` and then generates enough variations to (approximately) match that target.
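The per-word arithmetic described above is simple enough to state as code; this is a sketch of the calculation, not the library's internal function:

```python
def variations_needed(generated_samples, target_per_word=4000):
    """Number of augmented variations to create so a word reaches
    roughly the target sample count. Never negative: words that
    already meet the target need no augmentation."""
    return max(0, target_per_word - generated_samples)

print(variations_needed(250))   # 3750
print(variations_needed(5000))  # 0
```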
6. Train your model
Download Google Speech Commands, build manifests, and preview the training run:
# Download google speech commands dataset
uv run wakewords download
# Create manifest files for 70-20-10 split (train-validate-test)
uv run wakewords manifest
# Start training (linux only).
# Default 15 epochs. Use --max-epochs to change.
uv run wakewords train
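The 70-20-10 split built by the manifest command can be illustrated with a small standalone sketch. This is a plain shuffled split under assumed behavior; the real command may differ in details such as per-word stratification:

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle samples and split them into train/validate/test lists."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 70 20 10
```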
Export
Export the latest completed training run into a project-level model bundle:
# defaults to onnx format
uv run wakewords export
This writes:
- `models/model.onnx` and `models/labels.json` for inference.
- `models/last_checkpoint/last.ckpt` as the last training checkpoint.
You can now use model.onnx and labels.json for inference with the wakewords javascript or swift libraries.
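As a rough sketch of the inference side, `labels.json` maps model output indices back to labels. The file layout (a plain JSON array ordered by output index) and the score handling here are assumptions for illustration — check the wakewords javascript/swift libraries for the real contract:

```python
import json
import os
import tempfile

# Write a stand-in labels.json (assumed layout: a JSON array
# ordered by the model's output index).
labels_path = os.path.join(tempfile.mkdtemp(), "labels.json")
with open(labels_path, "w") as f:
    json.dump(["atlas", "background", "unknown"], f)

with open(labels_path) as f:
    labels = json.load(f)

# Fake scores standing in for the ONNX model's output vector.
scores = [0.91, 0.05, 0.04]
predicted = labels[max(range(len(scores)), key=scores.__getitem__)]
print(predicted)  # atlas
```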
More Details
We have other commands like checkdata. See ../docs/python.md for command options, split ratios, augmentation details, cleaning commands, and training notes.
License
Copyright © 2026 Akash Manohar John, under MIT License (See LICENSE file at root of git repo).
Background sounds: The background audio embedded in this PyPI package comes from the Google Speech Commands dataset and ships with this library for convenience. It is licensed under the same terms as the dataset. Details are in the README.md file inside the wakewords/google_scd_background_noise dir.
Download files
File details
Details for the file wakewords-0.3.15.tar.gz.
File metadata
- Download URL: wakewords-0.3.15.tar.gz
- Upload date:
- Size: 11.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b194a2d8ecc5712834dc4eae70531315fbc9be5f4a4c8bad7d18aa8a813edb58` |
| MD5 | `60e371b8ae4e78b0d53728128a6ce0c2` |
| BLAKE2b-256 | `dc88a3c92bcf3869bc02cdda3cd4db083b4ae95b0367e88bd4af1212f68c4a55` |
Provenance
The following attestation bundles were made for wakewords-0.3.15.tar.gz:

Publisher: publish.yml on HashNuke/wakewords

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: wakewords-0.3.15.tar.gz
- Subject digest: b194a2d8ecc5712834dc4eae70531315fbc9be5f4a4c8bad7d18aa8a813edb58
- Sigstore transparency entry: 1409487611
- Sigstore integration time:
- Permalink: HashNuke/wakewords@81590ff1e73f650a723d9b2be56b5ed130dab818
- Branch / Tag: refs/tags/v0.3.15
- Owner: https://github.com/HashNuke
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@81590ff1e73f650a723d9b2be56b5ed130dab818
- Trigger Event: push
File details
Details for the file wakewords-0.3.15-py3-none-any.whl.
File metadata
- Download URL: wakewords-0.3.15-py3-none-any.whl
- Upload date:
- Size: 11.5 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7aa6abf742510a945cab8d5fe075c5bc582303d625a3108d7256b878c6a2023c` |
| MD5 | `e9f27036444c38650aff0d7f39049e56` |
| BLAKE2b-256 | `2e04929a0fc194cef08497e69298fcf15fb96ae170cd662620becf6cbbd58af5` |
Provenance
The following attestation bundles were made for wakewords-0.3.15-py3-none-any.whl:

Publisher: publish.yml on HashNuke/wakewords

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: wakewords-0.3.15-py3-none-any.whl
- Subject digest: 7aa6abf742510a945cab8d5fe075c5bc582303d625a3108d7256b878c6a2023c
- Sigstore transparency entry: 1409487639
- Sigstore integration time:
- Permalink: HashNuke/wakewords@81590ff1e73f650a723d9b2be56b5ed130dab818
- Branch / Tag: refs/tags/v0.3.15
- Owner: https://github.com/HashNuke
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@81590ff1e73f650a723d9b2be56b5ed130dab818
- Trigger Event: push