Tools for building wake-word and speech-command datasets and models.

Project description

wakewords

Build custom wakeword and command-word datasets from TTS-generated words plus Google Speech Commands.

Quick Start

Create A Project

Initialize the project layout:

uv run wakewords init

This creates data/, background_audio/, config.json, and a project .gitignore entry for downloaded Google Speech Commands data.

Edit config.json and put your wake words in custom_words.
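A minimal config.json might look like the following. Only the custom_words key is named above — the full field set is an assumption, so check the file that init generates for the real schema:

```json
{
  "custom_words": ["hey computer", "stop music"]
}
```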

Set Up TTS

The default TTS provider is Cartesia. Set your API key before generating audio:

export CARTESIA_API_KEY=your-api-key

Custom TTS providers can be registered from config.json. See docs/custom-providers.md.

Generate English Data

Generate clean samples for the custom_words in the project config.json using every available English voice:

uv run wakewords generate --lang en --all-voices

Generated audio and metadata are written to the project's data/custom_words.parquet.

Augment The Dataset

Create noise- and tempo-augmented variants of the generated clean samples:

uv run wakewords augment

By default, augmentation targets about 4000 total samples per word.
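The roughly-4000-per-word target implies a fan-out from however many clean samples exist. As a back-of-the-envelope sketch only (the real augmenter's accounting may differ), the number of variants needed per clean sample is:

```python
import math

def variants_per_clean(n_clean: int, target: int = 4000) -> int:
    """Augmented copies needed per clean sample to reach ~target total,
    counting the clean samples themselves toward the total."""
    if n_clean >= target:
        return 0
    return math.ceil((target - n_clean) / n_clean)

# e.g. 120 clean samples per word -> 33 variants each
# (120 + 120 * 33 = 4080, just past the 4000 target)
print(variants_per_clean(120))  # 33
```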

Check Data

Print duration and no-speech stats for generated and augmented rows:

uv run wakewords checkdata

Use --generated or --augmented to check only one source type. No-speech sample IDs are written to no-speech.txt in the project root.
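The checkdata output boils down to simple aggregates over per-sample rows. A pure-Python sketch of that kind of summary, using hypothetical (id, duration, has_speech) tuples rather than the tool's actual internal representation:

```python
from statistics import mean

# Hypothetical rows: (sample_id, duration_seconds, has_speech)
rows = [
    ("gen-001", 0.80, True),
    ("gen-002", 0.95, True),
    ("aug-001", 1.10, False),  # augmentation drowned out the speech
]

durations = [d for _, d, _ in rows]
no_speech_ids = [sid for sid, _, speech in rows if not speech]

print(f"samples: {len(rows)}")
print(f"duration mean/min/max: "
      f"{mean(durations):.2f}/{min(durations):.2f}/{max(durations):.2f}")

# Mirror the tool's no-speech.txt output: one sample id per line
with open("no-speech.txt", "w") as f:
    f.write("\n".join(no_speech_ids))
```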

Train

Download Google Speech Commands, build manifests, and preview the training run:

uv run wakewords download
uv run wakewords manifest
uv run wakewords train --dry-run

Run training on Linux with NeMo installed:

uv run wakewords train

Training uses NeMo's from_pretrained() by default. To train from a local .nemo file instead, pass --base-model-path.

Export

Export the latest completed training run into a project-level model bundle:

uv run wakewords export --format onnx

This writes models/model.onnx for inference, plus the following files when their sources are available:

  • models/last_checkpoint/last.ckpt
  • models/last_checkpoint/train_config.json
  • models/labels.json
  • models/export_config.json

The checkpoint directory is kept ready for continued training with the original training settings.
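At inference time, models/labels.json presumably maps model output indices back to label strings. The format below (a plain JSON list) is an assumption for illustration, not the documented layout; the sketch is self-contained:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical labels.json contents: a JSON list indexed by output position.
labels_text = '["_background_", "hey computer", "stop music"]'

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "labels.json"
    path.write_text(labels_text)
    labels = json.loads(path.read_text())

# Map an argmax index from the ONNX model's output to its label
predicted_index = 1
print(labels[predicted_index])  # hey computer
```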

Resume from an exported checkpoint bundle with:

uv run wakewords train --from-checkpoint models/last_checkpoint/last.ckpt

That imports the checkpoint into a new runs/<run-name>/ directory before training continues.

Find Outputs

Training artifacts are written under runs/<run-name>/:

  • train_config.json
  • checkpoints/
  • logs/
  • models/

The final exported model is written under the models/ directory of that specific training run.

More Details

See docs/USAGE.md for command options, split ratios, augmentation details, cleaning commands, and training notes.

License

Copyright © 2026 Akash Manohar John, under MIT License (See LICENSE file).

Background sounds: the background audio bundled with this PyPI package comes from the Google Speech Commands dataset and ships with the library for convenience. It is covered by the dataset's own license; see the README.md inside the wakewords/google_scd_background_noise directory for details.

Download files

Source distribution: wakewords-0.3.10.tar.gz (11.4 MB)
Built distribution: wakewords-0.3.10-py3-none-any.whl (11.1 MB)

File details

Details for the file wakewords-0.3.10.tar.gz.

File metadata

  • Download URL: wakewords-0.3.10.tar.gz
  • Upload date:
  • Size: 11.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for wakewords-0.3.10.tar.gz
Algorithm Hash digest
SHA256 e84021a3b7fd05066ac6dc136f6a1e6f29afe4415fbae551e0c2f92db8d3e6b2
MD5 f74df72617974e532c5cfba6ec26c9c3
BLAKE2b-256 163c211c820090f69cddd20103090b38eaae8fa18ad7a0a8204d5b89efd8ee35

See more details on using hashes here.

Provenance

The following attestation bundles were made for wakewords-0.3.10.tar.gz:

Publisher: publish-pypi.yml on HashNuke/wakewords

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details for wakewords-0.3.10-py3-none-any.whl:

  • Size: 11.1 MB
  • Uploaded via: twine/6.1.0 CPython/3.13.12 (Trusted Publishing)
  • SHA256: 7a8520f3671fe63388e6a7c5bf35080a05eb39c30e475b44cf178707f5a78a48
  • MD5: bd38635741df6b1a3291926f11cfb3fa
  • BLAKE2b-256: f6961fa701f63251eb2c99137e0175c467cbc4a935c978e0183c7afb189512f8
  • Provenance: publish-pypi.yml on HashNuke/wakewords
