
tungnaa is a text-to-voice model family and musical instrument


Tungnaa Interactive Voice Instruments

Training and GUI inference for interactive artistic text-to-voice models.

Installation

pip install "tungnaa[gui]" (to use the instrument; quoting the brackets avoids shell glob expansion)

pip install "tungnaa[train]" (if you are installing on a server to train models)

Usage

tungnaa --help

Models from Hugging Face

git clone git@hf.co:intelligent-instruments-Lab/tungnaa

Running with the Python Audio Engine

  • #todo model selection from the Tungnaá GUI, including block size and sample rate matching the system (a sample-rate mismatch is currently not supported)
  • #todo audio device selection from the Tungnaá GUI
tungnaa run --tts models/tts/rtalign_044_jvs.ckpt --vocoder models/vocoder/rave3-jvs-warm200k-lobeta_c052f53b23_streaming.ts --audio-out default

Using SuperCollider, PureData or Max as Audio Engine

If the --latent_audio switch is enabled, Tungnaá will stream RAVE latent trajectories over a single audio-rate channel, which can be piped into another audio engine running the RAVE vocoder. The piping is straightforward on Linux using JACK, and on macOS using BlackHole.

SuperCollider: sclang supercollider/rtvoice-demo.scd

tungnaa run --tts models/tts/rtalign_044_jvs.ckpt --latent_audio

Training Models

vocoder training

using the victor-shepardson RAVE fork

example preprocessing with joining of short files (especially useful for datasets containing many short utterances)

rave preprocess \
--input_path /path/to/audio/directory \
--output_path /path/to/tmp/storage/myravedata \
--num_signal 150000 --sampling_rate 48000 \
--join_short_files
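A quick sanity check on these flags (assuming --num_signal counts audio samples, as the name suggests): the training excerpt length in seconds is the sample count divided by the sample rate.

```python
# Excerpt length implied by the preprocessing flags above,
# assuming --num_signal is a sample count.
num_signal = 150_000
sampling_rate = 48_000

window_seconds = num_signal / sampling_rate
print(f"{window_seconds} s per training excerpt")  # 3.125 s
```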

example transfer learning using IIL rave-models:

rave train --name 001-my-vocoder-name \
--config rave-models/voice_multi_b2048_r48000/config.gin --config transfer \
--db_path /path/to/tmp/storage/myravedata \
--out_path /path/to/rave/runs \
--transfer_ckpt rave-models/voice_multi_b2048_r48000/version_0/checkpoints/last.ckpt \
--n_signal 150000 \
--gpu 0

example export using sign normalization (latents correlate with louder/brighter sounds):

rave export --run /path/to/rave/runs/001-my-vocoder-name \
--streaming --normalize_sign --latent_size ...
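Conceptually, sign normalization can be pictured as flipping each latent dimension so that larger values consistently mean louder/brighter output. The sketch below is an illustration of that idea on synthetic data, not RAVE's actual implementation.

```python
import numpy as np

# Conceptual sketch of sign normalization (not RAVE's implementation):
# flip each latent dimension so it correlates positively with frame loudness,
# making "larger latent value" mean "louder" consistently across dimensions.
rng = np.random.default_rng(0)
loudness = rng.random(200)  # per-frame loudness (synthetic)
latents = np.stack([
    2.0 * loudness + 0.1 * rng.standard_normal(200),   # already positive
    -1.5 * loudness + 0.1 * rng.standard_normal(200),  # negatively correlated
])

signs = np.ones(len(latents))
for i, z in enumerate(latents):
    if np.corrcoef(z, loudness)[0, 1] < 0:  # flip negatively correlated dims
        signs[i] = -1.0

normalized = latents * signs[:, None]
# after flipping, every dimension correlates positively with loudness
assert all(np.corrcoef(z, loudness)[0, 1] > 0 for z in normalized)
```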

Tungnaá preprocessing

see tungnaa prep --help.

To use datasets other than vctk or hifitts, it may be necessary to add an adapter function in prep.py.

example:

tungnaa prep \
--datasets '{kind:"vctk", path:"/path/to/VCTK"}' \
--rave-path /path/to/rave_streaming.ts \
--out-path /path/to/tmp/dataset_name
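If a custom dataset adapter is needed (as noted above for datasets other than vctk or hifitts), it might look roughly like the sketch below. The function name, directory layout, and record shape here are assumptions for illustration; prep.py's actual interface may differ.

```python
from pathlib import Path

# Hypothetical adapter sketch -- the real interface in prep.py may differ.
# An adapter walks a dataset directory and yields one record per utterance:
# (audio file path, transcript text, speaker id).
def my_dataset_adapter(root):
    root = Path(root)
    for txt in sorted(root.glob("txt/*/*.txt")):  # e.g. txt/<speaker>/<utt>.txt
        speaker = txt.parent.name
        wav = root / "wav" / speaker / (txt.stem + ".wav")
        if wav.exists():
            yield wav, txt.read_text().strip(), speaker
```

Each yielded record could then be preprocessed the same way as the built-in vctk/hifitts entries.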

training

see tungnaa trainer --help

example:

tungnaa trainer --experiment 001-my-tts-name \
--model-dir /path/for/checkpoints \
--log-dir /path/for/logs \
--manifest /path/to/tmp/dataset_name/manifest.json \
--rave-model /path/to/rave_streaming.ts \
--lr 3e-4 --lr-text 3e-5 --epoch-size 200 --save-epochs 20 \
--device cuda:0 \
train 

resume a stopped training: add --checkpoint /path/to/checkpoint

transfer learning: add --checkpoint /path/to/checkpoint --resume False

in-text annotations

--speaker_annotate prepends the speaker id determined during preprocessing. With --speaker_dataset, it also includes the dataset name.

--csv /path/to/file.csv accepts a jvs_labels_encoder_k7.csv-style CSV file. The first column contains the audio filename without extension; the second column contains an annotation to be prepended to the text.

If you were to use all three options, you would get:

"csvval:[dataset:speaker] original text"

Developing

Poetry is used for packaging and dependency management. Conda is used for environments and Python version management, and may be replaced by virtualenv or similar.

  1. cd tungnaa
  2. conda create -n tungnaa python=3.12 ffmpeg
  3. conda activate tungnaa
  4. poetry install

Note that poetry should not be installed in the project environment, but rather from the system package manager, with pipx, or in a separate environment.

To add a dependency, use poetry add, or edit pyproject.toml and then run poetry lock; poetry install.

To add a model, use dvc add /path/to/model, then git add /path/to/model.dvc. Tungnaá models should go in models/tts/, and be accompanied by a model.md file. Vocoders should go in models/vocoders.

docs

run mkdocs serve to build and view the documentation locally

run mkdocs gh-deploy to deploy to GitHub Pages

Non-Python Dependencies

ffmpeg (installed via conda in the development setup above)

Python Dependencies

See pyproject.toml

