End-to-end spoken language identification (LID) on TensorFlow

Project description


  • End-to-end spoken language identification (LID) on TensorFlow.
  • Parallel feature extraction, with STFT computations on the GPU using the tf.signal package.
  • Only metadata (e.g. utt2path, utt2label) is fully loaded into memory; the rest is processed in linear passes over the dataset with an iterator.
  • Spectrograms, source audio, and utterance ids can be written into TensorBoard summaries.
  • Model training with tf.keras, some model examples are available here.
  • Average detection cost (C_avg) implemented as a tf.keras.metrics.Metric.
  • You can also try lidbox for speaker recognition, since no assumptions are made about the signal labels. E.g. use utt2speaker as utt2label and see what happens.
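The average detection cost mentioned above comes from the NIST language recognition evaluations. Below is a minimal NumPy sketch of the formula, not lidbox's actual tf.keras.metrics.Metric implementation; for simplicity it uses hard argmax decisions instead of thresholded per-language detection scores, and all names here are made up for illustration:

```python
import numpy as np

def average_detection_cost(true_labels, pred_labels, num_langs,
                           p_target=0.5, c_miss=1.0, c_fa=1.0):
    """C_avg over closed-set language detection trials (argmax decisions)."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    costs = []
    for target in range(num_langs):
        target_trials = true_labels == target
        # P_miss: target-language trials not detected as the target
        p_miss = (np.mean(pred_labels[target_trials] != target)
                  if target_trials.any() else 0.0)
        # P_fa: non-target trials falsely detected as the target,
        # averaged over all non-target languages
        p_fas = []
        for nontarget in range(num_langs):
            if nontarget == target:
                continue
            nt = true_labels == nontarget
            p_fas.append(np.mean(pred_labels[nt] == target) if nt.any() else 0.0)
        p_fa = np.mean(p_fas)
        costs.append(c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa)
    return float(np.mean(costs))

# A perfect classifier over 4 languages gives C_avg == 0
y_true = np.array([0, 1, 2, 3, 0, 1, 2, 3])
print(average_detection_cost(y_true, y_true, num_langs=4))  # 0.0
```

A degenerate classifier that always predicts the same language scores C_avg = 0.5 with these default costs, which is why C_avg is usually reported alongside accuracy.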

Here is a full example notebook showing what lidbox can do.

Why would I want to use this?

  • You need a simple, deep learning based speech classification pipeline. For example: waveform -> VAD filter -> augment audio data -> serialize all data to a single binary file -> extract log-scale Mel-spectra or MFCC -> use DNN/CNN/LSTM/GRU/attention (etc.) to classify by signal labels
  • You have thousands of hours of speech data
  • You have a TensorFlow/Keras model that you train on the GPU and want the extraction pipeline to also be on the GPU
  • You want an end-to-end pipeline that uses TensorFlow 2 as much as possible
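The waveform -> VAD -> augmentation -> features flow from the first bullet can be sketched as a chain of lazy generators. This is a framework-free illustration of the idea, not lidbox's tf.data implementation; every function, threshold, and variable name here is made up:

```python
import numpy as np

def vad_filter(utterances, energy_threshold=1e-3):
    # Drop near-silent utterances (toy energy-based VAD)
    for utt_id, waveform in utterances:
        if np.mean(waveform ** 2) > energy_threshold:
            yield utt_id, waveform

def augment(utterances, noise_scale=0.01, seed=42):
    # Add low-level Gaussian noise as a simple augmentation
    rng = np.random.default_rng(seed)
    for utt_id, waveform in utterances:
        yield utt_id, waveform + noise_scale * rng.standard_normal(waveform.shape)

def extract_features(utterances, frame_length=400, frame_step=160):
    # Frame the signal and take log-energy per frame (a stand-in for log-Mel)
    for utt_id, waveform in utterances:
        num_frames = 1 + (len(waveform) - frame_length) // frame_step
        frames = np.stack([waveform[i * frame_step : i * frame_step + frame_length]
                           for i in range(num_frames)])
        yield utt_id, np.log(np.sum(frames ** 2, axis=1) + 1e-10)

# Waveforms flow through in one linear pass; only ids need to stay in memory
rng = np.random.default_rng(0)
dataset = [("utt1", rng.standard_normal(16000)), ("utt2", np.zeros(16000))]
features = list(extract_features(augment(vad_filter(dataset))))
print([utt_id for utt_id, _ in features])  # ['utt1']  (silent utt2 was dropped)
```

Because each stage is a generator, nothing is materialized until the final pass, which mirrors how a tf.data pipeline streams a dataset that is too large for memory.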

Why would I not want to use this?

  • You are happy doing everything with Kaldi or some other toolkit
  • You don't want to debug by reading the source code when something goes wrong
  • You don't want to install TensorFlow 2 and configure its dependencies (CUDA etc.)
  • You need CTC or some other way to train a phoneme recognizer


Installation

Clone the repository and install it with pip:

git clone --depth 1
pip install ./lidbox

Check that the command line entry point is working:

lidbox -h

If not, make sure the setuptools entry point script directory (e.g. $HOME/.local/bin) is on your PATH.
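For example, on a typical Linux setup with a user-level pip install, the scripts directory can be added like this (the path is an assumption; adjust it for your platform and Python version):

```shell
# Make pip's user-level scripts (e.g. the lidbox entry point) visible
export PATH="$HOME/.local/bin:$PATH"
```

To persist the change, add the same line to your shell startup file, e.g. ~/.bashrc.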

Then, install TensorFlow 2.1 or 2.2 (both should work), unless it is already installed.

If everything is working, see this for a simple example to get started.

Language embeddings

If you want to use language embeddings, install the PLDA package from here:

pip install plda@

Editable install

If you plan on making changes to the code, it is easier to install lidbox as a Python package in setuptools develop mode:

git clone --depth 1
pip install --editable ./lidbox

Then, any changes you make to the code are reflected immediately without reinstalling the package. Just be careful not to edit the code while lidbox is running: TensorFlow uses its autograph package to convert some of the Python functions to TF graphs, which might fail if the code changes mid-run.

X-vector embeddings

One benefit of deep learning classifiers is that you can first train them on large amounts of data and then use them as feature extractors to produce low-dimensional, fixed-length language vectors from speech. See e.g. the x-vector approach by Snyder et al.

Below is a visualization of test set language embeddings for 4 languages in 2-dimensional space. Each data point represents 2 seconds of speech in one of the 4 languages.

2-dimensional PCA plot of 400 random x-vectors for 4 Common Voice languages
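A 2-dimensional PCA projection like the one visualized above can be computed with plain NumPy via the SVD. This is a sketch of the technique only; the actual figure may have been produced with different tooling, and the embedding dimensions below are made up:

```python
import numpy as np

def pca_2d(embeddings):
    """Project row-vector embeddings onto their first two principal components."""
    X = embeddings - embeddings.mean(axis=0)          # center the data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # principal axes are rows of Vt
    return X @ Vt[:2].T                               # shape: (n_samples, 2)

# 400 random stand-ins for fixed-length x-vector embeddings
rng = np.random.default_rng(0)
xvectors = rng.standard_normal((400, 512))
points = pca_2d(xvectors)
print(points.shape)  # (400, 2)
```

Coloring each projected point by its utterance's language label then produces a scatter plot like the one above, where well-separated clusters suggest the embeddings encode language identity.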

Download files

Source distribution: lidbox-0.6.1.tar.gz (59.9 kB)

Built distribution: lidbox-0.6.1-py3-none-any.whl (74.2 kB)
