Machine learning audio prediction experiments based on templates
Overview
A project to detect speaker characteristics by machine learning experiments with a high-level interface.
The idea is to have a framework (based on e.g. sklearn and torch) that can be used to rapidly and automatically analyse and investigate audio data.
- NEW: Nkululeko now automatically generates PDF reports (sample for EmoDB)
- The latest features can be seen in the ini-file options that are used to control Nkululeko
- Below is a Hello World example that should get you started quickly, also available on Google Colab and Kaggle
- Here's a blog post on how to set up nkululeko on your computer.
- Here is a slack channel to discuss issues related to nkululeko. Please click the link if interested in contributing.
- Here's a slide presentation about nkululeko
- Here's a video presentation about nkululeko
- Here's the 2022 LREC article on nkululeko
Here are some examples of typical output:
Confusion matrix
By default, Nkululeko displays results as a confusion matrix; for regression targets, the continuous values are binned into categories.
Epoch progression
The point when overfitting starts can sometimes be seen by looking at the results per epoch:
Feature importance
Using the explore interface, Nkululeko analyses the importance of acoustic features:
Feature distribution
And can show the distribution of specific features per category:
t-SNE plots
A t-SNE plot can give you an estimate whether your acoustic features are useful at all:
Data distribution
Sometimes you only want to take a look at your data:
Bias checking
In some cases you might wonder if there's bias in your data. You can try to detect this with automatically estimated speech properties, by visualizing the correlation of the target label with the predicted labels.
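Several of these plots are produced with the explore interface described under Usage below. A typical call, assuming the experiment has been configured in a file called experiment.ini, looks like this:
python -m nkululeko.explore --config experiment.ini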
Documentation
The documentation, along with extended descriptions of installation, usage, the INI file format, and examples, can be found at nkululeko.readthedocs.io.
Installation
Create and activate a virtual Python environment and simply run
pip install nkululeko
We excluded some packages from the automatic installation because they might depend on your computer and some of them are only needed in special cases. So if the error
module x not found
appears, please try
pip install x
For many packages you will need the missing torch package. If you don't have a GPU (which is probably true if you don't know what that is), please use
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
else, you can use the default:
pip install torch torchvision torchaudio
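Either way, you can verify that torch is importable and see which build was installed:
python -c "import torch; print(torch.__version__)"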
Some functionalities require extra packages to be installed, which we didn't include automatically:
- the SQUIM model needs a special torch version:
pip uninstall -y torch torchvision torchaudio
pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
- the spotlight adapter needs spotlight:
pip install renumics-spotlight sliceguard
Some examples for ini-files (which you use to control nkululeko) are in the tests folder.
Usage
Basically, you specify your experiment in an "ini" file (e.g. experiment.ini) and then call one of the Nkululeko interfaces to run the experiment like this:
python -m nkululeko.nkululeko --config experiment.ini
A basic configuration looks like this:
[EXP]
root = ./
name = exp_emodb
[DATA]
databases = ['emodb']
emodb = ./emodb/
emodb.split_strategy = speaker_split
target = emotion
labels = ['anger', 'boredom', 'disgust', 'fear']
[FEATS]
type = ['praat']
[MODEL]
type = svm
[EXPL]
model = tree
plot_tree = True
[PLOT]
combine_per_speaker = mode
Read the Hello World example for initial usage with the Emo-DB dataset.
Here is an overview of the interfaces (an invocation example follows this list):
- nkululeko.nkululeko: do machine learning experiments combining features and learners
- nkululeko.demo: demo the current best model on the command line
- nkululeko.test: predict a series of files with the current best model
- nkululeko.explore: perform data exploration
- nkululeko.augment: augment the current training data
- nkululeko.predict: predict features like SNR, MOS, arousal/valence, age/gender, with DNN models
- nkululeko.segment: segment a database based on VAD (voice activity detection)
- nkululeko.resample: check the sampling rate of all audio files and resample to 16kHz where necessary
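All interfaces are invoked like nkululeko.nkululeko above; a minimal sketch, assuming each module accepts the same --config argument:
python -m nkululeko.demo --config experiment.ini
python -m nkululeko.resample --config experiment.ini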
There's my blog with tutorials:
- Introduction
- Nkululeko FAQ
- How to set up your first nkululeko project
- Setting up a base nkululeko experiment
- How to import a database
- Comparing classifiers and features
- Use Praat features
- Combine feature sets
- Classifying continuous variables
- Try out / demo a trained model
- Perform cross database experiments
- Meta parameter optimization
- How to set up wav2vec embedding
- How to soft-label a database
- Re-generate the progressing confusion matrix animation with a different framerate
- How to limit/filter a dataset
- Specifying database disk location
- Add dropout with MLP models
- Do cross-validation
- Combine predictions per speaker
- Run multiple experiments in one go
- Compare several MLP layer layouts with each other
- Import features from outside the software
- Explore feature importance
- Plot distributions for feature values
- Show feature importance
- Augment the training set
- Visualize clusters of acoustic features
- Visualize your data distribution
- Check your dataset
- Segmenting a database
- Predict new labels for your data from public models and check bias
- Resample
- Get some statistics on correlation and effect-size
- Generate a latex / pdf report
- Inspect your data with Spotlight
- Automatically stratify your split sets
- Rename data columns
- Oversample the training set
Hello World example
- NEW: Here's a Google colab that runs this example out-of-the-box, and here is the same with Kaggle
- I made a video to show you how to do this on Windows
- Set up Python on your computer, version >= 3.8
- Open a terminal/commandline/console window
- Test python by typing python; Python should start with version >3 (NOT 2!). You can leave the Python interpreter by typing exit()
- Create a folder on your computer for this example, let's call it nkulu_work
- Get a copy of the Berlin emodb in audformat and unpack it inside the folder you just created (nkulu_work)
- Make sure the folder is called "emodb" and contains the database files directly (not box-in-a-box)
- Also, in the nkulu_work folder:
  - Create a Python environment: python -m venv venv
  - Then, activate it:
    - under Linux / Mac: source venv/bin/activate
    - under Windows: venv\Scripts\activate.bat
    - if that worked, you should see a (venv) in front of your prompt
  - Install the required packages in your environment: pip install nkululeko
  - Repeat until all error messages have vanished (or fix them, or try to ignore them)...
- Now you should have two folders in your nkulu_work folder: emodb and venv
- Download a copy of the file exp_emodb.ini to the current working directory (nkulu_work)
- Run the demo: python -m nkululeko.nkululeko --config exp_emodb.ini
- Find the results in the newly created folder exp_emodb
- Inspect exp_emodb/images/run_0/emodb_xgb_os_0_000_cnf.png
  - This is the main result of your experiment: a confusion matrix for the emodb emotional categories
- Inspect and play around with the demo configuration file that defined your experiment, then re-run.
- There are many ways to experiment with different classifiers and acoustic feature sets, all described here (a small example follows below).
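For example, the demo configuration appears to use the XGBoost classifier with openSMILE features (as the result filename emodb_xgb_os_0_000_cnf.png suggests). A minimal sketch of a variation, switching to an SVM classifier with Praat features by editing the [FEATS] and [MODEL] sections of exp_emodb.ini:
[FEATS]
type = ['praat']
[MODEL]
type = svm
Then re-run python -m nkululeko.nkululeko --config exp_emodb.ini and compare the resulting confusion matrices.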
Features
The framework is targeted at the speech domain and supports experiments where different classifiers are combined with different feature extractors.
- Classifiers: Naive Bayes, KNN, Tree, XGBoost, SVM, MLP
- Feature extractors: Praat, Opensmile, openXBOW BoAW, TRILL embeddings, Wav2vec2 embeddings, audModel embeddings, ...
- Feature scaling
- Label encoding
- Binning (continuous to categorical)
- Online demo interface for trained models
Here's a rough UML-like sketch of the framework (and here's the real one done with pyreverse).
Currently, the following classifiers are implemented (integrated from sklearn):
- SVM, SVR, XGB, XGR, Tree, Tree_regressor, KNN, KNN_regressor, NaiveBayes, GMM and the following ANNs (artificial neural networks)
- MLP (multi-layer perceptron), CNN (convolutional neural network)
Here's an animation that shows the progress of classification done with nkululeko
License
Nkululeko can be used under the MIT license. If you use it, please mention the Nkululeko paper:
F. Burkhardt, Johannes Wagner, Hagen Wierstorf, Florian Eyben and Björn Schuller: Nkululeko: A Tool For Rapid Speaker Characteristics Detection, Proc. LREC, 2022
@inproceedings{Burkhardt:lrec2022,
title = {Nkululeko: A Tool For Rapid Speaker Characteristics Detection},
author = {Felix Burkhardt and Johannes Wagner and Hagen Wierstorf and Florian Eyben and Björn Schuller},
isbn = {9791095546726},
journal = {2022 Language Resources and Evaluation Conference, LREC 2022},
keywords = {machine learning,speaker characteristics,tools},
pages = {1925-1932},
publisher = {European Language Resources Association (ELRA)},
year = {2022},
}
Changelog
Version 0.74.0
- added patience (early stopping)
- added MAE loss and measure
Version 0.73.0
- added reverse and scale arguments to target variable
- also, the data store can now be csv
Version 0.72.0
- worked over explore value counts section
- added bin_reals for all columns
Version 0.71.4
- automatic epoch reset if not ANN
- scatter plots now show a regression line
Version 0.71.3
- enabled scatter plots for all variables
Version 0.71.2
- enabled scatter plots for continuous labels
Version 0.71.1
- made a wav2vec default
- renamed praat features, omitting spaces
- fixed plot distribution bugs
- added feature plots for continuous targets
Version 0.71.0
- added explore visuals.
- all columns from databases should now be usable
Version 0.70.0
- added imb_learn balancing of training set
Version 0.69.0
- added CNN model and melspec extractor
Version 0.68.4
- bugfix: got_gender was incorrectly set
Version 0.68.3
- Feinberg Praat scripts ignore error and log filename
Version 0.68.2
- column names in datasets are now configurable
Version 0.68.1
- added error message on file to praat extraction
Version 0.68.0
- added stratification framework for split balancing
Version 0.67.0
- added first version of spotlight integration
Version 0.66.13
- small changes related to github worker
Version 0.66.12
- fixed bug that prevented Praat features from being selected
Version 0.66.11
- removed torch from automatic install. depends on cpu/gpu machine
Version 0.66.10
- Removed print statements from feats_wav2vec2
Version 0.66.9
- Version that should install without requiring opensmile, which seems not to be supported on all Apple processors (Arm CPU, Apple M1)
Version 0.66.8
- forgot init.py in reporting module
Version 0.66.7
- minor changes to experiment class
Version 0.66.6
- minor cosmetics
Version 0.66.5
- Latex report now with images
Version 0.66.4
- Pypi version mixup
Version 0.66.3
- made path to PDF output relative to experiment root
Version 0.66.2
- enabled data paths with quotes
- enabled missing category labels
- used tqdm for progress display
Version 0.66.1
- start on the latex report framework
Version 0.66.0
- added speechbrain speakerID embeddings
Version 0.65.9
- added a filter that ensures that the labels have the same size as the features
Version 0.65.8
- changed default behaviour of resampler to "keep original files"
Version 0.65.7
- more databases and force wav while resampling
Version 0.65.6
- minor catch for seaborn in plots
Version 0.65.5
- added fill_na in plot effect size
Version 0.65.4
- added datasets to distribution
- changes in wav2vec2
Version 0.65.3
- various bugfixes
Version 0.65.2
- fixed bug in dataset.csv that prevented correct paths for relative files
- fixed bug in export module concerning new file directory
Version 0.65.1
- small enhancements with transformer features
Version 0.65.0
- introduced export module
Version 0.64.4
- added num_speakers for reloaded data
- re-formatted all with black
Version 0.64.3
- added number of speakers shown after data load
Version 0.64.2
- added init.py for submodules
Version 0.64.1
- fix error on csv
Version 0.64.0
- added bin_reals
- added statistics for effect size and correlation to plots
Version 0.63.4
- fixed bug in split selection
Version 0.63.3
- Introduced data.audio_path
Version 0.63.2
- re-introduced min and max_length for silero segmentation
Version 0.63.1
- fixed bug in resample
Version 0.63.0
- added wavlm model
- added error on filename for models
Version 0.62.1
- added min and max_length for silero segmentation
Version 0.62.0
- fixed segment silero bug
- added all Wav2vec2 models
- added resampler module
- added error on file for embeddings
Version 0.61.0
- added HUBERT embeddings
Version 0.60.0
- some bugfixes
- new package structure
- fixed wav2vec2 bugs
- removed "cross_data" strategy
Version 0.59.1
- bugfix, after fresh install, it seems some libraries have changed
- added no_warnings
- changed print() to util.debug()
- added progress to opensmile extract
Version 0.59.0
- introduced SQUIM features
- added SDR predict
- added STOI predict
Version 0.58.0
- added dominance predict
- added MOS predict
- added PESQ predict
Version 0.57.0
- renamed autopredict predict
- added arousal autopredict
- added valence autopredict
Version 0.56.0
- added autopredict module
- added snr as feature extractor
- added gender autopredict
- added age autopredict
- added snr autopredict
Version 0.55.1
- changed error message in plot class
Version 0.55.0
- added segmentation module
Version 0.54.0
- added audeering public age and gender model embeddings and age and gender predictions
Version 0.53.0
- added file checks: size in bytes and voice activity detection with silero
Version 0.52.1
- bugfix: min/max duration_of_sample was not working
Version 0.52.0
- added flexible value distribution plots
Version 0.51.0
- added datafilter
Version 0.50.1
- added caller information for debug and error messages in Util
Version 0.50.0
- removed loso and added pre-selected logo (leave-one-group-out), aka folds
Version 0.49.1
- bugfix: samples selection for augmentation didn't work
Version 0.49.0
- added random-splicing
Version 0.48.1
- bugfix: database object was not loaded when dataframe was reused
Version 0.48.0
- enabled specific feature selection for praat and opensmile features
Version 0.47.1
- enabled feature storage format csv for opensmile features
Version 0.47.0
- added praat speech rate features
Version 0.46.0
- added warnings for non-existent parameters
- added sample selection for scatter plotting
Version 0.45.4
- added version attribute to setup.cfg
Version 0.45.4
- added version attribute
Version 0.44.1
- bugfixing: feature importance: https://github.com/felixbur/nkululeko/issues/23
- bugfixing: loading csv database with filewise index https://github.com/felixbur/nkululeko/issues/24
Version 0.45.2
- bugfix: sample_selection in EXPL was wrongly required
Version 0.45.2
- added sample_selection for sample distribution plots
Version 0.45.1
- fixed dataframe.append bug
Version 0.45.0
- added auddim as features
- added FEATS store_format
- added device use to feat_audmodel
Version 0.44.1
- bugfixes
Version 0.44.0
- added scatter functions: tsne, pca, umap
Version 0.43.7
- added clap features
Version 0.43.6
- small bugs
Version 0.43.5
- because of difficulties with numba and audiomentations, audiomentations is now imported only when augmenting
Version 0.43.4
- added error when experiment type and predictor don't match
Version 0.43.3
- fixed further bugs and added augmentation to the test runs
Version 0.43.2
- fixed a bug when running continuous variable as classification problem
Version 0.43.1
- fixed test_runs
Version 0.43.0
- added augmentation module based on audiomentations
Version 0.42.0
- age labels should now be detected in databases
Version 0.41.0
- added feature tree plot
Version 0.40.1
- fixed a bug: additional test database was not label encoded
Version 0.40.0
- added EXPL section and first functionality
- added test module (for test databases)
Version 0.39.0
- added feature distribution plots
- added plot format
Version 0.38.3
- added demo mode with list argument
Version 0.38.2
- fixed a bug concerned with "no_reuse" evaluation
Version 0.38.1
- demo mode with file argument
Version 0.38.0
- fixed demo mode
Version 0.37.2
- mainly replaced pd.append with pd.concat
Version 0.37.1
- fixed bug preventing praat feature extraction to work
Version 0.37.0
- fixed bug: csv import not detecting multiindex
Version 0.36.3
- published as a pypi module
Version 0.36.0
- added entry nkululeko.py script
Version 0.35.0
- fixed bug that prevented scaling (normalization)
Version 0.34.2
- smaller bug fixed concerning the loss_string
Version 0.34.1
- smaller bug fixes and tried Soft_f1 loss
Version 0.34.0
- smaller bug fixes and debug outputs
Version 0.33.0
- added GMM as a model type
Version 0.32.0
- added audmodel embeddings as features
Version 0.31.0
- added models: tree and tree_reg
Version 0.30.0
- added models: bayes, knn and knn_reg
Version 0.29.2
- fixed hello world example
Version 0.29.1
- bug fix for 0.29
Version 0.29.0
- added a new FeatureExtractor class to import external data
Version 0.28.2
- removed some Pandas warnings
- added no_reuse function to database.load()
Version 0.28.1
- database.value_counts now shows only the data that is actually used
Version 0.28.0
- made "label_data" configuration automatic and added "label_result"
Version 0.27.0
- added "label_data" configuration to label data with trained model (so now there can be train, dev and test set)
Version 0.26.1
- Fixed some bugs caused by the multitude of feature sets
- Added possibility to distinguish between absolute or relative paths in csv datasets
Version 0.26.0
- added the rename_speakers functionality to prevent identical speaker names in datasets
Version 0.25.1
- fixed a bug where no features were chosen if none were explicitly selected
Version 0.25.0
- made selectable features universal for feature sets
Version 0.24.0
- added multiple feature sets (will simply be concatenated)
Version 0.23.0
- added selectable features for Praat interface
Version 0.22.0
- added David R. Feinberg's Praat features, praise also to parselmouth
Version 0.21.0
- Revoked 0.20.0
- Added support for only_test = True, to enable later testing of trained models with new test data
Version 0.20.0
- implemented reuse of trained and saved models
Version 0.19.0
- added "max_duration_of_sample" for datasets
Version 0.18.6
- added support for learning and dropout rate as argument
Version 0.18.5
- added support for epoch number as argument
Version 0.18.4
- added support for ANN layers as arguments
Version 0.18.3
- added reuse of test and train file sets
- added parameter to scale continuous target values: target_divide_by
Version 0.18.2
- added preference of local dataset specs to global ones
Version 0.18.1
- added regression value display for confusion matrices
Version 0.18.0
- added leave one speaker group out
Version 0.17.2
- fixed scaler, added robust
Version 0.17.0
- Added minimum duration for test samples
Version 0.16.4
- Added possibility to combine predictions per speaker (with mean or mode function)
Version 0.16.3
- Added minimal sample length for databases
Version 0.16.2
- Added k-fold-cross-validation for linear classifiers
Version 0.16.1
- Added leave-one-speaker-out for linear classifiers
Version 0.16.0
- Added random sample splits