vak

automated annotation of vocalizations for everybody

Usage

Training models to segment and label vocalizations

Currently the easiest way to work with vak is through the command line. You run it with config.ini files, using one of a handful of commands. Here's the help text that prints when you run $ vak -h (-h for help):

$ vak -h
usage: vak [-h] command configfile

vak command-line interface

positional arguments:
  command     Command to run, valid options are:
              ['prep', 'train', 'predict', 'finetune', 'learncurve']
              $ vak train ./configs/config_2018-12-17.ini
  configfile  name of config.ini file to use 
              $ vak train ./configs/config_2018-12-17.ini

optional arguments:
  -h, --help  show this help message and exit

As an example, you can prepare a dataset and train a model with a single config.ini file by running the prep command followed by the train command, passing the name of the config.ini file as an argument:

(vak-env)$ vak prep ./configs/config_bird0.ini
(vak-env)$ vak train ./configs/config_bird0.ini

You can then use vak to apply the trained model to other data with the predict command.

(vak-env)$ vak predict ./configs/config_bird0.ini

For more details on how training works, see experiments.md, and for more details on the config.ini files, see README_config.md.
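For orientation, a config.ini file is an ordinary INI file with one section per stage of the workflow. The section and option names below are illustrative assumptions only (apart from [PREDICT], which is named later in this document); README_config.md documents the actual names:

```ini
; illustrative sketch only -- see README_config.md for the real option names
[DATA]
data_dir = ./data/bird0      ; where audio or spectrogram files live
output_dir = ./results

[TRAIN]
num_epochs = 50
batch_size = 64

[PREDICT]
checkpoint_path = ./results/checkpoint
```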

Data and folder structures

To train models, you must supply training data in the form of audio files or spectrograms, and annotations for each spectrogram.

Spectrograms and labels

The package can generate spectrograms from .wav files or .cbin files. It can also accept spectrograms in the form of Matlab .mat files. The locations of these files are specified in the config.ini file as explained in experiments.md and README_config.md.
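A Matlab .mat spectrogram file is just a dictionary of named arrays, which you can inspect with scipy. The key names used here ('s', 'f', 't') are hypothetical; the keys expected for your files are specified in the config.ini file:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Build a toy spectrogram file. Key names are hypothetical:
# 's' = spectrogram matrix, 'f' = frequency axis, 't' = time axis.
spect = {
    "s": np.random.rand(257, 1000),   # frequency bins x time bins
    "f": np.linspace(0, 8000, 257),   # frequency axis, Hz
    "t": np.linspace(0, 10, 1000),    # time axis, seconds
}
savemat("example_spect.mat", spect)

# Reload and check the array round-trips with its shape intact.
loaded = loadmat("example_spect.mat")
print(loaded["s"].shape)  # (257, 1000)
```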

Preparing training files

It is possible to train on any manually annotated data, but the following guidelines are useful:

  • Use as many examples as possible - the results will simply be better. In particular, the network will not correctly label syllable types it did not encounter during training; it will most likely either assign the nearest known class or ignore the syllable.
  • Include examples of noise - this helps the model learn to ignore it.
  • Examples of syllables over noise are important - it is good practice to start with clean recordings; the model will not perform miracles, and is likely to fail if the audio is too corrupted or masked by noise. Still, training with examples of syllables sung over background cage noise will be beneficial.

Results of running the code

It is recommended to apply post-processing when extracting the actual syllable tags and onset and offset times from the estimates.
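As one example of such post-processing (a generic cleanup sketch, not part of this package), you can merge same-label segments separated by very short gaps and then drop segments too short to be real syllables:

```python
def clean_segments(onsets, offsets, labels, min_dur=0.01, min_gap=0.005):
    """Hypothetical post-processing of predicted segments.

    Merges consecutive segments with the same label when the silent gap
    between them is shorter than min_gap (seconds), then drops any segment
    shorter than min_dur (seconds). Returns a list of (onset, offset, label).
    """
    merged = []
    for on, off, lab in zip(onsets, offsets, labels):
        if merged and lab == merged[-1][2] and on - merged[-1][1] < min_gap:
            # gap is below threshold: extend the previous segment instead
            merged[-1] = (merged[-1][0], off, lab)
        else:
            merged.append((on, off, lab))
    # discard segments shorter than the minimum plausible syllable duration
    return [(on, off, lab) for on, off, lab in merged if off - on >= min_dur]

# The two 'a' segments (gap of 2 ms) merge; the 1 ms 'b' segment is dropped.
cleaned = clean_segments([0.0, 0.102, 0.3], [0.1, 0.2, 0.301], ["a", "a", "b"])
print(cleaned)  # [(0.0, 0.2, 'a')]
```

Thresholds like min_dur and min_gap should be tuned to the species and recording setup at hand.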

Predicting new labels

You can predict new labels by adding a [PREDICT] section to the config.ini file, and then running the command-line interface with the predict command, like so:

(vak-env)$ vak predict ./configs/config_bird0.ini

An example config.ini file with a [PREDICT] section is in the doc folder.

Citation

If you use vak for a publication, please cite its DOI.

License

BSD-3
