a neural network toolbox for animal vocalizations and bioacoustics
vak is a library for researchers studying animal vocalizations, such as birdsong, bat calls, and even human speech, although it may be useful to anyone working with bioacoustics data.
The library has two main goals:
- make it easier for researchers studying animal vocalizations to apply neural network algorithms to their data
- provide a common framework that will facilitate benchmarking neural network algorithms on tasks related to animal vocalizations
Currently the main use is automated annotation of vocalizations and other animal sounds. By annotation, we mean assigning onset times, offset times, and labels to the segments of a recording, as in annotated birdsong.
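Concretely, an annotation pairs each segment of a recording with its onset and offset times plus a label. A minimal sketch in plain Python (illustrative only, not vak's actual data structures):

```python
# A hypothetical annotation for one bout of birdsong:
# each segment has an onset time and offset time in seconds, and a label.
annotation = [
    {"onset_s": 0.15, "offset_s": 0.42, "label": "a"},
    {"onset_s": 0.55, "offset_s": 0.90, "label": "b"},
    {"onset_s": 1.02, "offset_s": 1.31, "label": "a"},
]

# Reading off the labels in order gives the sequence of syllable classes.
labels = "".join(seg["label"] for seg in annotation)
print(labels)  # -> "aba"
```

A trained model predicts this kind of segmentation and labeling for new, unannotated recordings.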
You give vak training data in the form of audio or spectrogram files with annotations, and then vak helps you train neural network models and use the trained models to predict annotations for new files.
We developed vak to benchmark a neural network model we call tweetynet. Please see the eLife article here: https://elifesciences.org/articles/63853
Installation
Short version:

with pip

```shell
$ pip install vak
```

with conda

on Mac and Linux:

```shell
$ conda install vak -c conda-forge
```

on Windows:

On Windows, you need to add an additional channel, pytorch. You can do this by repeating the -c option more than once.

```shell
$ conda install vak -c conda-forge -c pytorch
$ # ^ notice the additional channel!
```
For more details, please see: https://vak.readthedocs.io/en/latest/get_started/installation.html
We test vak on Ubuntu and macOS. We have run it on Windows and know of other users successfully running vak on that operating system, but installation on Windows may require some troubleshooting. A good place to start is by searching the issues.
Usage
Tutorial
Currently the easiest way to work with vak is through the command line. You run it with configuration files, using one of a handful of commands. For more details, please see the "autoannotate" tutorial here:
https://vak.readthedocs.io/en/latest/get_started/autoannotate.html
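Configuration files are written in TOML. As a rough sketch only (the section and option names below are illustrative; consult the tutorial for the real schema), a training config might look like:

```toml
# Hypothetical training config -- see the autoannotate tutorial
# for the actual sections and options your vak version expects.
[TRAIN]
models = "TweetyNet"            # which neural network model to train
root_results_dir = "./results"  # where checkpoints and logs are saved
num_epochs = 50
```

You then pass a config file to one of the commands, for example `vak prep`, `vak train`, or `vak predict`, e.g. `$ vak train train_config.toml`.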
How can I use my data with vak?
Please see the How-To Guides in the documentation here: https://vak.readthedocs.io/en/latest/howto/index.html
Support / Contributing
We handle support through the issue tracker on GitHub:
https://github.com/vocalpy/vak/issues
Please raise an issue there if you run into trouble.
That is also a great place to start if you are interested in contributing.
Citation
If you use vak for a publication, please cite its DOI:
License
The license is in the vak repository on GitHub.
About
For more on the history of vak, please see: https://vak.readthedocs.io/en/latest/reference/about.html
"Why this name, vak?"
It has only three letters, so it is quick to type, and it wasn't taken on PyPI yet. Also I guess it has something to do with speech. "vak" rhymes with "squawk" and "talk".
Does your library have any poems?
Contributors ✨
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!