
neural network that segments and labels birdsong and other animal vocalizations

Project description



What is tweetynet?

A neural network architecture (shown below) that automates annotation of birdsong and other vocalizations by segmenting spectrograms, and then labeling those segments.

[figure: neural network architecture]

This is an example of the kind of annotations that tweetynet learns to predict:

How is it used?



To install, run the following command at the command line:
pip install tweetynet

To train models and use them to predict annotation

To make it easier to train tweetynet models and use trained models to predict annotation on new datasets, we developed the vak library, which is installed automatically with tweetynet.

Please see the vak documentation for detailed installation instructions.

A link to a tutorial on using tweetynet with vak is below.
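The typical vak workflow runs from the command line. A minimal sketch of the three main steps; the TOML config filenames here are hypothetical, and the exact config contents are described in the vak documentation:

```shell
# Prepare the dataset described in a TOML config file
# (the filename train_config.toml is hypothetical).
vak prep train_config.toml

# Train a tweetynet model on the prepared dataset.
vak train train_config.toml

# Use the trained model to predict annotation for new audio.
vak predict predict_config.toml
```

Each step reads its options from the config file, so the same file can be version-controlled alongside your data to keep experiments reproducible.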

To reproduce results from article

In the directory ./article, we provide code to reproduce the results reported in the article
"TweetyNet: A neural network that enables high-throughput, automated annotation of birdsong" (in revision at eLife).

Please see the README in that directory for instructions on how to install and work with that code.

General use


For a tutorial on using tweetynet with vak, please see the vak documentation.


Training data

To train models, you must supply training data in the form of audio files or spectrogram files, and annotations. The package can generate spectrograms from .wav or .cbin audio files. It can also accept spectrograms in the form of Matlab .mat files or .npz files created by numpy. vak uses a separate library to parse annotations, crowsetta, which handles some common formats and can also be used to write custom parsers for other formats. Please see the crowsetta documentation for more detail:
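If you pre-compute spectrograms yourself, they can be saved as .npz files with numpy. A minimal sketch using scipy to generate a spectrogram from synthetic audio; the array key names "s", "f", and "t" are an assumption here, so check the vak documentation for the keys your version expects:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic one-second recording at 32 kHz, standing in for a real audio file.
rate = 32000
t = np.arange(rate) / rate
audio = np.sin(2 * np.pi * 1000 * t)

# f = frequency bins, tbins = time bins, s = spectrogram matrix.
f, tbins, s = spectrogram(audio, fs=rate, nperseg=512, noverlap=256)

# Save with array names matching what we assume are vak's default keys
# ("s", "f", "t"); verify against the vak docs before training.
np.savez("example.spect.npz", s=s, f=f, t=tbins)
```

The saved .npz file then sits alongside your annotations as one training example.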

Preparing training files

It is possible to train on any manually annotated data, but there are some useful guidelines:

  • Use as many examples as possible - More training data gives better results. In particular, the network cannot correctly label syllable classes it never encountered during training; it will most likely assign the nearest known label or ignore the syllable.
  • Include examples of noise - This will make the network much better at ignoring noise.
  • Include examples of syllables over background noise - It is good practice to start with clean recordings. The network will not perform miracles and is likely to fail if the audio is too corrupted or masked by noise. Still, training with examples of syllables over background cage noise will be beneficial.
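The first guideline can be checked mechanically before training: count how often each label occurs across your annotations and flag any that are rare. A minimal sketch using hypothetical hand-written label sequences; in practice you would extract these from your annotation files with crowsetta:

```python
from collections import Counter

# Hypothetical per-file label sequences, as you might extract with crowsetta.
annotations = [
    ["a", "b", "b", "c"],
    ["a", "c", "c", "d"],
]

# Count occurrences of each syllable label across the whole dataset.
counts = Counter(label for seq in annotations for label in seq)

# Flag labels with too few examples to learn from (threshold is arbitrary here).
rare = [label for label, n in counts.items() if n < 2]
print("label counts:", counts)
print("labels with few examples:", rare)
```

Labels that turn up rarely are the ones the network is most likely to mislabel or ignore, so annotating more examples of them first gives the best return.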

For more details, please see the vak documentation.


If you run into problems, please use the issue tracker or contact the authors at the email addresses given in the paper above.


If you use or adapt this code, please cite its DOI:


Released under BSD license.

Contributors ✨

Thanks goes to these wonderful people (emoji key):


David Nicholson — 💻 🐛 🔣 📖 🤔 💬 🔧 ⚠️ 📢

Zhehao Cheng — 💻 🐛 🔣 📖 🤔 💬 🔧 ⚠️ 📢


This project follows the all-contributors specification. Contributions of any kind welcome!


Download files

Source Distribution

tweetynet-0.7.0.tar.gz (21.2 MB)

Built Distribution

tweetynet-0.7.0-py3-none-any.whl (9.0 kB)
