
BirdSet: A multi-task benchmark and data pipeline for deep learning based avian bioacoustics


BirdSet Benchmark

Results

Results are reported per test set (PER, NES, UHH, HSN, NBP, POW, SSW, SNE) and overall, together with notes and code links, for the following works:

BirdSet: A Multi-Task Benchmark For Classification In Avian Bioacoustics

BIRB: A Generalization Benchmark for Information Retrieval in Bioacoustics

Get Started

Devcontainer

You can use the devcontainer configured as a git submodule:

git submodule update --init --recursive

Install dependencies

Either with conda and pip:

conda create -n birdset python=3.10
pip install -e .

Or with Poetry:

poetry install
poetry shell

Minimal Working Example

Log in to Huggingface

Our datasets are shared via HuggingFace Datasets in our HuggingFace BirdSet repository. HuggingFace is a central hub for sharing and using datasets and models, particularly useful for machine learning and data science projects. To access private datasets hosted on HuggingFace, you need to be authenticated. Here's how to log in:

  1. Install HuggingFace CLI: If you haven't already, you need to install the HuggingFace CLI (Command Line Interface). This tool enables you to interact with HuggingFace services directly from your terminal. You can install it using pip:

    pip install huggingface_hub
    
  2. Login via CLI: Once the HuggingFace CLI is installed, you can log in to your HuggingFace account directly from your terminal. This step is essential for accessing private datasets or contributing to the HuggingFace community. Use the following command:

    huggingface-cli login
    

    After executing this command, you'll be prompted to enter your HuggingFace credentials (User Access Token). Once authenticated, your credentials will be saved locally, allowing seamless access to HuggingFace resources.
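Alternatively, you can authenticate from Python using the huggingface_hub library installed above:

from huggingface_hub import login

# prompts for your User Access Token and caches it locally,
# equivalent to running `huggingface-cli login`
login()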

Prepare Data

from birdset.datamodule.base_datamodule import DatasetConfig
from birdset.datamodule.birdset_datamodule import BirdSetDataModule

# initiate the data module
dm = BirdSetDataModule(
    dataset=DatasetConfig(
        data_dir='../../data_birdset/HSN',
        dataset_name='HSN',
        hf_path='DBD-research-group/BirdSet',
        hf_name='HSN',
        n_classes=21,
        n_workers=3,
        val_split=0.2,
        task="multilabel",
        classlimit=500,
        eventlimit=5,
        sampling_rate=32000,
    ),
)

# prepare the data (download dataset, ...)
dm.prepare_data()

# setup the dataloaders
dm.setup(stage="fit")

# get the dataloaders
train_loader = dm.train_dataloader()
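As a quick sanity check, you can draw one batch from the loader; a minimal sketch (assuming the batch is a dictionary, whose exact keys depend on the configured transforms):

# fetch a single batch and print the shape (or type) of each entry
batch = next(iter(train_loader))
print({key: getattr(value, "shape", type(value)) for key, value in batch.items()})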

Prepare Model and Start Training

from lightning import Trainer
min_epochs = 1
max_epochs = 5
trainer = Trainer(min_epochs=min_epochs, max_epochs=max_epochs, accelerator="gpu", devices=1)

from birdset.modules.base_module import BaseModule
model = BaseModule(
    len_trainset=dm.len_trainset,
    task=dm.task,
    batch_size=dm.train_batch_size,
    num_epochs=max_epochs)

trainer.fit(model, dm)
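After training, the same datamodule can be evaluated with Lightning's test loop; a minimal sketch, assuming the datamodule also provides the test split:

# evaluate the trained model on the test split provided by the datamodule
trainer.test(model, datamodule=dm)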

Logging

Logs will be written to Weights&Biases by default.
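If you have not used Weights&Biases on this machine before, authenticate once with the wandb client; a minimal sketch (assumes you have a W&B account and API key):

import wandb

# prompts for your W&B API key and caches it locally;
# subsequent runs will log without asking again
wandb.login()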

Background noise

To enhance model performance, we mix in additional background noise downloaded from DCASE18. To download the files and convert them to the correct format, run the notebook 'download_background_noise.ipynb' in the 'notebooks' folder.

Run experiments

Our experiments are defined in the configs/experiment folder. To run an experiment, use the following command:

python birdset/main.py experiment=EXPERIMENT_NAME
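Because the configuration is managed by Hydra, individual values can also be overridden on the command line; for example (EXPERIMENT_NAME is a placeholder, and trainer.max_epochs assumes the trainer config exposes that key):

python birdset/main.py experiment=EXPERIMENT_NAME trainer.max_epochs=10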

Project structure

This repository is inspired by the Yet Another Lightning Hydra Template.

├── configs                     <- Hydra configuration files
│   ├── callbacks               <- Callbacks configs
│   ├── datamodule              <- Datamodule configs
│   ├── debug                   <- Debugging configs
│   ├── experiment              <- Experiment configs
│   ├── extras                  <- Extra utilities configs
│   ├── hydra                   <- Hydra settings configs
│   ├── logger                  <- Logger configs
│   ├── module                  <- Module configs
│   ├── paths                   <- Project paths configs
│   ├── trainer                 <- Trainer configs
│   ├── transformations         <- Transformations / augmentation configs
│   │
│   ├── main.yaml               <- Main config
│
├── data_birdset                <- Project data
├── dataset                     <- Code to build the BirdSet dataset
├── notebooks                   <- Jupyter notebooks.
│
├── birdset                     <- Source code
│   ├── augmentations           <- Augmentations
│   ├── callbacks               <- Additional callbacks
│   ├── datamodules             <- Lightning datamodules
│   ├── modules                 <- Lightning modules
│   ├── utils                   <- Utility scripts
│   │
│   ├── main.py                 <- Run experiments
│
├── .gitignore                  <- List of files ignored by git
├── pyproject.toml              <- Poetry project file
├── requirements.txt            <- File for installing python dependencies
├── requirements-dev.txt        <- File for installing python dev dependencies
├── setup.py                    <- File for installing project as a package
└── README.md

Data pipeline

Our datasets are shared via HuggingFace Datasets in our BirdSet repository. First log in to HuggingFace with:

huggingface-cli login

For a detailed guide to using the BirdSet data pipeline and its many configuration options, see our comprehensive BirdSet Data Pipeline Tutorial.

Datamodule

The datamodules are defined in birdset/datamodule and configurations are stored under configs/datamodule. base_datamodule is the main class that can be inherited for specific datasets. It is responsible for preparing the data in the function prepare_data and loading the data in the function setup. prepare_data downloads the dataset, applies preprocessing, creates validation splits and saves the data to disk. setup initiates the dataloaders and configures data transformations.

The following steps are performed in prepare_data:

  1. Data is downloaded from HuggingFace Datasets with _load_data
  2. Data is preprocessed with _preprocess_data
  3. Data is split into train, validation, and test sets with _create_splits
  4. The length of the dataset is saved for later access
  5. Data is saved to disk with _save_dataset_to_disk

The following steps are performed in setup:

  1. Data is loaded from disk with _get_dataset, in which the transforms are applied
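Since base_datamodule is meant to be inherited, a hypothetical subclass could override one of these hooks while reusing the rest of the pipeline; this is only a sketch, and the exact hook signature is an assumption here:

from birdset.datamodule.birdset_datamodule import BirdSetDataModule

class CustomBirdSetDataModule(BirdSetDataModule):
    """Hypothetical datamodule that only customizes preprocessing."""

    def _preprocess_data(self, dataset):
        # e.g. filter or relabel events here, then reuse the default
        # preprocessing of the parent class (signature assumed)
        return super()._preprocess_data(dataset)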

Transformations

Data transformations are operations applied to the data during training, such as augmentations. They are added to the HuggingFace dataset with set_transform.
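set_transform is the standard HuggingFace Datasets mechanism for applying a function lazily whenever rows are accessed; a minimal, generic sketch with a toy dataset and transform (not BirdSet's actual transform):

from datasets import Dataset

# toy dataset standing in for a BirdSet split
ds = Dataset.from_dict({"label": [0, 1, 0]})

def transform(batch):
    # runs at access time, e.g. whenever the dataloader requests rows;
    # BirdSet hooks its training-time augmentations in the same way
    batch["label_onehot"] = [[1, 0] if label == 0 else [0, 1] for label in batch["label"]]
    return batch

ds.set_transform(transform)
print(ds[0])  # the transform is applied on the fly, nothing is written to disk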
