
Open source, scalable acoustic data analysis for ecology and conservation

Reason this release was yanked: installation bugs

Project description

OpenSoundscape


OpenSoundscape (OPSO) is a free and open source Python utility library for analyzing bioacoustic data.

OpenSoundscape includes utilities that can be strung together to create data analysis pipelines, including functions to:

  • load and manipulate audio files
  • create and manipulate spectrograms
  • train deep learning models to recognize sounds
  • run pre-trained CNNs to detect vocalizations
  • tune pre-trained CNNs to custom classification tasks
  • detect periodic vocalizations with RIBBIT
  • load and manipulate Raven annotations
  • estimate the location of sound sources from synchronized recordings

OpenSoundscape's documentation can be found on OpenSoundscape.org.

Show me the code!

For examples of how to use OpenSoundscape, see the Quick Start Guide below.

For full API documentation and tutorials on how to use OpenSoundscape to work with audio and spectrograms, train machine learning models, apply trained machine learning models to acoustic data, and detect periodic vocalizations using RIBBIT, see the documentation.

Contact & Citation

OpenSoundscape is developed and maintained by the Kitzes Lab at the University of Pittsburgh. It is currently in active development. If you find a bug, please submit an issue on the GitHub repository. If you have another question about OpenSoundscape, please use the [OpenSoundscape Discussions board](https://github.com/kitzeslab/opensoundscape/discussions) or email Sam Lapp (sam.lapp at pitt.edu).

Suggested citation:

Lapp, Sam; Rhinehart, Tessa; Freeland-Haynes, Louis; 
Khilnani, Jatin; Syunkova, Alexandra; Kitzes, Justin. 
“OpenSoundscape: An Open-Source Bioacoustics Analysis Package for Python.” 
Methods in Ecology and Evolution 2023. https://doi.org/10.1111/2041-210X.14196.

Quick Start Guide

A guide to the most commonly used features of OpenSoundscape.

Installation

Details about installation are available on the OpenSoundscape documentation at OpenSoundscape.org. FAQs:

How do I install OpenSoundscape?

  • Most users should install OpenSoundscape via pip, preferably within a virtual environment: pip install opensoundscape==0.13.0.
  • To use OpenSoundscape in Jupyter Notebooks (e.g. for tutorials), follow the installation instructions for your operating system, then follow the "Jupyter" instructions.
  • Contributors and advanced users can also use Poetry to install OpenSoundscape using the "Contributor" instructions.

Will OpenSoundscape work on my machine?

  • OpenSoundscape can be installed on Windows, Mac, and Linux machines.
  • For Windows users, we strongly recommend using WSL2, which provides a much smoother development experience.
  • We support Python 3.10, 3.11, 3.12, and 3.13 (but current GitHub runners only test on Python 3.13)
  • Most computer cluster users should follow the Linux installation instructions
  • For older Macs (Intel chip), use this workaround, since newer PyTorch versions are not found by pip (replace NAME with the desired name of your environment):
conda create -n NAME python=3.11
conda activate NAME
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 -c conda-forge
pip install opensoundscape==0.13.0

Use Audio and Spectrogram classes to inspect audio data

from opensoundscape import Audio, Spectrogram

# load an audio file and trim out a 5-second clip
my_audio = Audio.from_file("/path/to/audio.wav")
clip_5s = my_audio.trim(0, 5)

# create a spectrogram and plot it
my_spec = Spectrogram.from_audio(clip_5s)
my_spec.plot()
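Conceptually, a spectrogram is a short-time Fourier transform of the waveform: the audio is cut into consecutive windows and each window is FFT-ed into a column of frequency magnitudes. A rough sketch of that idea with plain NumPy (not OpenSoundscape's actual implementation, which also handles window functions, overlap, and decibel scaling):

```python
import numpy as np

def simple_spectrogram(samples, window_size=512):
    """Compute a magnitude spectrogram by FFT-ing consecutive windows."""
    n_windows = len(samples) // window_size
    columns = []
    for i in range(n_windows):
        window = samples[i * window_size : (i + 1) * window_size]
        # magnitude of the positive-frequency FFT bins
        columns.append(np.abs(np.fft.rfft(window)))
    # shape: (frequency bins, time windows)
    return np.array(columns).T

# a 1-second 440 Hz tone sampled at 22050 Hz
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = simple_spectrogram(tone)
print(spec.shape)  # (257, 43): 512//2+1 frequency bins, 22050//512 windows
```

The brightest row of the result sits near bin 440 / (22050 / 512) ≈ 10, the bin corresponding to 440 Hz.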

Load audio starting at a real-world timestamp

from datetime import datetime
import pytz

from opensoundscape import Audio

start_time = pytz.timezone('UTC').localize(datetime(2020, 4, 4, 10, 25))
audio_length = 5  # seconds
path = '/path/to/audiomoth_file.WAV'  # an AudioMoth recording

clip = Audio.from_file(path, start_timestamp=start_time, duration=audio_length)
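Under the hood, this kind of timestamped loading amounts to comparing the requested timestamp against the recording's own start time (parsed from the file's metadata) and seeking that many seconds into the file. A minimal sketch of the arithmetic using only the standard library (the file start time below is made up for illustration):

```python
from datetime import datetime, timezone

# suppose the recorder's metadata says the file began at 10:00:00 UTC
file_start = datetime(2020, 4, 4, 10, 0, 0, tzinfo=timezone.utc)

# we want audio beginning at 10:25:00 UTC
requested = datetime(2020, 4, 4, 10, 25, 0, tzinfo=timezone.utc)

# seek this many seconds into the file, then read the desired duration
offset_seconds = (requested - file_start).total_seconds()
print(offset_seconds)  # 1500.0
```

Using timezone-aware datetimes on both sides avoids silent off-by-hours errors when recordings span daylight-saving transitions.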

Load and use a model from the Bioacoustics Model Zoo

The Bioacoustics Model Zoo hosts models that are compatible with OpenSoundscape; the zoo itself can be installed as a Python package. To install it, use pip install --upgrade bioacoustics-model-zoo

To install additional dependencies for specific models, use patterns like

pip install --upgrade bioacoustics-model-zoo[hawkears]

Load up a model and apply it to your own audio right away:

import bioacoustics_model_zoo as bmz

# list available models
print(bmz.utils.list_models())

# generate class predictions and embedding vectors with HawkEars...
hawkears = bmz.HawkEars()
scores = hawkears.predict(files)
embeddings = hawkears.embed(files)

# ...or BirdNET...
# (you'll need ai-edge-litert in your environment; run `pip install bioacoustics-model-zoo[birdnet]`)
birdnet = bmz.BirdNET()
scores = birdnet.predict(files)
embeddings = birdnet.embed(files)

# ...or Perch2
# (`pip install bioacoustics-model-zoo[perch]` will install tensorflow and tensorflow-hub)
perch2 = bmz.Perch2()
scores = perch2.predict(files)
embeddings = perch2.embed(files)

See the tutorial notebooks for examples of training and fine-tuning models from the model zoo with your own annotations.

Load a pre-trained CNN from a local file, and make predictions on long audio files

from glob import glob
from opensoundscape import load_model

# get list of audio files
files = glob('./dir/*.WAV')

# generate predictions with a model
model = load_model('/path/to/saved.model')
scores = model.predict(files)

# scores is a dataframe with MultiIndex: file, start_time, end_time
# containing inference scores for each class and each audio window
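Because the scores come back as an ordinary pandas DataFrame, downstream filtering is plain pandas. For example, turning continuous scores into binary detections with a threshold (the toy table, class names, and 0.5 threshold below are illustrative, not OpenSoundscape defaults):

```python
import pandas as pd

# toy scores table mimicking the shape of model.predict() output:
# MultiIndex of (file, start_time, end_time), one column per class
index = pd.MultiIndex.from_tuples(
    [("a.WAV", 0.0, 5.0), ("a.WAV", 5.0, 10.0), ("b.WAV", 0.0, 5.0)],
    names=["file", "start_time", "end_time"],
)
scores = pd.DataFrame(
    {"IBWO": [0.9, 0.2, 0.7], "BLJA": [0.1, 0.8, 0.3]}, index=index
)

# binarize with a score threshold, then keep clips with at least one detection
detections = scores >= 0.5
hits = detections[detections.any(axis=1)]
print(len(hits))  # 3 clips have at least one class above threshold
```

The MultiIndex keeps the file path and clip boundaries attached to every score, so detections can be traced straight back to a time window in a specific recording.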

Train a CNN using audio files and Raven annotations

from sklearn.model_selection import train_test_split
from opensoundscape import BoxedAnnotations, CNN

# assume we have a list of Raven annotation files and corresponding audio files
# load the annotations into OpenSoundscape
all_annotations = BoxedAnnotations.from_raven_files(raven_file_paths, audio_file_paths)

# pick classes to train the model on; these should occur in the annotated data
class_list = ['IBWO', 'BLJA']

# create labels for fixed-duration (2 second) clips
labels = all_annotations.clip_labels(
    clip_duration=2,
    clip_overlap=0,
    min_label_overlap=0.25,
    class_subset=class_list,
)

# split the labels into training and validation sets
train_df, validation_df = train_test_split(labels, test_size=0.3)

# create a CNN and train on the labeled data
model = CNN(architecture='resnet18', sample_duration=2, classes=class_list, sample_rate=32000)

# train the model to recognize the classes of interest in audio data
model.train(train_df, validation_df, steps=500, num_workers=8, batch_size=256)
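The min_label_overlap parameter above controls how much of an annotation box (in seconds) must fall inside a clip for that clip to be labeled positive. The bookkeeping behind that idea can be sketched with plain Python (a simplified, hypothetical version that ignores frequency bounds and clip overlap):

```python
def clips_with_label(annotation_start, annotation_end, total_duration,
                     clip_duration=2.0, min_label_overlap=0.25):
    """Return (start, end) of each fixed-duration clip the annotation labels."""
    labeled = []
    clip_start = 0.0
    while clip_start + clip_duration <= total_duration:
        clip_end = clip_start + clip_duration
        # overlap between the annotation box and this clip, in seconds
        overlap = min(annotation_end, clip_end) - max(annotation_start, clip_start)
        if overlap >= min_label_overlap:
            labeled.append((clip_start, clip_end))
        clip_start += clip_duration
    return labeled

# an annotation from 1.5 s to 2.6 s in a 6 s file labels the first two clips
print(clips_with_label(1.5, 2.6, 6.0))  # [(0.0, 2.0), (2.0, 4.0)]
```

Raising min_label_overlap trades recall for label precision: brief edge overlaps stop generating positive labels, at the cost of dropping genuinely short vocalizations near clip boundaries.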

Train a custom classifier on BirdNET or Perch embeddings

Make sure you've installed the model zoo in your Python environment:

pip install bioacoustics-model-zoo==0.12.0

import bioacoustics_model_zoo as bmz

# load a model from the model zoo
model = bmz.BirdNET()  # or bmz.Perch()

# define classes for your custom classifier
model.change_classes(train_df.columns)

# fit the trainable PyTorch classifier on your labels
model.train(train_df, val_df, num_augmentation_variants=4, batch_size=64)

# run inference using your custom classifier on audio data
model.predict(audio_files)

# save and load customized models
model.save(save_path)
reloaded_model = bmz.BirdNET.load(save_path)
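The idea behind this workflow is that the frozen BirdNET or Perch embeddings act as fixed features, and only a shallow classifier on top of them is trained. That concept can be illustrated without the model zoo at all; here is a toy nearest-centroid sketch with NumPy (real embeddings are on the order of a thousand dimensions; these are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# pretend embeddings: 20 clips of class 0, 20 of class 1, 8 dims each
emb0 = rng.normal(loc=0.0, scale=0.5, size=(20, 8))
emb1 = rng.normal(loc=2.0, scale=0.5, size=(20, 8))

# "train" a nearest-centroid classifier: one mean vector per class
centroids = np.stack([emb0.mean(axis=0), emb1.mean(axis=0)])

def classify(embedding):
    """Assign the class whose centroid is closest in Euclidean distance."""
    distances = np.linalg.norm(centroids - embedding, axis=1)
    return int(np.argmin(distances))

print(classify(np.full(8, 1.9)))  # 1
print(classify(np.zeros(8)))      # 0
```

Because the expensive feature extractor stays frozen, this style of transfer learning needs far fewer labeled clips than training a CNN from scratch.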

