
Open source, scalable acoustic classification for ecology and conservation

Project description


OpenSoundscape

OpenSoundscape is a utility library for analyzing bioacoustic data. It consists of Python modules for tasks such as preprocessing audio data, training machine learning models to classify vocalizations, estimating the spatial location of sounds, identifying which species' sounds are present in acoustic data, and more.

These utilities can be strung together to create data analysis pipelines. OpenSoundscape is designed to be run on any scale of computer: laptop, desktop, or computing cluster.

OpenSoundscape is currently in active development. If you find a bug, please submit an issue. If you have other questions about OpenSoundscape, please email Sam Lapp (sam.lapp at pitt.edu) or Tessa Rhinehart (tessa.rhinehart at pitt.edu).

Suggested Citation

Lapp, Rhinehart, Freeland-Haynes, and Kitzes, 2022. "OpenSoundscape v0.6.2".

Installation

OpenSoundscape can be installed on Windows, Mac, and Linux machines. It has been tested on Python 3.7 and 3.8.

Most users should install OpenSoundscape via pip: pip install opensoundscape==0.6.2. Contributors and advanced users can also use Poetry to install OpenSoundscape.

For more detailed instructions on how to install OpenSoundscape and use it in Jupyter, see the documentation.
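
Once installed, you can confirm that the package imports and check the installed version. The short check below uses only the standard library (importlib.metadata requires Python 3.8+; on Python 3.7 the importlib_metadata backport provides the same function):

#minimal post-install check
import opensoundscape  #raises ImportError if the installation failed

#requires Python 3.8+; on 3.7 use the importlib_metadata backport instead
from importlib.metadata import version
print(version("opensoundscape"))  #expect "0.6.2"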

Features & Tutorials

OpenSoundscape includes functions to:

  • trim, split, and manipulate audio files
  • create and manipulate spectrograms
  • train CNNs on spectrograms with PyTorch
  • run pre-trained CNNs to detect vocalizations
  • detect periodic vocalizations with RIBBIT
  • load and manipulate Raven annotations

OpenSoundscape can also be used with our library of publicly available trained machine learning models for the detection of 500 common North American bird species.

For full API documentation and tutorials on how to use OpenSoundscape to work with audio and spectrograms, train machine learning models, apply trained machine learning models to acoustic data, and detect periodic vocalizations using RIBBIT, see the documentation.
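
Of the features listed above, RIBBIT does not appear in the Quick Start below, so here is a rough sketch of detecting a periodic call. The parameter names (signal_band, pulse_rate_range, clip_duration, noise_bands) and the return format follow the RIBBIT tutorial in the documentation but may differ between versions, so treat this as an illustration and check the docs for the exact signature; the band and pulse-rate values are made-up examples, not recommendations for any real species.

from opensoundscape.audio import Audio
from opensoundscape.spectrogram import Spectrogram
from opensoundscape.ribbit import ribbit

#load a recording and compute its spectrogram
audio = Audio.from_file("/path/to/audio.wav")
spec = Spectrogram.from_audio(audio)

#score each window for pulsing energy in the signal band
#(illustrative values: energy at 1-2 kHz pulsing 10-20 times per second)
scores = ribbit(
    spec,
    signal_band=[1000, 2000],    #frequency band of the target call, in Hz
    pulse_rate_range=[10, 20],   #expected pulses per second
    clip_duration=2.0,           #length of each scoring window, in seconds
    noise_bands=[[0, 200]],      #optional bands used to reject broadband noise
    plot=False,
)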

Quick Start

Using Audio and Spectrogram classes

from opensoundscape.audio import Audio
from opensoundscape.spectrogram import Spectrogram

#load an audio file and trim out a 5 second clip
my_audio = Audio.from_file("/path/to/audio.wav")
clip_5s = my_audio.trim(0, 5)

#create a spectrogram and plot it
my_spec = Spectrogram.from_audio(clip_5s)
my_spec.plot()
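
The trimmed clip and its spectrogram can be manipulated further before analysis. The follow-on sketch below assumes the Audio.save, Spectrogram.bandpass, and Spectrogram.to_image methods described in the audio and spectrogram tutorial; exact argument names may vary between versions.

#save the trimmed clip to a new wav file
clip_5s.save("/path/to/clip_5s.wav")

#restrict the spectrogram to a frequency band of interest (Hz)
bandpassed = my_spec.bandpass(1000, 5000)

#convert to a PIL image, e.g. as CNN input
img = bandpassed.to_image(shape=(224, 224))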

Using a pre-trained CNN to make predictions on long audio files

from opensoundscape.torch.models.cnn import load_model
from opensoundscape.preprocess.preprocessors import ClipLoadingSpectrogramPreprocessor
from opensoundscape.helpers import make_clip_df
from glob import glob

#get list of audio files
files = glob('./dir/*.WAV')

#generate clip df
clip_df = make_clip_df(files, clip_duration=5.0, clip_overlap=0)

#create dataset
dataset = ClipLoadingSpectrogramPreprocessor(clip_df)
#you may need to change preprocessing params to match model

#generate predictions with a model
model = load_model('/path/to/saved.model')
scores, _, _ = model.predict(dataset)

#scores is a dataframe with MultiIndex: file, start_time, end_time
#containing inference scores for each class and each audio window
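
The raw scores are unbounded model outputs. A common next step is to squash them to (0, 1) and threshold them into detections; the snippet below is generic pandas/numpy post-processing, not an OpenSoundscape function, and 'species_A' is a placeholder for one of your model's class names.

import numpy as np

#map raw scores to (0, 1) with a sigmoid, then threshold into 0/1 detections
#(illustrative post-processing, not part of OpenSoundscape's API)
probabilities = 1 / (1 + np.exp(-scores))
detections = (probabilities > 0.5).astype(int)

#list the (file, start_time, end_time) windows where 'species_A' was detected
print(detections[detections['species_A'] == 1].index.tolist())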

Training a CNN with labeled audio data

from opensoundscape.torch.models.cnn import PytorchModel
from opensoundscape.preprocess.preprocessors import CnnPreprocessor
import pandas as pd

#load a DataFrame of one-hot audio clip labels
#(index: file paths, columns: classes)
df = pd.read_csv('my_labels.csv', index_col=0)

#create a preprocessor that will create and augment samples for the CNN
train_dataset = CnnPreprocessor(df)

#create a CNN and train for 2 epochs
#for simplicity, using the training set as validation (not recommended!)
#the best model is automatically saved to `./best.model`
model = PytorchModel('resnet18', classes=df.columns)
model.train(
  train_dataset=train_dataset,
  valid_dataset=train_dataset,
  epochs=2
)
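
Because the best checkpoint is saved to `./best.model` during training, it can be reloaded for inference with the same load_model function used in the pre-trained-CNN example above (a short sketch; build the prediction dataset exactly as in that example):

from opensoundscape.torch.models.cnn import load_model

#reload the best checkpoint saved during training
best_model = load_model('./best.model')

#use it exactly as in the pre-trained-CNN example above, e.g.
#scores, _, _ = best_model.predict(dataset)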



Download files

Download the file for your platform.

Source Distribution

opensoundscape-0.6.2.tar.gz (201.1 kB)

Uploaded Source

Built Distribution

opensoundscape-0.6.2-py3-none-any.whl (109.2 kB)

Uploaded Python 3

File details

Details for the file opensoundscape-0.6.2.tar.gz.

File metadata

  • Download URL: opensoundscape-0.6.2.tar.gz
  • Upload date:
  • Size: 201.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.8.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.2 CPython/3.9.5

File hashes

Hashes for opensoundscape-0.6.2.tar.gz

  • SHA256: 922c82fe7b48979b04299e3e352d450b3988379759f0513e486b0dc8b882d105
  • MD5: dc3363a26f50725fed0559af3a3abcf6
  • BLAKE2b-256: 3ed09e6d7618dd4c77afb8fd452a868ba7a915477d4bb1ccddb430575b65281d


File details

Details for the file opensoundscape-0.6.2-py3-none-any.whl.

File metadata

  • Download URL: opensoundscape-0.6.2-py3-none-any.whl
  • Upload date:
  • Size: 109.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.8.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.2 CPython/3.9.5

File hashes

Hashes for opensoundscape-0.6.2-py3-none-any.whl

  • SHA256: 6b1b5ff9d15d9cec1f97e4ec73c82d6cbe09cc7d4316aefec4317e9b77ab2390
  • MD5: aa47e35d63c8204d6533dd5f15b48772
  • BLAKE2b-256: b36ee7a7f4b283e694e194ca6595df83524079e68853382aec685a0bd2f7ab59

