
musicnn_keras

Musicnn_keras is a tf.keras implementation of musicnn, originally written in "pure" TensorFlow. Musicnn_keras is aimed at making musicnn accessible via the popular tf.keras interface.

Pronounced as "musician", musicnn is a set of pre-trained musically motivated convolutional neural networks for music audio tagging. This repository also includes some pre-trained vgg-like baselines.

Check the documentation and our basic / advanced examples to understand how to use musicnn.

Do you have questions? Check the FAQs.

Installation

pip install musicnn_keras

Or clone the repository and install from source:

git clone https://github.com/Quint-e/musicnn_keras.git
cd musicnn_keras
python setup.py install

Dependencies: TensorFlow>=2.0, librosa>=0.7.0, and numpy>=1.14.5,<1.17.

Load pre-trained models

Loading a pre-trained model is done through the standard tf.keras API:

import tensorflow as tf
musicnn = tf.keras.models.load_model('./musicnn_keras/keras_checkpoints/MSD_musicnn.h5')

musicnn can then be used like any other keras model.
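Since the result of load_model is an ordinary tf.keras.Model, the usual save/load/predict workflow applies. A minimal sketch using a toy model as a stand-in for the downloaded checkpoint (the checkpoint's actual input shape and tag outputs differ):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a pre-trained checkpoint; the .h5 round trip below
# is the same one used for MSD_musicnn.h5.
toy = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation='sigmoid', input_shape=(4,)),
])
toy.save('toy_model.h5')

# Load exactly as shown above for the musicnn checkpoints...
reloaded = tf.keras.models.load_model('toy_model.h5')

# ...and use it like any other Keras model.
x = np.ones((1, 4), dtype='float32')
scores = reloaded.predict(x)
```

The same predict/fit/summary calls work on the real checkpoints once loaded.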

Note that if you are only interested in loading the pre-trained models in your code, you do not need to install the musicnn_keras package. tf.keras.models.load_model is sufficient.

Predict tags

From within Python, you can estimate the topN tags:

from musicnn_keras.tagger import top_tags
top_tags('./audio/joram-moments_of_clarity-08-solipsism-59-88.mp3', model='MTT_musicnn', topN=10)

['techno', 'electronic', 'synth', 'fast', 'beat', 'drums', 'no vocals', 'no vocal', 'dance', 'beats']

Let's try another song!

top_tags('./audio/TRWJAZW128F42760DD_test.mp3')

['guitar', 'piano', 'fast']

From the command-line, you can also print the topN tags on the screen:

python -m musicnn_keras.tagger file_name.ogg --print
python -m musicnn_keras.tagger file_name.au --model 'MSD_musicnn' --topN 3 --length 3 --overlap 1.5 --print

or save to a file:

python -m musicnn_keras.tagger file_name.wav --save out.tags
python -m musicnn_keras.tagger file_name.mp3 --model 'MTT_musicnn' --topN 10 --length 3 --overlap 1 --print --save out.tags
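The --length and --overlap flags control the sliding analysis window over the audio: a new window starts every length - overlap seconds. A hypothetical helper (not part of the package) illustrating that arithmetic:

```python
def window_starts(duration, length=3.0, overlap=1.5):
    """Start times (seconds) of consecutive analysis windows.

    A new window begins every `length - overlap` seconds; windows that
    would run past the end of the clip are dropped.
    """
    hop = length - overlap
    starts, t = [], 0.0
    while t + length <= duration:
        starts.append(round(t, 6))
        t += hop
    return starts
```

For a 10-second clip with the defaults above, windows start at 0, 1.5, 3, 4.5, and 6 seconds.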

Extract the Taggram

You can also compute the taggram using python (see our basic example for more details on how to depict it):

from musicnn_keras.extractor import extractor
taggram, tags = extractor('./audio/joram-moments_of_clarity-08-solipsism-59-88.mp3', model='MTT_musicnn')
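The taggram is a 2-D array of tag activations over time: one row per analysis frame, one column per tag. Clip-level tag scores can be obtained by averaging over time; a sketch with a dummy taggram (in practice, use the (taggram, tags) pair returned by extractor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy taggram: 10 analysis frames x 5 tags. Tag names are illustrative.
tags = ['techno', 'electronic', 'guitar', 'piano', 'fast']
taggram = rng.random((10, len(tags)))

# Clip-level likelihood per tag: mean activation over time.
tag_scores = taggram.mean(axis=0)

# Tags ranked most-likely first.
ranked = [tags[i] for i in np.argsort(tag_scores)[::-1]]
```

Plotting the taggram itself (time on one axis, tags on the other) is covered in the basic example.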

[Figure: taggram visualization, tag activations over time]

The above analyzed music clips are included in the ./audio/ folder of this repository.

musicnn_keras and musicnn

This repo mirrors the original musicnn repository, adapted to tf.keras; some of the code and examples here come from the original musicnn repo.

Download files

Download the file for your platform.

Source Distribution

musicnn_keras-0.1.0.tar.gz (7.2 kB)

Uploaded Source

Built Distribution

musicnn_keras-0.1.0-py3-none-any.whl (9.1 kB)

Uploaded Python 3
