Python audio signal processing library for musical tempo detection
Tempo-CNN
Tempo-CNN is a simple CNN-based framework for estimating temporal properties of music tracks. It features trained models from several publications [1] [2] [3] [4].
First and foremost, Tempo-CNN is a tempo estimator. To determine the global tempo of an audio file, simply run the script
tempo -i my_audio.wav
To create a local tempo “tempogram”, run
tempogram my_audio.wav
For a complete list of options, run either script with the parameter --help.
For programmatic use via the Python API, please see the Programmatic Usage section below.
Installation
In a clean Python 3.6 or 3.7 environment, simply run:
pip install tempocnn
If you would rather install from source, clone this repo and run setup.py install using Python 3.6 or 3.7:
git clone https://github.com/hendriks73/tempo-cnn.git
cd tempo-cnn
python setup.py install
Models and Formats
You may specify other models and output formats (MIREX, JAMS) via command line parameters.
For example, to produce JAMS output with the model originally used in the ISMIR 2018 paper [1], run
tempo -m ismir2018 --jams -i my_audio.wav
For MIREX-style output, add the --mirex parameter.
DeepTemp Models
To use one of the DeepTemp models from [3] (see also repo directional_cnns), run
tempo -m deeptemp --jams -i my_audio.wav
or,
tempo -m deeptemp_k24 --jams -i my_audio.wav
if you want to use a higher-capacity model (several k-values are supported). The deepsquare and shallowtemp models may also be used.
Note that some models may be downloaded (and cached) at execution time.
Mazurka Models
To use DT-Maz models from [4], run
tempo -m mazurka -i my_audio.wav
This defaults to the model named dt_maz_v_fold0. You may choose another fold [0-4] or another split [v|m]. So to use fold 3 from the M-split, use
tempo -m dt_maz_m_fold3 -i my_audio.wav
Note that Mazurka models may be used to estimate a global tempo, but were actually trained to create tempograms for Chopin Mazurkas [4].
While it’s cumbersome to list the split definitions for the Version folds, the Mazurka folds are easily defined:
fold0 was tested on Chopin_Op068No3 and validated on Chopin_Op017No4
fold1 was tested on Chopin_Op017No4 and validated on Chopin_Op024No2
fold2 was tested on Chopin_Op024No2 and validated on Chopin_Op030No2
fold3 was tested on Chopin_Op030No2 and validated on Chopin_Op063No3
fold4 was tested on Chopin_Op063No3 and validated on Chopin_Op068No3
The networks were trained on recordings of the three remaining Mazurkas. In essence, this means: do not estimate the local tempo for Chopin_Op024No2 using dt_maz_m_fold0, because Chopin_Op024No2 was used during training.
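If you select folds programmatically, it can help to encode the list above so that you never run a model on a piece it was trained on. The following is only a convenience sketch; the dictionary and helper function are illustrative and not part of the tempocnn API.

# Mazurka (M-split) folds and their test/validation pieces, transcribed from the
# list above. Anything not listed for a fold was part of that fold's training data.
MAZ_M_FOLDS = {
    'dt_maz_m_fold0': {'test': 'Chopin_Op068No3', 'validation': 'Chopin_Op017No4'},
    'dt_maz_m_fold1': {'test': 'Chopin_Op017No4', 'validation': 'Chopin_Op024No2'},
    'dt_maz_m_fold2': {'test': 'Chopin_Op024No2', 'validation': 'Chopin_Op030No2'},
    'dt_maz_m_fold3': {'test': 'Chopin_Op030No2', 'validation': 'Chopin_Op063No3'},
    'dt_maz_m_fold4': {'test': 'Chopin_Op063No3', 'validation': 'Chopin_Op068No3'},
}

def folds_not_trained_on(piece):
    # return the fold names whose training data did not include the given piece
    return [fold for fold, split in MAZ_M_FOLDS.items()
            if piece in (split['test'], split['validation'])]

For example, folds_not_trained_on('Chopin_Op024No2') returns ['dt_maz_m_fold1', 'dt_maz_m_fold2'], the only listed folds that did not see this piece during training.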
Batch Processing
For batch processing, you may want to run tempo like this:
find /your_audio_dir/ -name '*.wav' -print0 | xargs -0 tempo -d /output_dir/ -i
This will recursively search for all .wav files in /your_audio_dir/, analyze them, and write the results to individual files in /output_dir/. Because the model is only loaded once, this is much faster than starting the program separately for each file.
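If you prefer to stay in Python, the same load-the-model-once benefit applies to the API described under Programmatic Usage below. The following sketch is only an illustration: the directory paths and the .bpm.txt output convention are made up, while TempoClassifier, read_features, and estimate_tempo are the calls shown in the Programmatic Usage section.

from pathlib import Path

from tempocnn.classifier import TempoClassifier
from tempocnn.feature import read_features

# placeholder directories, adjust to your setup
audio_dir = Path('/your_audio_dir/')
output_dir = Path('/output_dir/')
output_dir.mkdir(parents=True, exist_ok=True)

# load the model once and re-use it for every file
classifier = TempoClassifier('cnn')

for wav_file in sorted(audio_dir.rglob('*.wav')):
    features = read_features(str(wav_file))
    tempo = classifier.estimate_tempo(features, interpolate=False)
    # write one plain-text result file per input file (illustrative format)
    (output_dir / f'{wav_file.stem}.bpm.txt').write_text(f'{tempo}\n')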
Interpolation
To increase accuracy beyond integer precision, you may want to enable quadratic interpolation by setting the --interpolate flag. Obviously, this only makes sense for tracks with a very stable tempo:
tempo -m ismir2018 --interpolate -i my_audio.wav
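The same interpolation is also available programmatically via the interpolate argument of estimate_tempo (see Programmatic Usage below). Here is a minimal sketch, using the 'cnn' model name from the API example further down and a placeholder file name:

from tempocnn.classifier import TempoClassifier
from tempocnn.feature import read_features

classifier = TempoClassifier('cnn')
features = read_features('my_audio.wav')

# quadratic interpolation refines the estimate beyond integer BPM values
tempo = classifier.estimate_tempo(features, interpolate=True)
print(f"Estimated global tempo: {tempo}")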
Tempogram
Instead of estimating a global tempo, Tempo-CNN can also estimate local tempi in the form of a tempogram. This can be useful for identifying tempo drift.
To create such a tempogram, run
tempogram -p my_audio.wav
As output, tempogram will create a .png file. Additional options to select different models and output formats are available.
You may use the --csv option to export local tempo estimates in a parseable format and the --hop-length option to change temporal resolution. The parameters --sharpen and --norm-frame let you post-process the image.
Greek Folk
Tempo-CNN provides experimental support for temporal property estimation of Greek folk music [2]. The corresponding models are named fma2018 (for tempo) and fma2018-meter (for meter). To estimate the meter’s numerator, run
meter -m fma2018-meter -i my_audio.wav
Programmatic Usage
After installation, you may use the package programmatically.
Example for global tempo estimation:
from tempocnn.classifier import TempoClassifier
from tempocnn.feature import read_features
model_name = 'cnn'
input_file = 'some_audio_file.mp3'
# initialize the model (may be re-used for multiple files)
classifier = TempoClassifier(model_name)
# read the file's features
features = read_features(input_file)
# estimate the global tempo
tempo = classifier.estimate_tempo(features, interpolate=False)
print(f"Estimated global tempo: {tempo}")
Example for local tempo estimation:
import numpy as np

from tempocnn.classifier import TempoClassifier
from tempocnn.feature import read_features
model_name = 'cnn'
input_file = 'some_audio_file.mp3'
# initialize the model (may be re-used for multiple files)
classifier = TempoClassifier(model_name)
# read the file's features, specify hop_length for temporal resolution
features = read_features(input_file, frames=256, hop_length=32)
# estimate local tempi, this returns tempo classes, i.e., a distribution
local_tempo_classes = classifier.estimate(features)
# find argmax per frame and convert class index to BPM value
max_predictions = np.argmax(local_tempo_classes, axis=1)
local_tempi = classifier.to_bpm(max_predictions)
print(f"Estimated local tempo classes: {local_tempi}")
License
Source code and models can be licensed under the GNU AFFERO GENERAL PUBLIC LICENSE v3. For details, please see the LICENSE file.
Citation
If you use Tempo-CNN in your work, please consider citing it.
Original publication:
@inproceedings{SchreiberM18_TempoCNN_ISMIR,
Title = {A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network},
Author = {Schreiber, Hendrik and M{\"u}ller, Meinard},
Booktitle = {Proceedings of the 19th International Society for Music Information Retrieval Conference ({ISMIR})},
Pages = {98--105},
Month = {9},
Year = {2018},
Address = {Paris, France},
doi = {10.5281/zenodo.1492353},
url = {https://doi.org/10.5281/zenodo.1492353}
}
ShallowTemp, DeepTemp, and DeepSquare models:
@inproceedings{SchreiberM19_CNNKeyTempo_SMC,
Title = {Musical Tempo and Key Estimation using Convolutional Neural Networks with Directional Filters},
Author = {Hendrik Schreiber and Meinard M{\"u}ller},
Booktitle = {Proceedings of the Sound and Music Computing Conference ({SMC})},
Pages = {47--54},
Year = {2019},
Address = {M{\'a}laga, Spain}
}
Mazurka models:
@inproceedings{SchreiberZM20_LocalTempo_ISMIR,
Title = {Modeling and Estimating Local Tempo: A Case Study on Chopin's Mazurkas},
Author = {Hendrik Schreiber and Frank Zalkow and Meinard M{\"u}ller},
Booktitle = {Proceedings of the 21st International Society for Music Information Retrieval Conference ({ISMIR})},
Pages = {773--779},
Year = {2020},
Address = {Montreal, QC, Canada}
}
References
[1] Hendrik Schreiber, Meinard Müller: A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, 2018.
[2] Hendrik Schreiber: Technical Report: Tempo and Meter Estimation for Greek Folk Music Using Convolutional Neural Networks and Transfer Learning. 8th International Workshop on Folk Music Analysis (FMA), Thessaloniki, Greece, 2018.
[3] Hendrik Schreiber, Meinard Müller: Musical Tempo and Key Estimation using Convolutional Neural Networks with Directional Filters. Proceedings of the Sound and Music Computing Conference (SMC), Málaga, Spain, 2019.
[4] Hendrik Schreiber, Frank Zalkow, Meinard Müller: Modeling and Estimating Local Tempo: A Case Study on Chopin's Mazurkas. Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR), Montreal, QC, Canada, 2020.
Changes
- 0.0.6:
Require h5py<3.0.0, to avoid model loading issues.
- 0.0.5:
Moved to TensorFlow 1.15.4.
Consolidated version info.
Consolidated requirements.
Switched to pytest.
Officially support Python 3.7.
Enabled GitHub actions for packaging and testing.
Added Pypi workflow.
Cache models locally.
Load models from GitHub.
Turned off TensorFlow debug logging.
Migrated scripts to entry points.
Removed charset encoding comments.
- 0.0.4:
Added support for DeepTemp, DeepSquare, and ShallowTemp models.
Added support for Mazurka models.
Added support for exporting data from tempograms.
Added support for framewise normalization in tempograms.
Moved to TensorFlow 1.15.2.
Print number of model parameters.
- 0.0.3:
Added flag --interpolate for tempo to increase accuracy.
Migrated models to TensorFlow 1.10.1.
- 0.0.2:
Added -d option for improved batch processing (tempo).
Improved JAMS output.
Moved to librosa 0.6.2.
Continue processing the batch, even when encountering an error.
- 0.0.1:
Initial version.