
Python audio signal processing library


Madmom is an audio signal processing library written in Python with a strong focus on music information retrieval (MIR) tasks.

The library is internally used by the Department of Computational Perception, Johannes Kepler University, Linz, Austria, and the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria.

Possible acronyms are:

  • Madmom Analyzes Digitized Music Of Musicians

  • Mostly Audio / Dominantly Music Oriented Modules

It includes reference implementations for some music information retrieval algorithms; please see the References section.


Documentation of the package can be found online.


The package has two licenses, one for source code and one for model/data files.

Source code

Unless indicated otherwise, all source code files are published under the BSD license. For details, please see the LICENSE file.

Model and data files

Unless indicated otherwise, all model and data files are distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.

If you want to include any of these files (or a variation or modification thereof) or technology which utilises them in a commercial product, please contact Gerhard Widmer.


Please do not try to install from the .zip files provided by GitHub. Rather, install it from a package (if you just want to use it) or from source (if you plan to use it for development) by following the instructions below. Whichever variant you choose, please make sure that all prerequisites are installed.


To install the madmom package, you need either Python 2.7 or Python 3.3 (or newer) as well as the packages listed in the requirements.txt file.

If you need support for audio files other than .wav with a sample rate of 44.1 kHz and 16 bit depth, you need ffmpeg (avconv on Ubuntu Linux has some decoding bugs, so we advise against using it!).
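As a quick way to check whether a given file falls within this natively supported format (and thus whether ffmpeg would be needed for decoding), the header of a .wav file can be inspected with Python's standard `wave` module — a small sketch, not part of madmom itself:

```python
import wave

def needs_ffmpeg(path):
    """Return True if a .wav file is not 44.1 kHz / 16 bit, i.e. would
    need ffmpeg for decoding according to the prerequisites above."""
    with wave.open(path, 'rb') as w:
        # sample width is given in bytes, so 16 bit corresponds to 2
        return not (w.getframerate() == 44100 and w.getsampwidth() == 2)
```

Non-wav formats (mp3, flac, ...) cannot be opened by `wave` at all and always require ffmpeg.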

Please refer to the requirements.txt file for the minimum required versions and make sure that these modules are up to date; otherwise unexpected errors or incorrect computations can result!
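Such minimum-version requirements can also be verified programmatically. The following standard-library sketch is a generic illustration — the helper names and the simple numeric comparison (no pre-release handling) are our own, not part of madmom:

```python
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(installed, minimum):
    """Compare two dotted version strings numerically.
    Pre-release suffixes (e.g. '1.0rc1') are not handled."""
    to_tuple = lambda v: tuple(int(p) for p in v.split('.') if p.isdigit())
    return to_tuple(installed) >= to_tuple(minimum)

def check_requirements(requirements):
    """requirements: iterable of (package, minimum_version) pairs.
    Returns a list of human-readable problems; empty if all is well."""
    problems = []
    for pkg, minimum in requirements:
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            problems.append('%s is not installed' % pkg)
            continue
        if not meets_minimum(installed, minimum):
            problems.append('%s %s is older than %s' % (pkg, installed, minimum))
    return problems
```

Feeding this the pairs parsed from requirements.txt would flag outdated or missing modules before they cause the errors mentioned above.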

Install from package

The instructions given here should be used if you just want to install the package, e.g. to run the bundled programs or use some functionality for your own project. If you intend to change anything within the madmom package, please follow the steps in the next section.

The easiest way to install the package is via pip from the PyPI (Python Package Index):

pip install madmom

This includes the latest code and trained models and will install all dependencies automatically.

You might need higher privileges (use su or sudo) to install the package, model files and scripts globally. Alternatively you can install the package locally (i.e. only for you) by adding the --user argument:

pip install --user madmom

This will also install the executable programs to a common place (e.g. /usr/local/bin), which should be in your $PATH already. If you installed the package locally, the programs will be copied to a folder which might not be included in your $PATH (e.g. ~/Library/Python/2.7/bin on Mac OS X or ~/.local/bin on Ubuntu Linux; pip will tell you). In that case, the programs need to be called explicitly with their full path, or you can add their install path to your $PATH environment variable:

export PATH='path/to/scripts':$PATH

Install from source

If you plan to use the package as a developer, clone the Git repository:

git clone --recursive

Since the pre-trained model/data files are not included in this repository but are added as a Git submodule, you have to clone the repository recursively. Cloning with --recursive is equivalent to these steps:

git clone
cd madmom
git submodule update --init --remote

Then you can simply install the package in development mode:

python setup.py develop --user

To run the included tests:

python setup.py pytest

Upgrade of existing installations

To upgrade the package, please use the same mechanism (pip vs. source) as you did for installation. If you want to change from package to source, please uninstall the package first.

Upgrade a package

Simply upgrade the package via pip:

pip install --upgrade madmom [--user]

If some of the provided programs or models changed (please refer to the CHANGELOG) you should first uninstall the package and then reinstall:

pip uninstall madmom
pip install madmom [--user]

Upgrade from source

Simply pull the latest sources:

git pull

To update the models contained in the submodule:

git submodule update

If any of the .pyx or .pxd files changed, you have to recompile the modules with Cython:

python setup.py build_ext --inplace

Package structure

The package has a very simple structure, divided into the following folders:

/bin
  example programs (i.e. executable algorithms)
/docs
  package documentation
/madmom
  the actual Python package
/madmom/audio
  low level features (e.g. audio file handling, STFT)
/madmom/evaluation
  evaluation code
/madmom/features
  higher level features (e.g. onsets, beats)
/madmom/ml
  machine learning stuff (e.g. RNNs, HMMs)
/madmom/models
  pre-trained model/data files (see the License section)
/madmom/utils
  misc stuff (e.g. MIDI and general file handling)



Executable programs

The package includes executable programs in the /bin folder. If you installed the package, they were copied to a common place.

All scripts can be run in different modes: in single file mode to process a single audio file and write the output to STDOUT or the given output file:

DBNBeatTracker single [-o OUTFILE] INFILE

If multiple audio files should be processed, the scripts can also be run in batch mode to write the outputs to files with the given suffix:

DBNBeatTracker batch [-o OUTPUT_DIR] [-s OUTPUT_SUFFIX] FILES
If no output directory is given, the program writes the output files to the same location as the audio files.
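The mapping from input files to output files described above can be sketched in plain Python (an illustration of the described behaviour, not madmom's actual implementation; the default suffix shown is a made-up placeholder):

```python
from pathlib import Path

def output_path(infile, suffix='.beats.txt', output_dir=None):
    """Mirror the batch-mode behaviour: write next to the input file
    unless an output directory is given, appending the given suffix."""
    infile = Path(infile)
    directory = Path(output_dir) if output_dir else infile.parent
    return directory / (infile.stem + suffix)
```

For example, `/data/song.wav` would map to `/data/song.beats.txt`, or to `/out/song.beats.txt` when an output directory is given.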

Some programs can also be run in online mode, i.e. operate on live audio signals. This requires pyaudio to be installed:

DBNBeatTracker online [-o OUTFILE] [INFILE]

The pickle mode can be used to store the parameters used, so that experiments can be reproduced exactly.
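The idea behind pickle mode — persisting the exact parameters of a run so an experiment can be reproduced later — can be illustrated with the standard `pickle` module (a generic sketch, not the madmom implementation; the parameter names in the usage example below are illustrative):

```python
import pickle

def save_params(params, path):
    """Serialise a parameter dictionary so the exact settings of a run
    can be restored later."""
    with open(path, 'wb') as f:
        pickle.dump(params, f)

def load_params(path):
    """Restore a previously saved parameter dictionary."""
    with open(path, 'rb') as f:
        return pickle.load(f)
```

Usage (hypothetical parameter names): `save_params({'fps': 100, 'min_bpm': 55.0}, 'run.pkl')` followed later by `load_params('run.pkl')` to rerun with identical settings.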

Please note that the program itself as well as the modes have help messages:

DBNBeatTracker -h

DBNBeatTracker single -h

DBNBeatTracker batch -h

DBNBeatTracker online -h

DBNBeatTracker pickle -h

will give different help messages.

Additional resources

Mailing list

The mailing list should be used to get in touch with the developers and other users.


The wiki and a list of frequently asked questions can be found in the project's GitHub repository.


If you use madmom in your work, please consider citing it:

@inproceedings{madmom,
   Title = {{madmom: a new Python Audio and Music Signal Processing Library}},
   Author = {B{\"o}ck, Sebastian and Korzeniowski, Filip and Schl{\"u}ter, Jan and Krebs, Florian and Widmer, Gerhard},
   Booktitle = {Proceedings of the 24th ACM International Conference on Multimedia},
   Month = {10},
   Year = {2016},
   Pages = {1174--1178},
   Address = {Amsterdam, The Netherlands},
   Doi = {10.1145/2964284.2973795}
}


Supported by the European Commission through the GiantSteps project (FP7 grant agreement no. 610591) and the Phenicx project (FP7 grant agreement no. 601166) as well as the Austrian Science Fund (FWF) project Z159.

Release Notes

Version 0.16.1 (release date: 2017-11-14)

This is a maintenance release.

  • Include .pyx files in source distribution

Version 0.16 (release date: 2017-11-13)

New features:

  • TempoDetector can operate on live audio signals (#292)

  • Added chord evaluation (#309)

  • Bar tracking functionality (#316)

  • Added quantize_notes function (#327)

  • Added global key evaluation (#336)

  • Added key recognition feature and program (#345, #381)

Bug fixes:

  • Fix TransitionModel number of states when last state is unreachable (#287)

  • Fix double beat detections in BeatTrackingProcessor (#298)

  • Fix ffmpeg unicode filename handling (#305)

  • Fix STFT zero padding (#319)

  • Fix memory leak when accessing signal frames (#322)

  • Quantization of events does not alter them (#327)

API relevant changes:

  • BufferProcessor uses data instead of buffer for data storage (#292)

  • DBNBeatTrackingProcessor expects 1D inputs (#299)

  • Moved downbeat and pattern tracking to features.downbeats (#316)

  • Write/load functions moved to io module (#346)

  • Write functions do not return any data (#346)

  • Evaluation classes expect annotations/detections, cannot handle files (#346)

  • New MIDI module (io.midi) replacing (utils.midi) based on mido (#46)

Other changes:

  • Viterbi decoding of HMM raises a warning if no valid path is found (#279)

  • Add option to include Nyquist frequency in STFT (#280)

  • Use pyfftw to compute FFT (#363)

  • Python 3.7 support (#374)

  • Use pytest instead of nose to run tests (#385)

  • Removed obsolete code (#385)

Version 0.15.1 (release date: 2017-07-07)

This is a maintenance release.

  • NumPy boolean subtract fix (#296)

Version 0.15 (release date: 2017-04-25)

New features:

  • Streaming mode allows framewise processing of live audio input (#185)

  • Exponential linear unit (ELU) activation function (#232)

  • DBNBeatTracker can operate on live audio signals (#238)

  • OnsetDetectorLL can operate on live audio signals (#256)

Bug fixes:

  • Fix downbeat evaluation failure with a single annotation / detection (#216)

  • Fix tempo handling of multi-track MIDI files (#219)

  • Fix error loading unicode filenames (#223)

  • Fix ffmpeg unicode filename handling (#236)

  • Fix smoothing for peak_picking (#247)

  • Fix combining onsets/notes (#255)

API relevant changes:

  • NeuralNetwork expect 2D inputs; activation can be computed stepwise (#244)

  • Reorder GRUCell parameters, to be consistent with all other layers (#243)

  • Rename GRULayer parameters, to be consistent with all other layers (#243)

Other changes:

  • SPL and RMS can be computed on Signal and FramedSignal (#208)

  • num_threads is passed to ParallelProcessor in single mode (#217)

  • Use install_requires in setup.py to specify dependencies (#226)

  • Use new Cython build system to build extensions (#227)

  • Allow initialisation of previous/hidden states in RNNs (#243)

  • Forward path of HMM can be computed stepwise (#244)

Version 0.14.1 (release date: 2016-08-01)

This is a maintenance release.

  • RNNDownBeatProcessor returns only beat and downbeat activations (#197)

  • Update programs to reflect MIREX 2016 submissions (#198)

Version 0.14 (release date: 2016-07-28)

New features:

  • Downbeat tracking based on Recurrent Neural Network (RNN) and Dynamic Bayesian Network (DBN) (#130)

  • Convolutional Neural Networks (CNN) and CNN onset detection (#133)

  • Linear-Chain Conditional Random Field (CRF) implementation (#144)

  • Deep Neural Network (DNN) based chroma vector extraction (#148)

  • CRF chord recognition using DNN chroma vectors (#148)

  • CNN chord recognition using CRF decoding (#152)

  • Initial Windows support (Python 2.7 only, no pip packages yet) (#157)

  • Gated Recurrent Unit (GRU) network layer (#167)

Bug fixes:

  • Fix downbeat output bug (#128)

  • MIDI file creation bug (#166)

API relevant changes:

  • Refactored the ml.rnn to ml.nn and converted the models to pickles (#110)

  • Reordered the dimensions of comb_filters to time, freq, tau (#135)

  • write_notes uses delimiter instead of sep to separate columns (#155)

  • LSTMLayer takes Gate as arguments, all layers are callable (#161)

  • Replaced online parameter of FramedSignalProcessor by origin (#169)

Other changes:

  • Added classes for onset/note/beat detection with RNNs to features.* (#118)

  • Add examples to docstrings of classes (#119)

  • Converted madmom.modules into a Python package (#125)

  • match_files can handle inexact matches (#137)

  • Updated beat tracking models to MIREX 2015 ones (#146)

  • Tempo and time signature can be set for created MIDI files (#166)

Version 0.13.2 (release date: 2016-06-09)

This is a bugfix release.

  • Fix custom filterbank in FilteredSpectrogram (#142)

Version 0.13.1 (release date: 2016-03-14)

This is a bugfix release.

  • Fix beat evaluation argument parsing (#116)

Version 0.13 (release date: 2016-03-07)

New features:

Bug fixes:

  • Fix nasty unsigned indexing bug (#88)

  • MIDI note timing could get corrupted if note_ticks_to_beats() was called multiple times (#90)

API relevant changes:

  • Renamed DownBeatTracker and all relevant classes to PatternTracker (#25)

  • Complete refactoring of the features.beats_hmm module (#52)

  • Unified negative index behaviour of FramedSignal (#72)

  • Removed pickling of data classes since it was not tested thoroughly (#81)

  • Reworked stacking of spectrogram differences (#82)

  • Renamed norm_bands argument of MultiBandSpectrogram to norm_filters (#83)

Other changes:

  • Added alignment evaluation (#12)

  • Added continuous integration testing (#16)

  • Added -o option to both single/batch processing mode to not overwrite files accidentally in single mode (#18)

  • Removed block_size parameter from FilteredSpectrogram (#22)

  • Sample rate is always integer (#23)

  • Converted all docstrings to the numpydoc format (#48)

  • Batch processing continues if non-audio files are given (#53)

  • Added code quality checks (#61)

  • Added coverage measuring (#74)

  • Added --down option to evaluate only downbeats (#76)

  • Removed option to normalise the observations (#95)

  • Moved filterbank related argument parser to FilterbankProcessor (#96)

Version 0.12.1 (release date: 2016-01-22)

Added Python 3 compatibility to setup.py (needed for the tutorials to work)

Version 0.12 (release date: 2015-10-16)

Initial public release of madmom
