
museval


A Python package to evaluate source separation results using the MUSDB18 dataset. This package was developed as part of the MUS task of the Signal Separation Evaluation Campaign (SISEC).

BSSEval v4

The BSSEval metrics, as implemented in the MATLAB toolboxes and re-implemented in mir_eval, are widely used in the audio separation literature. One particularity of BSSEval is that the metrics are computed after optimally matching the estimates to the true sources through linear distortion filters. This makes the criteria robust to some linear mismatches. Apart from the optional evaluation over all possible permutations of the sources, this matching accounts for most of the computational cost of BSSEval, especially when the metrics are computed on a framewise basis, since the matching is then repeated for every evaluation window.

For this package, we enabled the option of using time-invariant distortion filters instead of filters that vary over time, as was done in previous versions of BSSEval. First, this option significantly reduces the computational cost of evaluation, because the matching needs to be done only once for the whole signal. Second, it introduces more dynamics into the evaluation, because time-varying matching filters turn out to over-estimate performance. Third, it makes the matching more robust, because the true sources are rarely silent over the whole recording, while they often are within short windows.
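
Besides the track-based helpers described below, the metrics can also be computed directly on numpy arrays via museval.evaluate. The following is a minimal sketch assuming the function accepts (nsrc, nsampl, nchan)-shaped arrays and that mode='v4' selects the time-invariant distortion filters described above; the toy signals are purely illustrative.

import numpy as np
import museval

rate = 44100  # sample rate in Hz

# toy data: 2 reference sources, 10 seconds of stereo audio each
references = np.random.randn(2, 10 * rate, 2)
# pretend estimates: references corrupted with a little noise
estimates = references + 0.1 * np.random.randn(2, 10 * rate, 2)

# framewise evaluation with 1-second windows; mode='v4' uses
# time-invariant distortion filters (assumption based on the text above)
SDR, ISR, SIR, SAR = museval.evaluate(
    references, estimates, win=rate, hop=rate, mode='v4'
)
print(SDR.shape)  # one score per source and evaluation window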

Installation

Package installation

You can install the museval package using pip:

pip install museval

Usage

The purpose of this package is to evaluate source separation results and write out validated json files. We want to encourage users to use this evaluation output format as the standardized way to share source separation results. museval is designed to work in conjunction with the musdb tools and the MUSDB18 dataset (however, museval can also be used without musdb).

Separate MUSDB18 tracks and Evaluate on-the-fly

  • If you want to perform evaluation while processing your source separation results, you can make use of musdb track objects. Here is an example of such a function that separates the mixture into a vocals and an accompaniment track:
import musdb
import museval

def estimate_and_evaluate(track):
    # use the mixture itself as a trivial estimate for both targets
    estimates = {
        'vocals': track.audio,
        'accompaniment': track.audio
    }

    # Evaluate using museval
    scores = museval.eval_mus_track(
        track, estimates, output_dir="path/to/json"
    )

    # print nicely formatted and aggregated scores
    print(scores)

mus = musdb.DB()
for track in mus:
    estimate_and_evaluate(track)

Make sure output_dir is set: museval will recreate the musdb file structure in that folder and write the evaluation results there.
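
One json file per track is written, mirroring the musdb subset layout. A short sketch of reading such a file back for inspection (the track name below is hypothetical):

import json

# e.g. path/to/json/test/<track name>.json, following the musdb layout
with open("path/to/json/test/Some Artist - Some Track.json") as f:
    scores = json.load(f)

print(scores.keys())  # inspect the top-level structure of the file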

Evaluate MUSDB18 tracks later

If you have already computed your estimates, we provide you with an easy-to-use function to process evaluation results afterwards.

Simply use museval.eval_mus_dir to evaluate your estimates_dir and write the results into the output_dir. For convenience, the eval_mus_dir function accepts all parameters of musdb.run().

import musdb
import museval

# initiate musdb
mus = musdb.DB()

# evaluate an existing estimate folder with wav files
museval.eval_mus_dir(
    dataset=mus,  # instance of musdb
    estimates_dir=...,  # path to estimates folder
    output_dir=...,  # folder to write the eval json files to
    ext='wav'
)

Aggregate and Analyze Scores

Scores for each track can also be aggregated in a pandas DataFrame for easier analysis or for creating boxplots. To aggregate multiple tracks in a DataFrame, create a museval.EvalStore() object and add the track scores successively.

results = museval.EvalStore(frames_agg='median', tracks_agg='median')
for track in tracks:
    # ...
    results.add_track(museval.eval_mus_track(track, estimates))

You may also add scores that have been computed beforehand through museval.eval_mus_dir:

results = museval.EvalStore(frames_agg='median', tracks_agg='median')
results.add_eval_dir(
    path=...  # path to the output_dir used for eval_mus_dir
)

When all tracks have been added, the aggregated scores can be shown using print(results), and the results may be saved as a pandas DataFrame using results.save('my_method.pandas').
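
The saved file can later be reloaded for custom analysis. A minimal sketch, assuming the .pandas file is a pickled pandas DataFrame with (among others) metric, target and score columns:

import pandas as pd

# reload the DataFrame written by results.save()
df = pd.read_pickle('my_method.pandas')

# e.g. median SDR per target (column names are assumptions)
print(df[df.metric == 'SDR'].groupby('target').score.median())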

To compare multiple methods, create a museval.MethodStore() object and add the results:

methods = museval.MethodStore()
methods.add_evalstore(results, name="XZY")

To compare against participants from SiSEC MUS 2018, we provide a convenient method that loads the existing scores on demand: methods.add_sisec18(). For the creation of plots and statistical significance tests, we refer to our list of examples.
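
As a starting point for such plots, here is a hedged sketch assuming the method store exposes its scores as a pandas DataFrame via a .df attribute with method, target, metric and score columns:

import seaborn as sns
import museval

methods = museval.MethodStore()
methods.add_sisec18()                       # published SiSEC 2018 scores
methods.add_evalstore(results, name="XZY")  # your own results from above

# boxplot of SDR per target, one box per method (.df is an assumption)
df = methods.df
sns.boxplot(x="target", y="score", hue="method", data=df[df.metric == "SDR"])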

Command line tool

We provide a command line wrapper of eval_mus_dir through the museval command line tool. The following example is equivalent to the code example above:

museval --musdb path/to/musdb -o path/to/output_dir path/to/estimate_dir

:bulb: use the --is-wav flag to evaluate against the decoded wav version of the musdb dataset.
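
For example, combining the options from above (paths are placeholders):

museval --musdb path/to/musdb --is-wav -o path/to/output_dir path/to/estimate_dir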

Using Docker for Evaluation

If you don't want to set up a Python environment to run the evaluation, we recommend using Docker. Assuming you have already computed your estimates and installed Docker on your machine, you just need to run the following two steps in your terminal:

1. Pull Docker Container

Pull our precompiled sigsep-mus-eval image from dockerhub:

docker pull faroit/sigsep-mus-eval

2. Run evaluation

To run the evaluation inside the Docker container, three absolute paths are required:

  • estimatesdir stands for the absolute path to the estimates directory (for instance /home/faroit/dev/mymethod/musdboutput).
  • musdbdir stands for the absolute path to the root folder of musdb (for instance /home/faroit/dev/data/musdb18).
  • outputdir stands for the absolute path to the output directory (for instance /home/faroit/dev/mymethod/scores).

We mount these directories into the container using -v flags and start the Docker instance:

docker run --rm -v estimatesdir:/est -v musdbdir:/mus -v outputdir:/out faroit/sigsep-mus-eval --musdb /mus -o /out /est

In the line above, replace estimatesdir, musdbdir and outputdir with the absolute paths for your setup. Please note that Docker requires absolute paths, so you have to rely on your command line environment to convert relative paths to absolute ones (e.g. by using $HOME/ on Unix).

:warning: museval requires a significant amount of memory for the evaluation. Evaluating all five targets of MUSDB18 may require more than 4GB of RAM. It is recommended to adjust your Docker memory preferences accordingly, because the container might simply quit if it runs out of memory.
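
If the container gets killed nonetheless, you can also raise the memory limit for a single run with Docker's --memory flag (a sketch; pick a value that suits your machine):

docker run --rm -m 8g -v estimatesdir:/est -v musdbdir:/mus -v outputdir:/out faroit/sigsep-mus-eval --musdb /mus -o /out /est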

How to contribute

museval is a community-focused project; we therefore encourage the community to submit bug-fixes and requests for technical support through GitHub issues. For more details on how to contribute, please follow our CONTRIBUTING.md.

References

A. If you use museval for source separation evaluation, comparing a method to other methods from SiSEC 2018, please cite:

@InProceedings{SiSEC18,
  author="St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title="The 2018 Signal Separation Evaluation Campaign",
  booktitle="Latent Variable Analysis and Signal Separation:
  14th International Conference, LVA/ICA 2018, Surrey, UK",
  year="2018",
  pages="293--305"
}

B. If you use the software for any other purpose, please cite the software release via its DOI.
