
Unofficial NNabla implementation of CrossNet-Open-Unmix (X-UMX), originally created by Sony Research AI.

Project description

CrossNet-Open-Unmix (X-UMX) for Spleeter Web

This is a modified version of the official X-UMX repo made to be compatible with Spleeter Web!

This repository contains the NNabla implementation of CrossNet-Open-Unmix (X-UMX), an improved version of Open-Unmix (UMX) for music source separation. X-UMX achieves an improved performance without additional learnable parameters compared to the original UMX model. Details of X-UMX can be found in our paper.

Quick Music Source Separation Demo by X-UMX

Using the Colab link below, you can try X-UMX on your own music file and listen to the separated audio sources. Please give it a try!

Open In Colab

Related Projects: x-umx | open-unmix-nnabla | open-unmix-pytorch | musdb | museval | norbert

The Model

As shown in Figure (b), X-UMX has almost the same architecture as the original UMX; it differs only by two additional averaging operations that link the instrument models together. Since these operations are not DNN layers, X-UMX has the same number of learnable parameters as the original UMX, and its computational complexity is almost the same. Besides the model, there are two more differences compared to the original UMX: a Multi Domain Loss (MDL) and a Combination Loss (CL) are used during training, which differ from the original UMX loss function. These three contributions, i.e., (i) the crossing architecture, (ii) MDL, and (iii) CL, make the original UMX more effective without additional learnable parameters.
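
As an illustration of the crossing operation, here is a minimal numpy sketch (not the repository's NNabla code; the branch count and feature shapes are assumptions based on the description above):

import numpy as np

def cross_average(branch_features):
    # branch_features: one intermediate feature map per instrument branch,
    # all with identical shapes. The crossing simply averages them and feeds
    # the shared average back to every branch; no learnable parameters are
    # involved, matching the claim above.
    avg = np.mean(branch_features, axis=0)
    return [avg for _ in branch_features]

# hypothetical usage with four branches (bass, drums, vocals, other)
features = [np.random.randn(16, 512) for _ in range(4)]
bridged = cross_average(features)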

Getting started

Installation

For installation, we recommend using the Anaconda Python distribution. To create a conda environment for open-unmix, simply run:

conda env create -f environment-X.yml

where X is either cpu or gpu, depending on your system.
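
For example, on a GPU machine (the environment name below is an assumption; check the name: field inside the yml file):

conda env create -f environment-gpu.yml
conda activate xumx-gpu   # assumed name; use the one defined in environment-gpu.yml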

Source separation with pretrained model

How to separate using pre-trained X-UMX

A pre-trained X-UMX model, which reproduces the scores given in our paper, can be downloaded here. The model was trained on the MUSDB18 dataset.

To use it, run the following command:

python -m xumx.test  --inputs [Input mixture (any audio format supported by FFMPEG)] --model {path to downloaded x-umx.h5 weights file} --context cpu --chunk-dur 10 --outdir ./results/ 

Please note that X-UMX integrates the different instrument networks of the original UMX via the crossing operation, and thus requires more memory; it may be difficult to run the model on smaller GPUs. For this reason, although GPU inference is the default, the example above uses the option --context cpu. Because the memory requirement is high, we also suggest setting --chunk-dur to a value appropriate for your machine: the input audio is broken into chunks of this duration, the sources are separated per chunk, and the results are stitched back together. If inference crashes, reduce the chunk duration and try again.
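
For example, if the 10-second chunks above still exhaust memory on your machine, a shorter duration can be tried (file names below are hypothetical):

python -m xumx.test --inputs mixture.wav --model ./x-umx.h5 --context cpu --chunk-dur 5 --outdir ./results/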

Evaluation using museval

To perform evaluation in comparison to other SiSEC systems, you would need to install the museval package using

pip install museval

and then run the evaluation using

python -m xumx.eval --model [path to downloaded x-umx.h5 model] --root [Path of MUSDB18] --outdir [Path to save musdb estimates] --evaldir [Path to save museval results]
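
For example, with hypothetical paths:

python -m xumx.eval --model ./x-umx.h5 --root ./MUSDB18 --outdir ./estimates --evaldir ./museval-results

The per-track json files written to --evaldir can then be aggregated with museval's Python API (a minimal sketch; the directory name matches the command above):

import museval

# collect and aggregate the per-track scores saved by the eval step
store = museval.EvalStore(frames_agg='median', tracks_agg='median')
store.add_eval_dir('./museval-results')
print(store)  # median SDR/SIR/ISR/SAR per target, over frames and tracks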

Training X-UMX

X-UMX can be trained using the default parameters of the train.py script.

MUSDB18 is one of the largest freely available datasets of professionally produced music tracks (~10h duration) of different styles. It comes with isolated drums, bass, vocals, and other stems. MUSDB18 contains two subsets: "train", composed of 100 songs, and "test", composed of 50 songs.

To train x-umx directly, first download the dataset and place it, unzipped, in a directory of your choice (called root).

Argument | Description | Default
--root <str> | Path to the root of the dataset on disk | None

Also note that if --root is not specified, we automatically download a 7-second preview version of the MUSDB18 dataset. While this is convenient for testing purposes, we do not recommend actually training your model on it.
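
For example, a quick smoke test that relies on this automatic preview download (output path hypothetical):

python -m xumx.train --output ./smoke-test   # no --root: the 7-second MUSDB18 preview is fetched automatically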

Training (single-GPU or distributed) can be started using the commands below.

Single GPU training

python -m xumx.train --root [Path of MUSDB18] --output [Path to save weights]
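
To resume training from a previously saved checkpoint, pass --model (documented in the parameter table below; the paths here are hypothetical):

python -m xumx.train --root ./MUSDB18 --output ./x-umx-weights --model ./x-umx-weights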

Distributed Training

For distributed training, install an NNabla package compatible with multi-GPU execution. Then use the commands below to start distributed training.

export CUDA_VISIBLE_DEVICES=0,1,2,3   # device IDs that you want to use
mpirun -n {no. of devices} python -m xumx.train --root [Path of MUSDB18] --output [Path to save weights]
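
For example, on four GPUs (paths hypothetical):

export CUDA_VISIBLE_DEVICES=0,1,2,3
mpirun -n 4 python -m xumx.train --root ./MUSDB18 --output ./x-umx-weights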

Please note that the above sample training scripts work on the high-quality STEM files as well as the low-quality MP4 files. If you would like faster data loading, see the details here on generating decoded WAV files; in that case, pass the --is-wav flag for training.
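
For example, with a decoded-WAV copy of the dataset (path hypothetical):

python -m xumx.train --root ./MUSDB18-wav --is-wav --output ./x-umx-weights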

Training on MUSDB18 with x-umx comes with several design decisions that we made as part of our defaults to improve efficiency and performance:

  • chunking: we do not feed full audio tracks into open-unmix but instead chunk the audio into 6s excerpts (--seq-dur 6.0).
  • balanced track sampling: to avoid a bias toward longer audio tracks, we randomly yield one track from MUSDB18 and then select a random chunk from it. In one epoch we select (on average) 64 samples from each track.
  • source augmentation: we apply random gains between 0.25 and 1.25 to all sources before mixing. Furthermore, we randomly swap the channels of the input mixture (see the sketch after this list).
  • random track mixing: for a given target we select a random track with replacement. To yield a mixture we draw the interfering sources from different tracks (again with replacement) to increase generalization of the model.
  • fixed validation split: we provide a fixed validation split of 14 tracks. We evaluate on these tracks in full length instead of using chunking to have evaluation as close as possible to the actual test data.
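
A minimal numpy sketch of the two default source augmentations, gain and channelswap (illustrative only, not the repository's implementation):

import numpy as np

rng = np.random.default_rng()

def gain(source, low=0.25, high=1.25):
    # scale a single source by a random gain drawn uniformly from [low, high)
    return source * rng.uniform(low, high)

def channelswap(source):
    # source shape: (channels, samples); swap stereo channels with prob. 0.5
    if source.shape[0] == 2 and rng.random() < 0.5:
        return source[::-1].copy()
    return source

# hypothetical usage: augment each source, then sum them into the mixture
sources = [rng.standard_normal((2, 44100)) for _ in range(4)]
mixture = np.sum([channelswap(gain(s)) for s in sources], axis=0)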

Some of the parameters for the MUSDB sampling can be controlled using the following arguments:

Argument | Description | Default
--is-wav | Loads decoded WAVs instead of STEMS for faster data loading; see more details here | False
--samples-per-track <int> | Sets the number of samples that are randomly drawn from each track | 64
--source-augmentations <list[str]> | Applies augmentations to each audio source before mixing | gain channelswap
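
For example, combining these sampling options (paths hypothetical):

python -m xumx.train --root ./MUSDB18-wav --is-wav --samples-per-track 32 --source-augmentations gain channelswap --output ./x-umx-weights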

Training and Model Parameters

An extensive list of additional training parameters allows researchers to quickly try out different parameterizations, such as a different FFT size. In the table below, we list the additional training parameters and their default values:

Argument | Description | Default
--output <str> | Path where to save the trained output model as well as checkpoints | ./x-umx
--model <str> | Path to a checkpoint of the target model to resume training | not set
--epochs <int> | Number of epochs to train | 1000
--batch-size <int> | Batch size; influences memory usage and the performance of the LSTM layer | 16
--patience <int> | Early stopping patience | 1000
--seq-dur <float> | Sequence duration in seconds of chunks taken from the dataset; a value of <=0.0 results in full/variable length | 6.0
--unidirectional | Changes the bidirectional LSTM to unidirectional (for real-time applications) | not set
--hidden-size <int> | Hidden size parameter of dense bottleneck layers | 512
--nfft <int> | STFT FFT window length in samples | 4096
--nhop <int> | STFT hop length in samples | 1024
--lr <float> | Learning rate | 0.001
--lr-decay-patience <int> | Learning rate decay patience for the plateau scheduler | 80
--lr-decay-gamma <float> | Gamma of the learning rate plateau scheduler | 0.3
--weight-decay <float> | Weight decay for regularization | 0.00001
--bandwidth <int> | Maximum bandwidth in Hertz processed by the LSTM; input and output are always full bandwidth | 16000
--nb-channels <int> | Number of channels for the model (1 for mono, with a spectral downmix applied; 2 for stereo) | 2
--seed <int> | Initial seed for random initialization | 42
--valid_dur <float> | To prevent GPU memory overflow, validation is computed and averaged per valid_dur seconds | 100.0
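
For example, to try a smaller FFT size (values illustrative, paths hypothetical):

python -m xumx.train --root ./MUSDB18 --nfft 2048 --nhop 512 --output ./x-umx-nfft2048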

Authors

Ryosuke Sawata(*), Stefan Uhlich(**), Shusuke Takahashi(*) and Yuki Mitsufuji(*)

(*) Sony Corporation, Tokyo, Japan
(**)Sony Europe B.V., Stuttgart, Germany

References

If you use CrossNet-Open-Unmix for your research – Cite CrossNet-Open-Unmix
@article{sawata20,  
  title={All for One and One for All: Improving Music Separation by Bridging Networks},
  author={Ryosuke Sawata and Stefan Uhlich and Shusuke Takahashi and Yuki Mitsufuji},
  year={2020},
  eprint={2010.04228},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}

If you use open-unmix for your research – Cite Open-Unmix
@article{stoter19,  
  author={F.-R. St\"oter and S. Uhlich and A. Liutkus and Y. Mitsufuji},
  title={Open-Unmix - A Reference Implementation for Music Source Separation},  
  journal={Journal of Open Source Software},  
  year=2019,
  doi = {10.21105/joss.01667},
  url = {https://doi.org/10.21105/joss.01667}
}

If you use the MUSDB dataset for your research - Cite the MUSDB18 Dataset

@misc{MUSDB18,
  author       = {Rafii, Zafar and
                  Liutkus, Antoine and
                  Fabian-Robert St{\"o}ter and
                  Mimilakis, Stylianos Ioannis and
                  Bittner, Rachel},
  title        = {The {MUSDB18} corpus for music separation},
  month        = dec,
  year         = 2017,
  doi          = {10.5281/zenodo.1117372},
  url          = {https://doi.org/10.5281/zenodo.1117372}
}

If you compare your results with SiSEC 2018 participants - Cite the SiSEC 2018 LVA/ICA Paper

@inproceedings{SiSEC18,
  author="St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title="The 2018 Signal Separation Evaluation Campaign",
  booktitle="Latent Variable Analysis and Signal Separation:
  14th International Conference, LVA/ICA 2018, Surrey, UK",
  year="2018",
  pages="293--305"
}

⚠️ Please note that the official acronym for CrossNet-Open-Unmix is X-UMX.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

xumx-unofficial-0.2.0.tar.gz (32.3 kB)

Uploaded Source

Built Distribution

xumx_unofficial-0.2.0-py3-none-any.whl (34.4 kB)

Uploaded Python 3

File details

Details for the file xumx-unofficial-0.2.0.tar.gz.

File metadata

  • Download URL: xumx-unofficial-0.2.0.tar.gz
  • Size: 32.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.27.1 setuptools/59.6.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.9

File hashes

Hashes for xumx-unofficial-0.2.0.tar.gz
Algorithm | Hash digest
SHA256 | 54ae887c05a6c041fbe121a62654b8e0319f7c27c5ae2ca5bdde8991c1077b0f
MD5 | 2af81cccd7e4162e4bc68b51fb3bfd8c
BLAKE2b-256 | fda6a019627378afd187648660f3d1ecd180cd462707f48dbb26ac7710e2c307

See more details on using hashes here.

File details

Details for the file xumx_unofficial-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: xumx_unofficial-0.2.0-py3-none-any.whl
  • Size: 34.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.27.1 setuptools/59.6.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.9

File hashes

Hashes for xumx_unofficial-0.2.0-py3-none-any.whl
Algorithm | Hash digest
SHA256 | 0283854a7e27d5d253e3d309e6a519823a62e82b19cd6e42d9222dc18566c248
MD5 | e6cfd1c06975aed925480e4a2399386b
BLAKE2b-256 | 4c0c0bef9283491618f4c926947f9f0ca18452b4d3f0984804983982f178e3e9

See more details on using hashes here.
