
SpliFFT


Lightweight utilities for music source separation.

This library is a ground-up rewrite of zfturbo's MSST repo, with a strong focus on robustness, simplicity and extensibility. While MSST is a fantastic collection of models and training scripts, this rewrite adopts a different architecture to address common pain points in research code.

Key principles:

  • Configuration as code: pydantic models are used instead of untyped dictionaries or ConfigDict. This provides static type safety, runtime data validation, IDE autocompletion, and a single, clear source of truth for all parameters.
  • Data-oriented and functional core: complex class hierarchies and inheritance are avoided. The codebase is built on plain data structures (like dataclasses) and pure, stateless functions.
  • Semantic typing as documentation: we leverage Python's type system to convey intent. Types like RawAudioTensor vs. NormalizedAudioTensor make function signatures self-documenting, reducing the need for verbose comments and ensuring correctness.
  • Extensibility without modification: new models can be integrated from external packages without altering the core library. The dynamic model loading system allows easy plug-and-play, adhering to the open/closed principle.

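To illustrate the first and third principles, here is a minimal sketch of a typed config plus semantic newtypes. The class, field and function names here are hypothetical, not splifft's actual schema; in the real library the newtypes wrap torch.Tensor, while a plain list stands in here to keep the sketch dependency-free.

```python
from typing import NewType

from pydantic import BaseModel, Field, ValidationError

# semantic newtypes make signatures self-documenting
# (hypothetical stand-ins; splifft's wrap torch.Tensor)
RawAudioTensor = NewType("RawAudioTensor", list)
NormalizedAudioTensor = NewType("NormalizedAudioTensor", list)

def normalize(audio: RawAudioTensor) -> NormalizedAudioTensor:
    # peak-normalize; the return type records that this step happened
    peak = max(abs(x) for x in audio) or 1.0
    return NormalizedAudioTensor([x / peak for x in audio])

# configuration as code: typed fields with validation constraints
# (hypothetical schema, not splifft's actual config)
class ChunkingConfig(BaseModel):
    chunk_size: int = Field(gt=0)           # samples per chunk
    overlap: float = Field(ge=0.0, lt=1.0)  # fraction of chunk overlapped

# a typo'd or out-of-range JSON value fails fast at load time,
# instead of surfacing as a shape error mid-inference
cfg = ChunkingConfig.model_validate_json('{"chunk_size": 352800, "overlap": 0.25}')

try:
    ChunkingConfig(chunk_size=-1, overlap=0.25)
except ValidationError:
    pass  # rejected before any audio is touched
```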
⚠️ This is pre-alpha software, expect significant breaking changes.

Features and Roadmap

Short term (high priority)

  • a robust, typed JSON configuration system powered by pydantic
  • inferencing:
    • normalization and denormalization
    • chunk generation: vectorized with unfold
    • chunk stitching: vectorized overlap-add with fold
    • flexible ruleset for stem deriving: add/subtract model outputs or any intermediate output (e.g., creating an instrumental track by subtracting vocals from the mixture).
  • web-based docs: generated with MkDocs with excellent cross-references.
  • simple CLI for inferencing on a directory of audio files
  • BS-Roformer: ensure bit-for-bit equivalence in pytorch and strive for max perf.
    • initial fp16 support
    • support coremltools and torch.compile
      • handroll complex multiplication implementation
      • handroll stft in forward pass
  • port additional SOTA models from MSST (e.g. Mel Roformer, SCNet)
  • model registry with simple file-based cache
  • evals: SDR, bleedless, fullness, etc.
  • proper benchmarking (MFU, memory...)
  • datasets: MUSDB18-HQ, moises
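
The vectorized chunking and stitching items above can be sketched with `unfold` and `fold` directly. This is an illustrative round-trip under assumed parameters (chunk size, hop, and shapes are mine, not splifft's API): `unfold` produces overlapping chunks as a view, and `fold` performs the overlap-add in a single vectorized call, with a division by the per-sample overlap count to undo double-counting.

```python
import torch
import torch.nn.functional as F

chunk_size, hop = 8, 4
audio = torch.arange(16, dtype=torch.float32)  # mono mixture, 16 samples

# chunk generation: a (num_chunks, chunk_size) view, no copy
chunks = audio.unfold(0, chunk_size, hop)  # shape (3, 8)

# ... each chunk would be passed through the model here ...

def overlap_add(blocks: torch.Tensor, length: int) -> torch.Tensor:
    # treat the signal as a 1xT "image" so F.fold can overlap-add it
    cols = blocks.transpose(0, 1).unsqueeze(0)  # (1, chunk_size, num_chunks)
    summed = F.fold(cols, (1, length), kernel_size=(1, chunk_size), stride=(1, hop))
    # divide by how many chunks covered each sample
    counts = F.fold(torch.ones_like(cols), (1, length),
                    kernel_size=(1, chunk_size), stride=(1, hop))
    return (summed / counts).flatten()

reconstructed = overlap_add(chunks, len(audio))
assert torch.allclose(reconstructed, audio)  # identity round-trip
```

A real pipeline would apply a window to each chunk before the overlap-add; the count-normalization trick above is the rectangular-window special case.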

Medium term

  • simple web-based GUI with FastAPI and SolidJS.
  • Jupyter notebook

Long term (low priority)

  • data augmentation
  • implement a complete, configurable training loop
  • max kernels

Contributing: PRs are very welcome!

Installation & Usage

Documentation on the config (amongst other details) can be found here

CLI

There are three steps. You do not need to have Python installed.

  1. Install uv if you haven't already. It is an awesome Python package and project manager with pip compatibility.

    # Linux / MacOS
    wget -qO- https://astral.sh/uv/install.sh | sh
    # Windows
    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
    
  2. Open a new terminal and install the latest stable PyPI release as a tool. This will install the Python interpreter and all necessary packages, and add the splifft executable to your PATH:

    uv tool install "splifft[config,inference,cli,web]"
    
    Explanation of feature flags

    The core is kept as minimal as possible. Pick which ones you need:

    • The config extra is used to parse the model configuration from JSON and discover the registry's default cache dir.
    • The inference extra is used to decode audio formats.
    • The cli extra provides you with the splifft command line tool.
    • The web extra is used to download models.
    I want the latest bleeding-edge version

    This directly pulls from the main branch, which may be unstable:

    uv tool install "git+https://github.com/undef13/splifft.git[config,inference,cli,web]"
    
  3. We recommend using our built-in registry-based workflow to manage model config and weights:

    # list all available models, including those not yet available locally
    splifft ls -a
    
    # download model files and config to your user cache directory
    # ~/.cache/splifft on linux
    splifft pull bs_roformer-fruit-sw
    
    # view information about the configuration
    # modify the configuration, such as batch size according to your hardware
    splifft info bs_roformer-fruit-sw
    
    # run inference
    splifft run data/audio/input/3BFTio5296w.flac --model bs_roformer-fruit-sw
    

    Alternatively, you can manage files manually. Create a new directory and place the model checkpoint and configuration inside it. Assuming your current directory has this structure (it doesn't have to match exactly):

    Minimal reproduction: with example audio from YouTube
    uv tool install yt-dlp
    yt-dlp -f bestaudio -o data/audio/input/3BFTio5296w.flac 3BFTio5296w
    wget -P data/models/ https://huggingface.co/undef13/splifft/resolve/main/roformer-fp16.pt?download=true
    wget -P data/config/ https://raw.githubusercontent.com/undef13/splifft/refs/heads/main/data/config/bs_roformer.json
    
    .
    └── data
        ├── audio
        │   ├── input
        │   │   └── 3BFTio5296w.flac
        │   └── output
        ├── config
        │   └── bs_roformer.json
        └── models
            └── roformer-fp16.pt
    

    Run:

    splifft run data/audio/input/3BFTio5296w.flac --config data/config/bs_roformer.json --checkpoint data/models/roformer-fp16.pt
    
    Console output
    [00:00:41] INFO     using device=device(type='cuda')                                                 __main__.py:111
               INFO     loading configuration from                                                       __main__.py:113
                        config_path=PosixPath('data/config/bs_roformer.json')                                           
               INFO     loading model metadata `BSRoformer` from module `splifft.models.bs_roformer`     __main__.py:126
    [00:00:42] INFO     loading weights from checkpoint_path=PosixPath('data/models/roformer-fp16.pt')   __main__.py:127
               INFO     processing audio file:                                                           __main__.py:135
                        mixture_path=PosixPath('data/audio/input/3BFTio5296w.flac')                                     
    ⠙ processing chunks... ━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  25% 0:00:10 (bs=4 • cuda • float16)
    [00:00:56] INFO     wrote stem `bass` to data/audio/output/3BFTio5296w/bass.flac                     __main__.py:158
               INFO     wrote stem `drums` to data/audio/output/3BFTio5296w/drums.flac                   __main__.py:158
               INFO     wrote stem `other` to data/audio/output/3BFTio5296w/other.flac                   __main__.py:158
    [00:00:57] INFO     wrote stem `vocals` to data/audio/output/3BFTio5296w/vocals.flac                 __main__.py:158
               INFO     wrote stem `guitar` to data/audio/output/3BFTio5296w/guitar.flac                 __main__.py:158
               INFO     wrote stem `piano` to data/audio/output/3BFTio5296w/piano.flac                   __main__.py:158
    [00:00:58] INFO     wrote stem `instrum` to data/audio/output/3BFTio5296w/instrum.flac               __main__.py:158
               INFO     wrote stem `drums_and_bass` to data/audio/output/3BFTio5296w/drums_and_bass.flac __main__.py:158
    

    To update the tool:

    uv tool upgrade splifft --force-reinstall
    

Library

Add splifft to your project:

# latest pypi version
uv add splifft
# latest bleeding edge
uv add git+https://github.com/undef13/splifft.git

This will install the absolute minimal core dependencies used under the src/splifft/models directory. Higher-level components (e.g. inference, training or the CLI) must be installed via optional dependencies, as specified in the project.optional-dependencies section of pyproject.toml, for example:

# enable the built-in configuration, inference and CLI
uv add "splifft[config,inference,cli,web]"

This will install splifft in your venv.

Development

If you'd like to make local changes, it is recommended to enable all optional and developer group dependencies:

git clone https://github.com/undef13/splifft.git
cd splifft
uv venv
uv sync --all-extras --all-groups

You may also want to use --editable with sync. Check your code:

# lint & format
just fmt
# build & host documentation
just docs

This repo is no longer compatible with zfturbo's repo; the last compatible version is v0.0.1. To pin a specific version in uv, change your pyproject.toml:

[tool.uv.sources]
splifft = { git = "https://github.com/undef13/splifft.git", rev = "287235e520f3bb927b58f9f53749fe3ccc248fac" }

Mojo

While the primary goal is just to have a minimalist PyTorch-based inference engine, I will be using this project as an opportunity to learn more about heterogeneous computing, particularly with the Mojo language. The ultimate goal is to understand to what extent its compile-time metaprogramming and explicit memory layout control can be put to use.

My approach will be incremental and bottom-up: I'll develop, test and benchmark small components against their PyTorch counterparts. The PyTorch implementation will always remain the "source of truth" and fully functional baseline, and will never be removed.

TODO:

  • evaluate pixi in pyproject.toml.
  • use max.torch.CustomOpLibrary to provide a callable from the pytorch side
  • use DeviceContext to interact with the GPU
  • attention
  • rotary embedding
  • feedforward
  • transformer
  • BandSplit & MaskEstimator
  • full graph compilation

