
MidiTok

Python package to tokenize MIDI music files, presented at the ISMIR 2021 LBD.


Using Deep Learning with symbolic music? MidiTok can take care of converting (tokenizing) your MIDI files into tokens, ready to be fed to models such as Transformers, for any generation, transcription or MIR task. MidiTok features most known MIDI tokenizations (e.g. REMI, Compound Word...), and is built around the idea that they all share common parameters and methods. It supports Byte Pair Encoding (BPE) and data augmentation.

Documentation: miditok.readthedocs.io

Install

pip install miditok

MidiTok uses Miditoolkit, which itself uses Mido, to read and write MIDI files, and BPE is backed by Hugging Face 🤗tokenizers for super-fast encoding.
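To illustrate what BPE does here, a minimal, library-free sketch of one BPE merge step on a sequence of token ids (the function names and ids are illustrative, not MidiTok's API — the real encoding is delegated to 🤗tokenizers):

```python
from collections import Counter

def most_frequent_pair(ids):
    """Return the most common adjacent pair of token ids."""
    pairs = Counter(zip(ids, ids[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(ids, pair, new_id):
    """Replace every occurrence of `pair` with the new token id."""
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = [1, 2, 1, 2, 3, 1, 2]
pair = most_frequent_pair(ids)    # (1, 2) occurs three times
ids = merge_pair(ids, pair, 100)  # → [100, 100, 3, 100]
```

Learning a BPE vocabulary repeats this step until the target vocabulary size is reached, so frequent token successions end up represented by single tokens.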

Usage example

The most basic and useful methods are summarized in the documentation, along with a simple notebook example showing how to use Hugging Face models to generate music, with MidiTok taking care of tokenizing the MIDIs.

from miditok import REMI, TokenizerConfig
from miditoolkit import MidiFile
from pathlib import Path

# Create a multitrack tokenizer configuration; read the docs to explore the other parameters
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = REMI(config)

# Load a MIDI file, convert it to tokens, and convert the tokens back to a MIDI
midi = MidiFile('path/to/your_midi.mid')
tokens = tokenizer(midi)  # calling the tokenizer will automatically detect MIDIs, paths and tokens
converted_back_midi = tokenizer(tokens)  # PyTorch / Tensorflow / Numpy tensors supported

# Tokenize a whole dataset and save it as JSON files
midi_paths = list(Path("path", "to", "dataset").glob("**/*.mid"))
data_augmentation_offsets = [2, 1, 1]  # data augmentation on up to 2 pitch octaves, 1 velocity and 1 duration value
tokenizer.tokenize_midi_dataset(midi_paths, Path("path", "to", "tokens_noBPE"),
                                data_augment_offsets=data_augmentation_offsets)

# Constructs the vocabulary with BPE, from the token files
tokenizer.learn_bpe(
    vocab_size=10000,
    tokens_paths=list(Path("path", "to", "tokens_noBPE").glob("**/*.json")),
    start_from_empty_voc=False,
)

# Saving our tokenizer, to retrieve it back later with the load_params method
tokenizer.save_params(Path("path", "to", "save", "tokenizer.json"))
# And pushing it to the Hugging Face hub (you can download it back with .from_pretrained)
tokenizer.push_to_hub("username/model-name", private=True, token="your_hugging_face_token")

# Apply BPE to the tokens saved earlier
tokenizer.apply_bpe_to_dataset(Path('path', 'to', 'tokens_noBPE'), Path('path', 'to', 'tokens_BPE'))
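The JSON files written above hold token id sequences ready for model training. As a minimal, stdlib-only sketch of turning them into fixed-length training samples (the file layout, the "ids" key and the helper names are assumptions for illustration, not MidiTok's API):

```python
import json
from pathlib import Path

def chunk_ids(ids, seq_len):
    """Split one token id sequence into non-overlapping fixed-length samples,
    dropping a trailing remainder shorter than seq_len."""
    return [ids[i:i + seq_len] for i in range(0, len(ids) - seq_len + 1, seq_len)]

def load_samples(tokens_dir, seq_len=512):
    """Gather training samples from every token JSON file under tokens_dir,
    assuming each file stores one id list per track under an "ids" key."""
    samples = []
    for path in Path(tokens_dir).glob("**/*.json"):
        content = json.loads(path.read_text())
        for track_ids in content["ids"]:
            samples.extend(chunk_ids(track_ids, seq_len))
    return samples
```

The resulting lists of ids can then be wrapped in the dataset / dataloader abstraction of your framework of choice.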

Tokenizations

MidiTok implements most known tokenizations, including REMI, MIDI-Like, TSD, Structured, CPWord, Octuple, MuMIDI and MMM (links to the original papers are in the documentation).

You can find short presentations in the documentation.

Contributions

Contributions are gratefully welcomed, feel free to open an issue or send a PR if you want to add a tokenization or speed up the code. You can read the contribution guide for details.

Todos

  • Extend unimplemented additional tokens to all compatible tokenizations;
  • Control Change messages;
  • Option to represent pitch values as pitch intervals, as it seems to improve performance;
  • Speeding up MIDI read / write (using a Rust / C++ I/O library with Python bindings?);
  • Data augmentation on duration values at the MIDI level.

Citation

If you use MidiTok for your research, a citation in your manuscript would be gladly appreciated. ❤️

[MidiTok paper] [MidiTok original ISMIR publication]

@inproceedings{miditok2021,
    title={{MidiTok}: A Python package for {MIDI} file tokenization},
    author={Fradet, Nathan and Briot, Jean-Pierre and Chhel, Fabien and El Fallah Seghrouchni, Amal and Gutowski, Nicolas},
    booktitle={Extended Abstracts for the Late-Breaking Demo Session of the 22nd International Society for Music Information Retrieval Conference},
    year={2021},
    url={https://archives.ismir.net/ismir2021/latebreaking/000005.pdf},
}

The BibTeX citations of all tokenizations can be found in the documentation.

Acknowledgments

Special thanks to all the contributors. We acknowledge Aubay, the LIP6, LERIA and ESEO for the initial financing and support.
