Project description
MUSIB: Music Inpainting Benchmark
A Python library for building, evaluating, and benchmarking Automated Music Generation models.
Models
Model | Year | Repo | Paper |
---|---|---|---|
DeepBach | 2017 | Repo | Paper |
CocoNet | 2017 | Repo | Paper |
AnticipationRNN | 2018 | Repo | Paper |
InpaintNet | 2019 | Repo | Paper |
Music SketchNet | 2020 | Repo | Paper |
Variable Length Infilling | 2021 | Repo | Paper |
Datasets
Dataset | Size (pieces) | Description | Source | Paper | Type |
---|---|---|---|---|---|
AILabs | 1747 | Live Piano Performances | Source | Paper | Single Instrument Polyphony |
JSB Chorales | 385 | Bach Chorales Scores | Source | - | Fixed Voices Polyphony |
IrishFolk | 45849 | Irish Folk Songs | Source | Paper | Monophony |
Data Representation
Music SketchNet
DEFAULT_FRACTION: 24
# 0-127 note, 128 hold, 129 rest
note_seq: [
[48, 128, 128, 128, 128, 128, 50, 128, 128, 128, 128, 128, 52, 128, 128, 128, 128, 128, 53, 128, 128, 128, 128, 128]
]
# [px, rx, len_x, nrx, gd]
factorized: [
[48, 50, 52, 53, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128],
[1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2],
[4],
[[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]],
[48, 128, 128, 128, 128, 128, 50, 128, 128, 128, 128, 128, 52, 128, 128, 128, 128, 128, 53, 128, 128, 128, 128, 128]
]
model_input: [n_batch, **TO REVIEW**]
model_output: [n_batch, n_measures_middle, DEFAULT_FRACTION, n_classes]
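To make the factorization above concrete, here is a minimal sketch that rebuilds `px`, `rx`, `len_x`, `nrx`, and `gd` from a `note_seq` measure. It assumes the token conventions from the comments (0-127 note onset, 128 hold, 129 rest; onsets map to rhythm token 1 and holds to 2, with 3 assumed for rests). The `factorize` helper and the rest-token mapping are illustrative assumptions, not Music SketchNet's actual preprocessing API.

```python
# Minimal sketch of the measure factorization illustrated above.
# Assumptions: 0-127 = note onset, 128 = hold, 129 = rest (as in the comment
# above); rhythm tokens: onset -> 1, hold -> 2, rest -> 3 (the rest value is
# not shown in the example and is assumed here).
HOLD, REST = 128, 129
DEFAULT_FRACTION = 24  # time steps per measure

def factorize(note_seq):
    onsets = [t for t in note_seq if t < HOLD]
    px = onsets + [HOLD] * (DEFAULT_FRACTION - len(onsets))  # pitch sequence, padded with HOLD
    rx = [1 if t < HOLD else 2 if t == HOLD else 3 for t in note_seq]   # rhythm tokens
    nrx = [[1, 0, 0] if t < HOLD else [0, 1, 0] if t == HOLD else [0, 0, 1]
           for t in note_seq]                                 # one-hot rhythm
    len_x = [len(onsets)]                                     # number of onsets
    gd = list(note_seq)                                       # ground-truth targets
    return px, rx, len_x, nrx, gd

measure = [48, 128, 128, 128, 128, 128, 50, 128, 128, 128, 128, 128,
           52, 128, 128, 128, 128, 128, 53, 128, 128, 128, 128, 128]
px, rx, len_x, nrx, gd = factorize(measure)
assert px[:4] == [48, 50, 52, 53] and len_x == [4]
```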
DeepBach
index2note:
{0: 'D#5', 1: 'E-4', 2: 'E-5', 3: 'rest', 4: 'F#4', 5: 'E#5', 6: 'G#4', 7: 'B4', 8: 'D4', 9: 'A5', 10: 'END', 11: 'G-4', 12: 'C#5', 13: 'G4', 14: 'A3', 15: 'D#4', 16: 'START', 17: 'D5', 18: 'C5', 19: 'F5', 20: 'A-4', 21: 'C4', 22: 'C#4', 23: 'E5', 24: 'E#4', 25: 'A#4', 26: 'D-5', 27: 'E4', 28: 'G-5', 29: 'A-5', 30: 'A4', 31: 'G5', 32: 'B-4', 33: 'F#5', 34: '__', 35: 'F4', 36: 'OOR', 37: 'G#5', 38: 'B3'}
score_tensor = tensor([[36, 34, 34, 34, 36, 34, 34, 34, 34, 34, 34, 34, 8, 34, 34, 34, 38, 34,
34, 34, 34, 34, 14, 34, 36, 34, 34, 34, 36, 34, 34, 34, 34, 34, 14, 34,
38, 34, 34, 34, 14, 34, 34, 34, 34, 34, 34, 34, 38, 34, 34, 34, 8, 34,
34, 34, 34, 34, 34, 34, 21, 34, 34, 34, 38, 34, 34, 34, 14, 34, 34, 34,
34, 34, 34, 34, 36, 34, 34, 34, 34, 34, 34, 34, 38, 34, 34, 34, 38, 34,
34, 34, 21, 34, 34, 34, 8, 34, 34, 34, 8, 34, 34, 34, 34, 34, 21, 34,
38, 34, 34, 34, 14, 34, 34, 34, 34, 34, 34, 34, 36, 34, 34, 34, 38, 34,
34, 34, 34, 34, 34, 34, 21, 34, 34, 34, 8, 34, 34, 34, 34, 34, 34, 34,
21, 34, 34, 34, 38, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 36, 34,
34, 34, 34, 34, 34, 34, 38, 34, 34, 34, 8, 34, 34, 34, 34, 34, 34, 34,
21, 34, 34, 34, 38, 34, 34, 34, 34, 34, 34, 34, 14, 34, 34, 34, 36, 34,
34, 34, 34, 34, 14, 34, 38, 34, 34, 34, 14, 34, 34, 34, 34, 34, 34, 34,
38, 34, 34, 34, 8, 34, 34, 34, 34, 34, 34, 34, 21, 34, 34, 34, 38, 34,
34, 34, 14, 34, 34, 34, 34, 34, 34, 34, 36, 34, 34, 34, 34, 34, 34, 34]])
# Metadata = [Fermata, Tick, Key, N_Voice]
metadata_tensor = tensor([[ 0, 0, 15, 0],
[ 0, 1, 15, 0],
[ 0, 2, 15, 0],
[ 0, 3, 15, 0],
[ 0, 0, 15, 0],
[ 0, 1, 15, 0],
[ 0, 3, 15, 0],
[ 0, 0, 15, 0],
[ 1, 2, 15, 0],
[ 1, 3, 15, 0],
[ 1, 0, 15, 0],
[ 1, 1, 15, 0],
[ 1, 2, 15, 0],
[ 1, 3, 15, 0]])
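As a reading aid (not part of DeepBach itself), the sketch below decodes a short slice of `score_tensor` back into note names with `index2note`, treating index 34 (`'__'`) as "hold the previous note". The `decode` helper and the reduced mapping are illustrative assumptions.

```python
# Decode a DeepBach-style index sequence back to note names using the
# index2note mapping above. Index 34 ('__') is the hold symbol; here we
# simply repeat the sounding note. Subset of the full mapping, for the demo only.
index2note = {8: 'D4', 14: 'A3', 21: 'C4', 34: '__', 36: 'OOR', 38: 'B3'}

def decode(indices, mapping, hold_index=34):
    notes, current = [], None
    for idx in indices:
        if idx == hold_index:
            notes.append(current)      # hold: previous note keeps sounding
        else:
            current = mapping[idx]
            notes.append(current)
    return notes

print(decode([36, 34, 34, 34, 8, 34, 34, 34, 38, 34], index2note))
# -> ['OOR', 'OOR', 'OOR', 'OOR', 'D4', 'D4', 'D4', 'D4', 'B3', 'B3']
```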
Project Organization
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── interim <- CSV data containing all notes coming from raw sources. Intermediate before vectorization.
│ ├── processed <- Vectorization of data ready to feed the models.
│ └── raw <- The original, immutable data dump.
│
├── models <- Trained models weights.
│
├── results <- Generated results, tables, csvs, etc.
│ └── images <- Generated graphics and figures
│
├── environment.yaml <- Libraries and modules required by the environment to reproduce the project.
│
└── src <- Source code for use in this project.
├── __init__.py <- Makes src a Python module
│
├── data <- Scripts to download or generate data
│ ├── download_from_souce.py
│ ├── standardize_data.py
│ └── process_data.py
│
├── features <- Scripts to turn raw data into features for modeling
│ └── build_features.py
│
├── models <- Scripts to train models and generate new data.
│ ├── sketchnet.py
│ ├── inpaintnet.py
│ ├── arnn.py
│ └── vli.py
│
├── metrics <- Scripts to calculate metrics.
│
└── visualization <- Scripts to create exploratory and results-oriented visualizations
└── plot_metrics.py
Project based on the cookiecutter data science project template.
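The layout above suggests a download → standardize → process → feature-building pipeline. The snippet below is a hypothetical driver for that sequence; it assumes each script under `src/` can be executed directly with no arguments, which may not match the project's real entry points.

```python
# Hypothetical pipeline driver for the layout above (assumed invocation order;
# the actual scripts may expect arguments or be imported as modules instead).
import subprocess
import sys

STEPS = [
    "src/data/download_from_souce.py",   # fetch raw data into data/raw
    "src/data/standardize_data.py",      # raw -> data/interim CSVs
    "src/data/process_data.py",          # interim -> data/processed vectors
    "src/features/build_features.py",    # model-ready features
]

for script in STEPS:
    subprocess.run([sys.executable, script], check=True)
```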
Download files
Source Distribution
musib-0.0.3.tar.gz (24.3 kB)
Built Distribution
musib-0.0.3-py3-none-any.whl (28.9 kB)
File details
Details for the file musib-0.0.3.tar.gz.
File metadata
- Download URL: musib-0.0.3.tar.gz
- Upload date:
- Size: 24.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.10.6
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 467f8523f7d759fa3e05585e840d9f119ef67196b1a85070f18574a8920a2015 |
MD5 | d5a81c29561959b5137ecb1d49f3509a |
BLAKE2b-256 | ae49ac7f604c51b1f05a5489f356d679ec1f014da6848c73221dbca4737551f3 |
File details
Details for the file musib-0.0.3-py3-none-any.whl.
File metadata
- Download URL: musib-0.0.3-py3-none-any.whl
- Upload date:
- Size: 28.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.10.6
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 09c4d94bea8fba9ff97e5025f82f78a9dd7ad03f365e1f41a2fa88a50a942d4b |
MD5 | 12b4f30a7e3348e8be9adec5a624f8fe |
BLAKE2b-256 | aeb5424d0c65007b6a91f915ea0aa0a9e689800c2d9493c7445613524c880869 |