Machine Learning Experiment Toolbox
Lightweight Management of Distributed ML Experiments 🛠️
Coming up with the right hypothesis is hard - testing it should be easy.
ML researchers need to coordinate different types of experiments on separate remote resources. The Machine Learning Experiment (MLE)-Toolbox is designed to facilitate this workflow by providing a simple interface, standardized logging, and many common ML experiment types (multi-seed/multi-configuration runs, grid searches, and hyperparameter optimization pipelines). You can run experiments on your local machine, on high-performance compute clusters (Slurm and Sun Grid Engine), as well as on cloud VMs (GCP). The results are archived (locally or in a GCS bucket) and can easily be retrieved or automatically summarized/reported.
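For instance, a multi-seed experiment boils down to the pattern below, which the toolbox wraps with job launching, standardized logging, and archiving. This is a plain-Python sketch of the idea, not the toolbox API; `run_experiment` is a placeholder for your own training function:

```python
import statistics

def run_experiment(seed: int) -> float:
    # Placeholder training run: any function mapping a seed to a score.
    return 0.9 + (seed % 3) * 0.01

# Launch the same configuration across multiple random seeds...
seeds = [0, 1, 2, 3, 4]
scores = [run_experiment(s) for s in seeds]

# ...and aggregate the per-seed results into a single summary statistic.
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"mean={mean:.3f} +/- {std:.3f}")
```

The toolbox additionally handles distributing these seeds across cluster/cloud jobs in parallel or asynchronously.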
What Does The `mle-toolbox` Provide?
- API for launching jobs on cluster/cloud computing platforms (Slurm, GridEngine, GCP).
- Common machine learning research experiment setups:
  - Launching and collecting multiple random seeds in parallel/batches or asynchronously.
  - Hyperparameter searches: random, grid, SMBO, PBT and Nevergrad.
- Pre- and post-processing pipelines for data preparation/result visualization.
- Automated report generation for hyperparameter search experiments.
- Storage of results and the experiment database in a Google Cloud Storage bucket.
- Resource monitoring with dashboard visualization.
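As an illustration of what a grid search automates, consider the plain-Python sketch below (again not the toolbox API; the objective is a made-up quadratic and the parameter names are only examples):

```python
import itertools

def objective(lr: float, batch_size: int) -> float:
    # Made-up objective: lower is better, minimized at lr=0.01, batch_size=64.
    return (lr - 0.01) ** 2 + ((batch_size - 64) / 64) ** 2

# Cartesian product over the search grid -- one job per configuration.
grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [32, 64, 128]}
configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]

best = min(configs, key=lambda c: objective(**c))
print(best)  # -> {'lr': 0.01, 'batch_size': 64}
```

The toolbox runs each such configuration as a separate (possibly multi-seed) job and collects the logs for you; random search, SMBO, PBT and Nevergrad replace the Cartesian product with smarter proposal strategies.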
The 4 Step `mle-toolbox` Cooking Recipe 🍲
1. Follow the instructions below to install the `mle-toolbox` and set up your credentials/configurations.
2. Read the docs explaining the pillars of the toolbox & the experiment meta-configuration job `.yaml` files.
3. Check out the examples 📄 to get started: toy ODE integration, training PyTorch MNIST-CNNs or VAEs in JAX.
4. Run your own experiments using the template files, `mle project` and `mle run`.
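A meta-configuration `.yaml` file bundles what to run, how often, and with which resources. The exact schema is described in the docs; the field names below are only an illustrative sketch, not the authoritative format:

```yaml
# Illustrative sketch only -- consult the docs for the actual schema.
meta_job_args:
  project_name: "ode-toy"              # experiment/project identifier
  experiment_type: "hyperparameter-search"
single_job_args:
  num_gpus: 1                          # per-job resource request
  time_per_job: "00:30:00"
param_search_args:
  search_type: "grid"
  params_to_search:
    lr: [0.001, 0.01, 0.1]
```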
Installation ⏳
If you want to use the toolbox on your local machine, follow the instructions below locally. Otherwise, do so on your respective cluster resource (Slurm/SGE). A PyPI installation is available via:
```shell
pip install mle-toolbox
```
Alternatively, you can clone this repository and install it manually:

```shell
git clone https://github.com/mle-infrastructure/mle-toolbox.git
cd mle-toolbox
pip install -e .
```
By default this will only install the minimal dependencies (not including specialized packages such as `scikit-optimize`, `statsmodels`, `nevergrad`, etc.). To run the tests or examples you will need to install additional requirements.
Setting Up Your Remote Credentials 🙈
By default the toolbox will only run locally and without any GCS storage of your experiments. If you want to integrate the `mle-toolbox` with your SGE/Slurm clusters, you have to provide additional data. There are two ways to do so:
- After installation, type `mle init`. This will walk you through all configuration steps in your CLI and save your configuration in `~/mle_config.toml`.
- Manually edit the `config_template.toml` template. Move/rename the template to your home directory via `mv config_template.toml ~/mle_config.toml`.
The configuration procedure consists of 3 optional steps, which depend on your needs:
- Set whether to store all results & your database locally or remotely in a GCS bucket.
- Add SGE and/or Slurm credentials & cluster-specific details (headnode, partitions, proxy server, etc.).
- Add the GCP project, GCS bucket name and database filename to store your results.
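The three optional steps above map onto sections of `~/mle_config.toml`. The sketch below uses illustrative key names only; `mle init` generates the real file interactively and `config_template.toml` shows the authoritative layout:

```toml
# Illustrative sketch -- see config_template.toml for the actual keys.
[general]
use_gcloud_results_storage = false     # step 1: local vs. GCS result storage

[slurm]
user_name = "jane"                     # step 2: cluster credentials/details
partitions = ["gpu", "cpu"]

[gcp]
project_name = "my-gcp-project"        # step 3: GCP project & bucket
gcs_bucket_name = "mle-results"
database_name = "mle_protocol.db"
```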
The Core Commands of the MLE-Toolbox 🌱
You are now ready to dive deeper into the specifics of job configuration and can start running your first experiments from the cluster (or locally on your machine) with the following commands:
| | Command | Description |
|---|---|---|
| ⏳ | `mle init` | Setup of credentials & toolbox settings. |
| 🚀 | `mle run` | Start up an experiment. |
| 🖥️ | `mle monitor` | Monitor resource utilisation. |
| 📥 | `mle retrieve` | Retrieve an experiment result. |
| 💌 | `mle report` | Create an experiment report with figures. |
| 🔄 | `mle sync-gcs` | Extract all GCS-stored results to your local drive. |
| 🔄 | `mle project` | Initialize a new project by cloning `mle-project`. |
| 📝 | `mle protocol` | List a summary of the most recent experiments. |
Examples 🎒
| | Job Types | Description |
|---|---|---|
| 📄 Single-Objective | `multi-configs`, `hyperparameter-search` | Core experiment types. |
| 📄 Multi-Objective | `hyperparameter-search` | Multi-objective tuning. |
| 📄 Multi Bash | `multi-configs` | Bash-based jobs. |
| 📄 Quadratic PBT | `population-based-training` | PBT on a toy quadratic surrogate. |
| 📓 Evaluation | - | Evaluation of grid search results. |
| 📓 GIF Animations | - | Walk through a set of animation helpers. |
| 📓 Testing | - | Perform hypothesis tests on logs. |
| 📓 PBT Evaluation | - | Inspect the results from PBT. |
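The testing example compares logged scores between runs. The underlying idea can be sketched with a plain-Python permutation test; this is illustrative only (the example notebook's actual code and the run values below are not from the toolbox):

```python
import random
import statistics

def permutation_test(a, b, num_permutations=2000, seed=0):
    """Estimate a two-sample p-value for the difference of means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(num_permutations):
        # Shuffle the pooled scores and re-split into two pseudo-groups.
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[: len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            hits += 1
    return hits / num_permutations  # fraction of shuffles at least as extreme

# Hypothetical final scores logged by two experiment configurations.
run_a = [0.81, 0.84, 0.79, 0.86, 0.83]
run_b = [0.70, 0.72, 0.69, 0.74, 0.71]
print(permutation_test(run_a, run_b))  # small p-value -> likely a real difference
```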
Acknowledgements & Citing `mle-toolbox` ✏️
To cite this repository:
```bibtex
@software{mle_infrastructure2021github,
  author = {Robert Tjarko Lange},
  title = {{MLE-Infrastructure}: A Set of Lightweight Tools
           for Distributed Machine Learning Experimentation},
  url = {http://github.com/mle-infrastructure},
  version = {0.3.0},
  year = {2021},
}
```
Much of the `mle-toolbox` design has been inspired by discussions with Jonathan Frankle and Nandan Rao about the quest for empirically sound and supported claims in machine learning. Parts of the `mle <subcommands>` were inspired by Tudor Berariu's Liftoff package, and parts of the philosophy by wanting to provide a lightweight version of IDSIA's Sacred package. Further credit goes to Facebook's `submitit` and Ray.
Notes, Development & Questions ❓
- If you find a bug or want a new feature, feel free to contact me @RobertTLange or create an issue 🤗
- You can check out the history of release modifications in `CHANGELOG.md` (added, changed, fixed).
- You can find a set of open milestones in `CONTRIBUTING.md`.