# MLFlow-Slurm

Backend for executing MLFlow projects on a Slurm batch system.
## Usage

Install this package in the environment from which you will be submitting jobs. If you submit jobs from inside other jobs, make sure this package is listed in your conda or pip environment.
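For example, with pip (the package is published on PyPI as `mlflow-slurm`):

```shell
pip install mlflow-slurm
```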
To use it, pass `slurm` as the `--backend` of your `mlflow run` command. You should also include a JSON config file to control how the batch script is constructed:

```shell
mlflow run --backend slurm \
    --backend-config slurm_config.json \
    examples/sklearn_elasticnet_wine
```
It will generate a batch script named after the job ID and submit it via the Slurm `sbatch` command. It will then tag the run with the Slurm job ID.
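For illustration only, here is a hypothetical sketch of the kind of script the backend might generate; the exact contents depend on your settings, and the directives below simply mirror the config properties described in the next section:

```shell
#!/bin/bash
#SBATCH --partition=gpu        # from "partition" in slurm_config.json
#SBATCH --account=my_project   # from "account"
#SBATCH --nodes=1              # from "nodes"
#SBATCH --time=4:00:00         # from "time"
module load cuda/11.8          # from "modules"
# ...followed by the command that runs the MLflow project entry point
```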
## Configure Jobs

You can set values in a JSON file to control job submission (a sample file follows the table below). The supported properties in this file are:
| Config File Setting | Use |
|---|---|
| partition | Which Slurm partition the job should run in |
| account | Account name to run under |
| environment | List of additional environment variables to add to the job |
| exports | List of environment variables to export to the job |
| gpus_per_node | How many GPUs to allocate per node on GPU partitions |
| gres | Slurm generic resource (GRES) requests |
| mem | Amount of memory to allocate to CPU jobs |
| modules | List of modules to load before starting the job |
| nodes | Number of nodes to request from Slurm |
| ntasks | Number of tasks to run on each node |
| exclusive | Set to true to ensure the job doesn't share a node with other jobs |
| time | Maximum wall time the job may run |
| sbatch-script-file | Name of the batch file to be produced. Leave blank to have the backend generate a script file name based on the run ID |
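As a sketch, a config file using a subset of these settings might look like the following. All values here are illustrative; use the partitions, accounts, and limits appropriate to your cluster:

```json
{
    "partition": "gpu",
    "account": "my_project",
    "gpus_per_node": 2,
    "modules": ["cuda/11.8"],
    "nodes": 1,
    "ntasks": 1,
    "exclusive": true,
    "time": "4:00:00"
}
```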
## Sequential Worker Jobs

There are occasions where a job can't finish within the maximum allowable wall time. If you are able to write out a checkpoint file, you can use sequential worker jobs to continue the job where it left off. This is useful for training deep learning models and other long-running jobs.
To use this, just provide a `sequential_workers` parameter to the `mlflow run` command:

```shell
mlflow run --backend slurm -c ../../slurm_config.json -P sequential_workers=3 .
```
This will submit the job as normal, but also submit 3 additional jobs, each depending on the previous one. As soon as the first job terminates, the next job starts. This continues until all of the jobs have completed.
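For this to work, your project's entry point has to pick up where the previous worker stopped. A minimal sketch of the pattern, assuming a checkpoint path of your choosing (`checkpoint.pkl` here is hypothetical, and the file must live somewhere visible to all of the sequential jobs):

```python
import os
import pickle

CHECKPOINT = "checkpoint.pkl"  # hypothetical path; must be shared across jobs

# Resume from the previous worker's checkpoint if one exists.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        start_epoch = pickle.load(f)
else:
    start_epoch = 0

for epoch in range(start_epoch, 100):
    # ... one unit of real training work would go here ...

    # Save progress after every epoch so the next worker can resume.
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(epoch + 1, f)
```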
## Development

The slurm docker deployment is handy for testing and development. You can start up a Slurm environment with the included docker-compose file.
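For example, assuming the compose file sits at the root of the repository checkout:

```shell
docker compose up -d   # start the Slurm cluster containers in the background
docker compose down    # tear the cluster down when you are finished
```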