Machine Learning Experiment Job Scheduler
Lightweight Cluster/Cloud VM Job Management 🚀
Are you looking for a tool to manage your training runs locally, on Slurm/Open Grid Engine clusters, SSH servers, or Google Cloud Platform VMs? mle-scheduler provides a lightweight API to launch and monitor job queues. It smoothly orchestrates simultaneous runs for different configurations and/or random seeds, reduces boilerplate, and makes job resource specification intuitive. It comes with two core pillars:
- MLEJob: Launches and monitors a single job on a resource (Slurm, Open Grid Engine, GCP, SSH, etc.).
- MLEQueue: Launches and monitors a queue of jobs with different training configurations and/or random seeds.
For a quickstart, check out the notebook blog or the example scripts 📖
Local 🚀 | Slurm 🐒 | Grid Engine 🐘 | SSH 🦊 | GCP 🦄
Installation ⏳
pip install mle-scheduler
Managing a Single Job with MLEJob
Locally 🚀
from mle_scheduler import MLEJob
# python train.py -config base_config_1.yaml -exp_dir logs_single -seed_id 1
job = MLEJob(
    resource_to_run="local",
    job_filename="train.py",
    config_filename="base_config_1.yaml",
    experiment_dir="logs_single",
    seed_id=1,
)
_ = job.run()
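Under the hood, the job simply executes the command shown in the comment above. A minimal sketch of a train.py that consumes these flags (the flag names follow the comment; PyYAML and everything beyond argument handling are illustrative assumptions):

import argparse
import yaml


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-config", type=str, help="Path to YAML configuration")
    parser.add_argument("-seed_id", type=int, default=0, help="Random seed for this run")
    parser.add_argument("-exp_dir", type=str, default="logs", help="Directory for run outputs")
    args = parser.parse_args()

    # Load the experiment configuration passed in by mle-scheduler
    with open(args.config) as f:
        config = yaml.safe_load(f)

    # ... seed your RNGs with args.seed_id, train, and write logs/checkpoints to args.exp_dir ...


if __name__ == "__main__":
    main()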
Managing a Queue of Jobs with MLEQueue
Locally 🚀...🚀
from mle_scheduler import MLEQueue
# python train.py -config base_config_1.yaml -seed 0 -exp_dir logs_queue/<date>_base_config_1
# python train.py -config base_config_1.yaml -seed 1 -exp_dir logs_queue/<date>_base_config_1
# python train.py -config base_config_2.yaml -seed 0 -exp_dir logs_queue/<date>_base_config_2
# python train.py -config base_config_2.yaml -seed 1 -exp_dir logs_queue/<date>_base_config_2
queue = MLEQueue(
    resource_to_run="local",
    job_filename="train.py",
    config_filenames=["base_config_1.yaml",
                      "base_config_2.yaml"],
    random_seeds=[0, 1],
    experiment_dir="logs_queue",
)
queue.run()
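As the comments indicate, each configuration gets its own timestamped subdirectory below logs_queue. After queue.run() returns, you can sweep over these directories to aggregate results. A rough sketch (the results.json filename is purely hypothetical and depends on what your train.py writes):

import json
from pathlib import Path

results = {}
for run_dir in sorted(Path("logs_queue").glob("*_base_config_*")):
    result_file = run_dir / "results.json"  # hypothetical output written by train.py
    if result_file.exists():
        results[run_dir.name] = json.loads(result_file.read_text())

print(results)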
Launching Slurm Cluster-Based Jobs 🐒
# Each job requests 5 CPU cores & 1 V100S GPU & loads CUDA 10.0
job_args = {
    "partition": "<SLURM_PARTITION>",       # Partition to schedule jobs on
    "env_name": "mle-toolbox",              # Env to activate at job start-up
    "use_conda_venv": True,                 # Whether to use anaconda venv
    "num_logical_cores": 5,                 # Number of requested CPU cores per job
    "num_gpus": 1,                          # Number of requested GPUs per job
    "gpu_type": "V100S",                    # GPU model requested for each job
    "modules_to_load": "nvidia/cuda/10.0",  # Modules to load at start-up
}
queue = MLEQueue(
    resource_to_run="slurm-cluster",
    job_filename="train.py",
    job_arguments=job_args,
    config_filenames=["base_config_1.yaml",
                      "base_config_2.yaml"],
    experiment_dir="logs_slurm",
    random_seeds=[0, 1],
)
queue.run()
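The same job_arguments also work for submitting a single Slurm run via MLEJob instead of a full queue; a sketch, assuming MLEJob accepts the job_arguments keyword in the same way as MLEQueue:

job = MLEJob(
    resource_to_run="slurm-cluster",
    job_filename="train.py",
    config_filename="base_config_1.yaml",
    experiment_dir="logs_slurm_single",
    seed_id=0,
    job_arguments=job_args,  # assumed to mirror the MLEQueue keyword
)
job.run()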
Launching GridEngine Cluster-Based Jobs 🐘
# Each job requests 5 CPU cores & 1 V100S GPU w. CUDA 10.0 loaded
job_args = {
    "queue": "<GRID_ENGINE_QUEUE>",  # Queue to schedule jobs on
    "env_name": "mle-toolbox",       # Env to activate at job start-up
    "use_conda_venv": True,          # Whether to use anaconda venv
    "num_logical_cores": 5,          # Number of requested CPU cores per job
    "num_gpus": 1,                   # Number of requested GPUs per job
    "gpu_type": "V100S",             # GPU model requested for each job
    "gpu_prefix": "cuda",            # Maps to `#$ -l {gpu_prefix}="{num_gpus}"`
}
queue = MLEQueue(
    resource_to_run="sge-cluster",
    job_filename="train.py",
    job_arguments=job_args,
    config_filenames=["base_config_1.yaml",
                      "base_config_2.yaml"],
    experiment_dir="logs_grid_engine",
    random_seeds=[0, 1],
)
queue.run()
Launching SSH Server-Based Jobs 🦊
ssh_settings = {
    "user_name": "<SSH_USER_NAME>",  # SSH server user name
    "pkey_path": "<PKEY_PATH>",      # Private key path (e.g. ~/.ssh/id_rsa)
    "main_server": "<SSH_SERVER>",   # SSH server address
    "jump_server": "",               # Jump host address
    "ssh_port": 22,                  # SSH port
    "remote_dir": "mle-code-dir",    # Dir to sync code to on server
    "start_up_copy_dir": True,       # Whether to copy code to server
    "clean_up_remote_dir": True,     # Whether to delete remote_dir on exit
}

job_args = {
    "env_name": "mle-toolbox",  # Env to activate at job start-up
    "use_conda_venv": True,     # Whether to use anaconda venv
}
queue = MLEQueue(
    resource_to_run="ssh-node",
    job_filename="train.py",
    config_filenames=["base_config_1.yaml",
                      "base_config_2.yaml"],
    random_seeds=[0, 1],
    experiment_dir="logs_ssh_queue",
    job_arguments=job_args,
    ssh_settings=ssh_settings,
)
queue.run()
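If the compute server is only reachable through a gateway, you can additionally fill in the jump_server field; a sketch with placeholder addresses:

ssh_settings = {
    "user_name": "<SSH_USER_NAME>",    # SSH server user name
    "pkey_path": "~/.ssh/id_rsa",      # Private key path
    "main_server": "<SSH_SERVER>",     # Final SSH server address
    "jump_server": "<SSH_JUMP_HOST>",  # Gateway host to tunnel through (placeholder)
    "ssh_port": 22,
    "remote_dir": "mle-code-dir",
    "start_up_copy_dir": True,
    "clean_up_remote_dir": True,
}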
Launching GCP VM-Based Jobs 🦄
cloud_settings = {
    "project_name": "<GCP_PROJECT_NAME>",  # Name of your GCP project
    "bucket_name": "<GCS_BUCKET_NAME>",    # Name of your GCS bucket
    "remote_dir": "<GCS_CODE_DIR_NAME>",   # Name of code dir in bucket
    "start_up_copy_dir": True,             # Whether to copy code to bucket
    "clean_up_remote_dir": True,           # Whether to delete remote_dir on exit
}

job_args = {
    "num_gpus": 0,           # Number of requested GPUs per job
    "gpu_type": None,        # GPU requested, e.g. "nvidia-tesla-v100"
    "num_logical_cores": 1,  # Number of requested CPU cores per job
}
queue = MLEQueue(
    resource_to_run="gcp-cloud",
    job_filename="train.py",
    config_filenames=["base_config_1.yaml",
                      "base_config_2.yaml"],
    random_seeds=[0, 1],
    experiment_dir="logs_gcp_queue",
    job_arguments=job_args,
    cloud_settings=cloud_settings,
)
queue.run()
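To request accelerators on the GCP VMs, the same keys can be adjusted; a sketch using the GPU type string from the comment above (the core count is an arbitrary example, and availability depends on your GCP quota and zone):

job_args = {
    "num_gpus": 1,                    # Attach one GPU to each VM
    "gpu_type": "nvidia-tesla-v100",  # GCP accelerator type string
    "num_logical_cores": 8,           # Example vCPU count per job
}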
Development & Milestones for Next Release
You can run the test suite via python -m pytest -vv tests/. If you find a bug or are missing your favourite feature, feel free to contact me @RobertTLange or create an issue 🤗. In future releases I plan on implementing the following:
- Clean up TPU GCP VM & JAX dependencies case
- Add local launching of cluster jobs via SSH to headnode
- Add Docker/Singularity container setup support
- Add Azure support
- Add AWS support