slurm sweeps
A simple tool to perform parameter sweeps on SLURM clusters.
The main motivation was to provide a lightweight ASHA (Asynchronous Successive Halving Algorithm) implementation for SLURM clusters that is fully compatible with PyTorch Lightning's DDP.
It is heavily inspired by tools like Ray Tune and Optuna. However, on a SLURM cluster, these tools can be complicated to set up and introduce considerable overhead.
Slurm sweeps is simple, lightweight, and has few dependencies. It uses SLURM Job Steps to run the individual trials.
Installation
pip install slurm-sweeps
Dependencies
- cloudpickle
- numpy
- pandas
- pyyaml
Usage
You can just run this example on your laptop. By default, the maximum number of parallel trials equals the number of CPUs on your machine.
""" Content of test_ss.py """
from time import sleep
import slurm_sweeps as ss
# Define your train function
def train(cfg: dict):
for epoch in range(cfg["epochs"]):
sleep(0.5)
loss = (cfg["parameter"] - 1) ** 2 * epoch
# log your metrics
ss.log({"loss": loss}, epoch)
# Define your experiment
experiment = ss.Experiment(
train=train,
cfg={
"epochs": 10,
"parameter": ss.Uniform(0, 2),
},
asha=ss.ASHA(metric="loss", mode="min"),
)
# Run your experiment
dataframe = experiment.run(n_trials=1000)
# Your results are stored in a pandas DataFrame
print(f"\nBest trial:\n{dataframe.sort_values('loss').iloc[0]}")
Or submit it to a SLURM cluster. Write a small SLURM script test_ss.slurm that runs the code above:
#!/bin/bash -l
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=18
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1GB
python test_ss.py
By default, this will run $SLURM_NTASKS trials in parallel. In the case above: 2 nodes * 18 tasks per node = 36 parallel trials.
Then submit it to the queue:
sbatch test_ss.slurm
See the tests folder for an advanced example of training a PyTorch model with Lightning's DDP.
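In rough terms, such a train function builds a Lightning Trainer from the cfg dict and reports a metric back via ss.log. The following is only a sketch with a toy model and made-up config keys (lr, epochs), not the actual test from the repository; it assumes torch and lightning are installed:

""" Sketch of a Lightning-based train function (hypothetical example) """
import lightning as L
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

import slurm_sweeps as ss


class TinyRegressor(L.LightningModule):
    """A toy model whose learning rate comes from the sweep config."""

    def __init__(self, lr: float):
        super().__init__()
        self.lr = lr
        self.layer = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)


def train(cfg: dict):
    dataset = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
    trainer = L.Trainer(
        max_epochs=cfg["epochs"],
        accelerator="cpu",  # for DDP you would configure devices/strategy to match your srun tasks
        enable_progress_bar=False,
        logger=False,
    )
    trainer.fit(TinyRegressor(lr=cfg["lr"]), DataLoader(dataset, batch_size=32))
    # report the final training loss back to slurm sweeps
    ss.log({"loss": trainer.callback_metrics["train_loss"].item()}, cfg["epochs"])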
API Documentation
CLASS slurm_sweeps.Experiment
def __init__(
self,
train: Callable,
cfg: Dict,
name: str = "MySweep",
local_dir: Union[str, Path] = "./slurm_sweeps",
backend: Optional[Backend] = None,
asha: Optional[ASHA] = None,
restore: bool = False,
overwrite: bool = False,
)
Set up an HPO experiment.
Arguments:
- train - A train function that takes as input the cfg dict.
- cfg - A dict passed on to the train function. It must contain the search spaces via slurm_sweeps.Uniform, slurm_sweeps.Choice, etc.
- name - The name of the experiment.
- local_dir - Where to store and run the experiments. In this directory we will create the database slurm_sweeps.db and a folder with the experiment name.
- backend - A backend to execute the trials. By default, we choose the SlurmBackend if Slurm is available, otherwise we choose the standard Backend that simply executes the trial in another process.
- asha - An optional ASHA instance to cancel less promising trials.
- restore - Restore an experiment with the same name?
- overwrite - Overwrite an existing experiment with the same name?
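For illustration, an experiment that stores its results in a custom local_dir and overwrites a previous run of the same name could be set up like this (a sketch; train is a train(cfg) function as in the usage example, and the config keys are made up):

import slurm_sweeps as ss

experiment = ss.Experiment(
    train=train,                           # your train(cfg) function
    cfg={"lr": ss.Uniform(1e-4, 1e-1)},    # hypothetical search space
    name="LrSweep",
    local_dir="./sweeps",
    asha=ss.ASHA(metric="loss", mode="min"),
    restore=False,    # set to True to continue an experiment with the same name
    overwrite=True,   # start from scratch, discarding a previous "LrSweep"
)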
Experiment.run
def run(
self,
n_trials: int = 1,
max_concurrent_trials: Optional[int] = None,
summary_interval_in_sec: float = 5.0,
nr_of_rows_in_summary: int = 10,
summarize_cfg_and_metrics: Union[bool, List[str]] = True,
) -> pd.DataFrame
Run the experiment.
Arguments:
- n_trials - Number of trials to run. For grid searches, this parameter is ignored.
- max_concurrent_trials - The maximum number of trials running concurrently. By default, we set this to the number of CPUs available, or to the total number of Slurm tasks divided by the number of Slurm tasks requested per trial.
- summary_interval_in_sec - Print a summary of the experiment every x seconds.
- nr_of_rows_in_summary - How many rows of the summary table should we print?
- summarize_cfg_and_metrics - Should we include the cfg and the metrics in the summary table? You can also pass in a list of strings to only select a few cfg and metric keys.
Returns:
A DataFrame of the database.
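For example, to run 200 trials with at most 4 running concurrently and a leaner summary table (a sketch, reusing the experiment defined above):

dataframe = experiment.run(
    n_trials=200,
    max_concurrent_trials=4,
    summary_interval_in_sec=30.0,
    nr_of_rows_in_summary=5,
    summarize_cfg_and_metrics=["parameter", "loss"],  # only show these columns
)
print(dataframe.sort_values("loss").head())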
CLASS slurm_sweeps.SlurmBackend
def __init__(
self,
exclusive: bool = True,
nodes: int = 1,
ntasks: int = 1,
args: str = ""
)
Execute the training runs on a Slurm cluster via srun. Pass an instance of this class to your experiment.
Arguments:
- exclusive - Add the --exclusive switch.
- nodes - How many nodes do you request for your srun?
- ntasks - How many tasks do you request for your srun?
- args - Additional command line arguments for srun, formatted as a string.
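As an illustration, a backend that gives every trial its own srun job step with 4 tasks plus extra srun arguments might look like this (a sketch; the exact resources depend on your cluster and train function):

import slurm_sweeps as ss

backend = ss.SlurmBackend(
    exclusive=True,
    nodes=1,
    ntasks=4,
    args="--cpus-per-task=2 --mem-per-cpu=2GB",
)

experiment = ss.Experiment(
    train=train,       # your train(cfg) function
    cfg=cfg,           # your search space dict
    backend=backend,   # each trial now runs as its own 4-task job step
)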
Contact
David Carreto Fidalgo (david.carreto.fidalgo@mpcdf.mpg.de)