slurm sweeps

A simple tool to perform parameter sweeps on SLURM clusters.
The main motivation was to provide a lightweight ASHA implementation for SLURM clusters that is fully compatible with PyTorch Lightning's DDP.

It is heavily inspired by tools like Ray Tune and Optuna. However, on a SLURM cluster, these tools can be complicated to set up and introduce considerable overhead.

Slurm sweeps is simple and lightweight, with few dependencies. It uses SLURM job steps to run the individual trials.

Installation

pip install slurm-sweeps

Dependencies

  • cloudpickle
  • numpy
  • pandas
  • pyyaml

Usage

You can just run this example on your laptop. By default, the maximum number of parallel trials equals the number of CPUs on your machine.

""" Content of test_ss.py """
from time import sleep
import slurm_sweeps as ss


# Define your train function
def train(cfg: dict):
    for epoch in range(cfg["epochs"]):
        sleep(0.5)
        loss = (cfg["parameter"] - 1) ** 2 * epoch
        # log your metrics
        ss.log({"loss": loss}, epoch)


# Define your experiment
experiment = ss.Experiment(
    train=train,
    cfg={
        "epochs": 10,
        "parameter": ss.Uniform(0, 2),
    },
    asha=ss.ASHA(metric="loss", mode="min"),
)


# Run your experiment
dataframe = experiment.run(n_trials=1000)

# Your results are stored in a pandas DataFrame
print(f"\nBest trial:\n{dataframe.sort_values('loss').iloc[0]}")

Or submit it to a SLURM cluster. Write a small SLURM script test_ss.slurm that runs the code above:

#!/bin/bash -l
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=18
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1GB

python test_ss.py

By default, this will run $SLURM_NTASKS trials in parallel. In the case above: 2 nodes * 18 tasks per node = 36 parallel trials.
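
If each trial itself requests more than one Slurm task, the default concurrency shrinks accordingly (see max_concurrent_trials in the API documentation below). A minimal sketch, reusing the train function from above and assuming each trial is given two tasks via the SlurmBackend documented below:

import slurm_sweeps as ss

# The SLURM script above provides 2 nodes * 18 tasks = 36 tasks in total.
# With 2 tasks per trial, at most 36 / 2 = 18 trials run concurrently.
experiment = ss.Experiment(
    train=train,
    cfg={"epochs": 10, "parameter": ss.Uniform(0, 2)},
    backend=ss.SlurmBackend(ntasks=2),
)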

Then submit it to the queue:

sbatch test_ss.slurm

See the tests folder for an advanced example of training a PyTorch model with Lightning's DDP.

API Documentation

CLASS slurm_sweeps.Experiment

def __init__(
    self,
    train: Callable,
    cfg: Dict,
    name: str = "MySweep",
    local_dir: Union[str, Path] = "./slurm_sweeps",
    backend: Optional[Backend] = None,
    asha: Optional[ASHA] = None,
    restore: bool = False,
    overwrite: bool = False,
)

Set up an HPO experiment.

Arguments:

  • train - A train function that takes as input the cfg dict.
  • cfg - A dict passed on to the train function. It must contain the search spaces via slurm_sweeps.Uniform, slurm_sweeps.Choice, etc.
  • name - The name of the experiment.
  • local_dir - Where to store and run the experiments. In this directory we will create the database slurm_sweeps.db and a folder with the experiment name.
  • backend - A backend to execute the trials. By default, we choose the SlurmBackend if Slurm is available, otherwise we choose the standard Backend that simply executes the trial in another process.
  • asha - An optional ASHA instance to cancel less promising trials.
  • restore - Restore an experiment with the same name?
  • overwrite - Overwrite an existing experiment with the same name?
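
A hedged sketch of a typical setup (the hyperparameter names and values are made up for illustration, and ss.Choice is assumed to accept a list of options):

import slurm_sweeps as ss


def train(cfg: dict):
    # Hypothetical objective; replace with your real training loop.
    ss.log({"loss": (cfg["lr"] - 0.1) ** 2}, 0)


experiment = ss.Experiment(
    train=train,
    cfg={
        "lr": ss.Uniform(0.0, 0.2),              # continuous search space
        "batch_size": ss.Choice([32, 64, 128]),  # categorical search space
    },
    name="MySweep",
    local_dir="./slurm_sweeps",
    asha=ss.ASHA(metric="loss", mode="min"),
    overwrite=True,  # discard an existing experiment of the same name
)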

Experiment.run

def run(
    self,
    n_trials: int = 1,
    max_concurrent_trials: Optional[int] = None,
    summary_interval_in_sec: float = 5.0,
    nr_of_rows_in_summary: int = 10,
    summarize_cfg_and_metrics: Union[bool, List[str]] = True,
) -> pd.DataFrame

Run the experiment.

Arguments:

  • n_trials - Number of trials to run. For grid searches this parameter is ignored.
  • max_concurrent_trials - The maximum number of trials running concurrently. By default, we will set this to the number of available CPUs, or, on a Slurm cluster, to the total number of Slurm tasks divided by the number of Slurm tasks requested per trial.
  • summary_interval_in_sec - Print a summary of the experiment every x seconds.
  • nr_of_rows_in_summary - How many rows of the summary table should we print?
  • summarize_cfg_and_metrics - Should we include the cfg and the metrics in the summary table? You can also pass in a list of strings to only select a few cfg and metric keys.

Returns:

A DataFrame of the database.
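
Continuing the Usage example above, a hedged call that caps concurrency and trims the summary table to a few keys (all values are illustrative):

dataframe = experiment.run(
    n_trials=100,                  # sample 100 trials from the search space
    max_concurrent_trials=8,       # never run more than 8 trials at once
    summary_interval_in_sec=10.0,  # print a summary every 10 seconds
    nr_of_rows_in_summary=5,       # show only the 5 best rows
    summarize_cfg_and_metrics=["parameter", "loss"],  # columns to display
)
print(dataframe.sort_values("loss").head())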

CLASS slurm_sweeps.SlurmBackend

def __init__(
    self,
    exclusive: bool = True,
    nodes: int = 1,
    ntasks: int = 1,
    args: str = ""
)

Execute the training runs on a Slurm cluster via srun.

Pass an instance of this class to your experiment.

Arguments:

  • exclusive - Add the --exclusive switch.
  • nodes - How many nodes do you request for your srun?
  • ntasks - How many tasks do you request for your srun?
  • args - Additional command line arguments for srun, formatted as a string.
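
For example, to give every trial two tasks on a single node (as you might for Lightning's DDP with two GPUs), a sketch along these lines, reusing train and cfg from the Usage example:

import slurm_sweeps as ss

backend = ss.SlurmBackend(
    exclusive=True,  # pass the --exclusive switch to srun
    nodes=1,         # keep each trial on one node
    ntasks=2,        # e.g. one task per GPU for DDP
    args="--cpus-per-task=4 --mem-per-cpu=1GB",  # extra srun arguments
)

experiment = ss.Experiment(train=train, cfg=cfg, backend=backend)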

Contact

David Carreto Fidalgo (david.carreto.fidalgo@mpcdf.mpg.de)

