
Utility library for easily distributing code execution on clusters

Project description

Cluster Tools


This package provides Python Executor classes for distributing tasks on a Slurm cluster, on Kubernetes, via Dask, or via multiprocessing.

Example

import cluster_tools

def square(n):
  return n * n

if __name__ == '__main__':
  strategy = "slurm"  # other valid values are "kubernetes", "dask", "multiprocessing" and "sequential"
  with cluster_tools.get_executor(strategy) as executor:
    result = list(executor.map(square, [2, 3, 4]))
    assert result == [4, 9, 16]
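Because the executors follow the standard concurrent.futures interface (cluster_tools builds on the clusterfutures abstraction), submit() and futures work the same way as map(). The sketch below illustrates that interface with the stdlib ThreadPoolExecutor, so it runs without a cluster or cluster_tools installed; with cluster_tools you would obtain the executor via get_executor() instead.

```python
from concurrent.futures import ThreadPoolExecutor

def cube(n):
    return n ** 3

# Illustration of the concurrent.futures-style interface that the
# cluster_tools executors mirror, using the stdlib ThreadPoolExecutor
# so the snippet needs no cluster.
with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(cube, 3)
    assert future.result() == 27
    results = list(executor.map(cube, [2, 3, 4]))
    assert results == [8, 27, 64]
```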

Installation

The cluster_tools package requires at least Python 3.8.

You can install it from PyPI, e.g. via pip:

pip install cluster_tools

By default, only the dependencies for running jobs on Slurm and via multiprocessing are installed. For Kubernetes and Dask support, run:

pip install cluster_tools[kubernetes]
pip install cluster_tools[dask]

Configuration

Slurm

cluster_tools automatically determines the Slurm limit for the maximum array job size and splits larger job batches into multiple smaller batches. It also honors the Slurm limit for the maximum number of jobs a user may have submitted at the same time, by looking up the number of currently submitted jobs and only submitting new batches if they fit within the limit.
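The batch splitting described above can be illustrated in plain Python. This is a sketch with hypothetical numbers, not cluster_tools internals:

```python
# Illustrative only: split a batch of jobs into chunks that each fit
# within a maximum array size (hypothetical numbers, not the library's
# actual implementation).
def split_into_batches(num_jobs, max_array_size):
    return [
        range(start, min(start + max_array_size, num_jobs))
        for start in range(0, num_jobs, max_array_size)
    ]

batches = split_into_batches(2500, 1000)
print([len(b) for b in batches])  # [1000, 1000, 500]
```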

If you would like to configure these limits independently, set the SLURM_MAX_ARRAY_SIZE and SLURM_MAX_SUBMIT_JOBS environment variables. You can also limit the maximum number of simultaneously running tasks within the Slurm array job(s) with the SLURM_MAX_RUNNING_SIZE environment variable.
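For example, the limits could be set before launching a script that submits jobs (the values below are hypothetical; tune them to your cluster):

```shell
# Hypothetical limits; adjust to your cluster's actual configuration.
export SLURM_MAX_ARRAY_SIZE=1000   # max jobs per array job
export SLURM_MAX_SUBMIT_JOBS=100   # max jobs submitted at once
export SLURM_MAX_RUNNING_SIZE=50   # max simultaneously running array tasks
python my_submission_script.py
```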

Kubernetes

Resource configuration

| Key | Description | Example |
| --- | --- | --- |
| namespace | Kubernetes namespace for the resources to be created. Will be created if it does not exist. | cluster-tools |
| node_selector | Which nodes to utilize for the processing. Needs to be a Kubernetes nodeSelector object. | {"kubernetes.io/hostname": "node001"} |
| image | The Docker image for the containerized jobs to run in. The image needs to have the same version of cluster_tools and the code to run installed and on the PYTHONPATH. | scalableminds/voxelytics:latest |
| mounts | Additional mounts for the containerized jobs. The current working directory and the .cfut directory are mounted automatically. | ["/srv", "/data"] |
| cpu | CPU requirements for this job. | 4 |
| memory | Memory requirements for this job. Not required, but highly recommended to avoid congestion; without resource requirements, all jobs run in parallel and RAM runs out quickly. | 16G |
| python_executable | The Python executable, which may differ in the Docker image from the one in the current environment. For images based on FROM python, it should be python. Defaults to python. | python3.8 |
| umask | umask for the jobs. | 0002 |
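Putting the resource keys together, a configuration might look like the sketch below. The key names follow the table above, but the exact way they are passed to the executor (shown here as a job_resources keyword) is an assumption; verify it against your cluster_tools version.

```python
# Hypothetical Kubernetes resource configuration assembled from the
# table above; verify key names and the submission call against your
# cluster_tools version.
job_resources = {
    "namespace": "cluster-tools",
    "node_selector": {"kubernetes.io/hostname": "node001"},
    "image": "scalableminds/voxelytics:latest",
    "mounts": ["/srv", "/data"],
    "cpu": 4,
    "memory": "16G",
    "python_executable": "python3.8",
    "umask": "0002",
}

# Submission sketch (requires a reachable Kubernetes cluster and the
# kubernetes extra installed); assumed API, shown commented out:
# import cluster_tools
# with cluster_tools.get_executor("kubernetes", job_resources=job_resources) as executor:
#     results = list(executor.map(my_function, my_args))
```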

Notes

  • The jobs are run with the current uid:gid.
  • The jobs are removed 7 days after completion (successful or not).
  • The logs are stored in the .cfut directory. This is actually redundant, because Kubernetes also stores them.
  • Pods are not restarted upon error.
  • Requires Kubernetes ≥ 1.23.
  • Kubernetes cluster configuration is expected to be the same as for kubectl, i.e. in ~/.kube/config or similar.

Dev Setup

# See ./dockered-slurm/README.md for troubleshooting
cd dockered-slurm
docker-compose up -d
docker exec -it slurmctld bash
docker exec -it c1 bash

Make sure to install all extra dependencies, such as Kubernetes, with poetry install --all-extras.

Tests can be executed with cd tests && poetry run pytest -s tests.py after entering the container. Linting can be run with ./lint.sh. Code formatting (black) can be run with ./format.sh.

Credits

Thanks to sampsyo/clusterfutures for providing the Slurm core abstraction and giovtorres/slurm-docker-cluster for providing the Slurm Docker environment, which we use for CI-based testing.


Download files


Source Distribution

cluster_tools-0.14.22.tar.gz (36.1 kB)

Uploaded Source

Built Distribution


cluster_tools-0.14.22-py3-none-any.whl (45.8 kB)

Uploaded Python 3

File details

Details for the file cluster_tools-0.14.22.tar.gz.

File metadata

  • Download URL: cluster_tools-0.14.22.tar.gz
  • Size: 36.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.6.1 CPython/3.8.18 Linux/6.5.0-1018-azure

File hashes

Hashes for cluster_tools-0.14.22.tar.gz
Algorithm Hash digest
SHA256 dd86a3d749f83f53acc2e909ecf737f5b0abb524f4e4454ead4a669c49a25ab3
MD5 2db8d503df1f5af6fae51c720f932b42
BLAKE2b-256 9ffd4728dfc7af191056ffc997c4a17abd9391447f2e696df4189d506ee20d55


File details

Details for the file cluster_tools-0.14.22-py3-none-any.whl.

File metadata

  • Download URL: cluster_tools-0.14.22-py3-none-any.whl
  • Size: 45.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.6.1 CPython/3.8.18 Linux/6.5.0-1018-azure

File hashes

Hashes for cluster_tools-0.14.22-py3-none-any.whl
Algorithm Hash digest
SHA256 33f1dc3c9105d8251d3c55f1566848a921c2224f3b53c164898774f9773ec2f0
MD5 2e2c33b990947a0acef552f819cc2af9
BLAKE2b-256 66466f18e18dbaaed6692a3e06499b91f35ee0565e597fd4b8cdc54ab9406885

