Deploy Dask on job queuing systems like PBS or SLURM

Project description

Dask-Jobqueue helps deploy Dask on batch-style job schedulers such as PBS and SLURM.

Example

from dask_jobqueue import PBSCluster
from dask.distributed import Client

# Each PBS job runs 6 worker processes with 4 threads each
# and 16 GB of memory
cluster = PBSCluster(processes=6, threads=4, memory="16GB")
cluster.start_workers(10)  # submit 10 such jobs to the queue

# Connect a client so Dask computations run on the cluster
client = Client(cluster)
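Once a client is connected, ordinary Dask code runs on the cluster's workers. The sketch below uses dask.delayed to build a small task graph; it is self-contained and runs on the default local scheduler here, but with a Client attached the same code would execute on the PBS workers. The `square` and `total` functions are illustrative, not part of the library.

```python
import dask

# Build a small task graph with dask.delayed. With a
# dask.distributed Client active, .compute() ships these
# tasks to the cluster's workers; without one, it falls
# back to the local scheduler.
@dask.delayed
def square(x):
    return x ** 2

@dask.delayed
def total(values):
    return sum(values)

# Sum of squares of 0..9
result = total([square(i) for i in range(10)]).compute()
print(result)  # 285
```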

Adaptivity

Dask-Jobqueue can also adapt the cluster size dynamically based on current load, scaling up when computations demand it and scaling down to release resources when idle.

cluster.adapt()
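In practice it can be useful to bound the adaptive range. A hedged sketch, assuming the `minimum`/`maximum` keywords of dask.distributed's Adaptive class are passed through by `adapt()` (exact support may vary by version); `cluster` is the object created above:

```python
# Keep at least 2 workers alive and never request more than 20,
# letting the scheduler decide within those bounds (assumed kwargs)
cluster.adapt(minimum=2, maximum=20)
```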

History

This package came out of the Pangeo collaboration and was copy-pasted from a live repository at this commit. Unfortunately, the development history was not preserved.

Original developers include the following:

Download files

Download the file for your platform.

Source Distribution

dask-jobqueue-0.1.0.tar.gz (23.9 kB)

Built Distribution

dask_jobqueue-0.1.0-py3-none-any.whl (11.9 kB)
