
Dagster integration with Slurm


dagster-slurm

dagster-slurm integrates Dagster to orchestrate Slurm jobs on HPC systems, together with compute-scaling frameworks such as Ray, for a better developer experience on supercomputers.

dagster-slurm lets you take the same Dagster assets from a laptop to a Slurm-backed supercomputer with minimal configuration changes.
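
For a flavor of what that looks like, here is a minimal sketch using only vanilla Dagster (no dagster-slurm API shown): the asset body stays the same everywhere, and the selected deployment profile decides whether it executes locally or as a Slurm job.

from dagster import Definitions, asset

@asset
def word_counts() -> dict[str, int]:
    # Ordinary Python: nothing in the asset body knows whether it
    # runs in a local process or inside a Slurm allocation; that is
    # supplied by the deployment's resources, not by this code.
    text = "the same asset runs on a laptop and on a supercomputer"
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

defs = Definitions(assets=[word_counts])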

A European sovereign GPU cloud does not come out of nowhere; perhaps this project can help make HPC systems more accessible.

Basic example

https://github.com/ascii-supply-networks/dagster-slurm/tree/main/examples

prerequisites

  • install pixi (https://pixi.sh/latest/installation/): curl -fsSL https://pixi.sh/install.sh | sh
  • pixi global install git
  • a container runtime such as Docker or Podman; the examples assume docker compose is available, but nerdctl or something similar works as well.

usage

Example

git clone https://github.com/ascii-supply-networks/dagster-slurm.git
cd dagster-slurm/examples
docker compose up

local execution

Execute without Slurm.

  • Small data
  • Rapid local prototyping

pixi run start

Go to http://localhost:3000 and you should see the Dagster webserver running.

docker local execution

  • Test that everything works on Slurm
  • Still small data
  • Mainly used for developing this integration

Ensure you have a .env file with the following content:

SLURM_EDGE_NODE_HOST=localhost
SLURM_EDGE_NODE_PORT=2223
SLURM_EDGE_NODE_USER=submitter
SLURM_EDGE_NODE_PASSWORD=submitter
SLURM_DEPLOYMENT_BASE_PATH=/home/submitter/pipelines/deployments

Then run:

pixi run start-staging

Go to http://localhost:3000 and you should see the Dagster webserver running.
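
As an optional pre-flight check (a sketch, not part of the project), you can verify the required variables are actually loaded before starting:

import os

required = [
    "SLURM_EDGE_NODE_HOST",
    "SLURM_EDGE_NODE_PORT",
    "SLURM_EDGE_NODE_USER",
    "SLURM_EDGE_NODE_PASSWORD",
    "SLURM_DEPLOYMENT_BASE_PATH",
]
# Report every missing .env entry at once instead of failing one by one.
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"missing .env entries: {', '.join(missing)}")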

prod docker local execution

  • Test that everything works on Slurm
  • Still small data
  • Mainly used for developing this integration
  • This target additionally supports faster job startup

Ensure you have a .env file with the following content:

SLURM_EDGE_NODE_HOST=localhost
SLURM_EDGE_NODE_PORT=2223
SLURM_EDGE_NODE_USER=submitter
SLURM_EDGE_NODE_PASSWORD=submitter
SLURM_DEPLOYMENT_BASE_PATH=/home/submitter/pipelines/deployments

# see the jq command below for dynamically setting this
# DAGSTER_PROD_ENV_PATH=/home/submitter/pipelines/deployments/<<<your deployment >>>
# we assume your CI/CD pipeline deploys the environment out of band
# this allows your jobs to start up faster

Then run:

pixi run deploy-prod-docker

cat deployment_metadata.json
export DAGSTER_PROD_ENV_PATH="$(jq -er '.deployment_path' deployment_metadata.json)"

pixi run start-prod-docker

Go to http://localhost:3000 and you should see the Dagster webserver running.
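
If jq is not available, the same value can be extracted with a few lines of Python; this sketch assumes only the deployment_path key mentioned in the comments above:

import json
import pathlib

metadata = json.loads(pathlib.Path("deployment_metadata.json").read_text())
# Print the path so a wrapper can capture it, e.g. (hypothetical script name):
# export DAGSTER_PROD_ENV_PATH="$(python read_metadata.py)"
print(metadata["deployment_path"])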

real HPC supercomputer execution

  • Targets clusters such as VSC-5 (Austrian Scientific Computing, ASC) and Leonardo (CINECA).
  • Assets run against the real scheduler, so ensure the account has queue access and quotas.

Create a .env file with the edge-node credentials and select the site profile:

# example for VSC-5
SLURM_EDGE_NODE_HOST=vsc5.vsc.ac.at
SLURM_EDGE_NODE_PORT=22
SLURM_EDGE_NODE_USER=<<your_user>>
SLURM_EDGE_NODE_PASSWORD=<<your_password>>
SLURM_EDGE_NODE_JUMP_HOST=vmos.vsc.ac.at
SLURM_EDGE_NODE_JUMP_USER=<<your_user>>
SLURM_EDGE_NODE_JUMP_PASSWORD=<<your_password>>
SLURM_DEPLOYMENT_BASE_PATH=/home/<<your_user>>/pipelines/deployments
SLURM_PARTITION=zen3_0512
SLURM_QOS=zen3_0512_devel
SLURM_RESERVATION=dagster-slurm_21
SLURM_SUPERCOMPUTER_SITE=vsc5
DAGSTER_DEPLOYMENT=staging_supercomputer

If your account relies on passwords (or passwords plus OTP), provide them for both the jump host and the final login node. The automation answers the standard prompts; any time-based OTP still has to be supplied interactively once per validity window. When an extra prompt appears, Dagster writes "Enter ... for <host>:" to your terminal (via /dev/tty). Enter the code there to continue.

TTY allocation is handled automatically for password-based sessions, so you do not need to set SLURM_EDGE_NODE_FORCE_TTY unless your centre requires it explicitly.
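
Conceptually, the prompt bypasses Dagster's captured stdin/stdout by talking to the controlling terminal directly; a rough sketch of the mechanism (the prompt text is illustrative, not the exact implementation):

# Read an OTP from the controlling terminal even when stdin/stdout
# are captured by the orchestrator (POSIX only).
with open("/dev/tty", "r+") as tty:
    tty.write("Enter OTP for vsc5.vsc.ac.at: ")
    tty.flush()
    code = tty.readline().strip()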

With the variables in place, validate connectivity and job submission using the staging supercomputer profile:

pixi run start-staging-supercomputer

Staging mode packages dependencies on demand. Expect the first asset run to upload a new environment bundle before dispatching the Slurm job.

For production you should pre-build and upload the execution environment via your CI/CD pipeline (see examples/scripts/deploy_environment.py). Capture the output path and expose it to Dagster as CI_DEPLOYED_ENVIRONMENT_PATH:

python scripts/deploy_environment.py --platform linux-64  # run from CI
# -> produces deployment_metadata.json with "deployment_path"

export CI_DEPLOYED_ENVIRONMENT_PATH=/home/submitter/pipelines/deployments/prod-env-20251018
export DAGSTER_DEPLOYMENT=production_supercomputer
pixi run start-production-supercomputer

If CI_DEPLOYED_ENVIRONMENT_PATH is missing, the production profile will refuse to start to prevent accidental live builds on the cluster.
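
Conceptually, the guard amounts to something like this (a sketch of the behavior, not the library's actual code):

import os

env_path = os.environ.get("CI_DEPLOYED_ENVIRONMENT_PATH")
if not env_path:
    # Fail fast rather than building an environment on the cluster.
    raise RuntimeError(
        "CI_DEPLOYED_ENVIRONMENT_PATH is not set; refusing to start "
        "the production profile."
    )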

To confirm a submission landed on the expected queue, run:

ssh -J <<your_user>>@vmos.vsc.ac.at <<your_user>>@vsc5.vsc.ac.at \
  "squeue -j <jobid> -o '%i %P %q %v %T'"

The Partition, QOS, and Reservation columns should match your .env.

Ray launcher overrides

If your cluster needs OS-level tweaks before Ray starts (for example, higher file-descriptor limits), configure pre_start_commands on the Ray launcher. To pass extra arguments to ray start (for example, disabling the dashboard), use ray_start_args:

"launchers": {
    "ray": {
        "pre_start_commands": [
            "ulimit -n 65536",
        ],
    },
}

You can attach these overrides inside a site profile (see SUPERCOMPUTER_SITE_OVERRIDES in the example resources).
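
How such an override folds into a profile can be pictured as a plain nested-dict merge; the names below (base_profile, merge) are illustrative, not the library's API:

base_profile = {
    "launchers": {"ray": {"ray_start_args": []}},
}
site_overrides = {
    "launchers": {"ray": {"pre_start_commands": ["ulimit -n 65536"]}},
}

def merge(base: dict, override: dict) -> dict:
    # Recursively overlay the override onto the base profile.
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

profile = merge(base_profile, site_overrides)
# profile["launchers"]["ray"] now carries both keys.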

contributing

See the contributing guide for how to get started. Help with building and maintaining this project is welcome.
