
dagster integration with slurm

Project description

dagster-slurm

Integrates Dagster to orchestrate Slurm jobs on HPC systems, together with compute-scaling frameworks like Ray, for a better developer experience on supercomputers.

Basic example

https://github.com/ascii-supply-networks/dagster-slurm/tree/main/examples

prerequisites

  • install pixi (https://pixi.sh/latest/installation/): curl -fsSL https://pixi.sh/install.sh | sh
  • pixi global install git
  • a container runtime such as Docker or Podman; for now we assume docker compose is available to you. You could also use nerdctl or something similar.

usage

Example

git clone https://github.com/ascii-supply-networks/dagster-slurm.git
cd dagster-slurm/examples
docker compose up -d --build

local execution

Execute without Slurm.

  • Small data
  • Rapid local prototyping

pixi run start

Go to http://localhost:3000 and you should see the Dagster webserver running.

docker local execution

  • Test everything works on SLURM
  • Still small data
  • Mainly used for developing this integration

Ensure you have a .env file with the following content:

SLURM_EDGE_NODE_HOST=localhost
SLURM_EDGE_NODE_PORT=2223
SLURM_EDGE_NODE_USER=submitter
SLURM_EDGE_NODE_PASSWORD=submitter
SLURM_DEPLOYMENT_BASE_PATH=/home/submitter/pipelines/deployments

Then run:

pixi run start-staging

Go to http://localhost:3000 and you should see the Dagster webserver running.
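Before starting Dagster it can help to sanity-check the .env contents. Below is a minimal illustrative parser and completeness check; the variable names match the .env shown above, but the helper itself is hypothetical and not part of the package:

```python
REQUIRED = [
    "SLURM_EDGE_NODE_HOST",
    "SLURM_EDGE_NODE_PORT",
    "SLURM_EDGE_NODE_USER",
    "SLURM_EDGE_NODE_PASSWORD",
    "SLURM_DEPLOYMENT_BASE_PATH",
]

def load_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_required(env: dict) -> list:
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED if not env.get(name)]
```

Running `missing_required` against your parsed .env before `pixi run start-staging` surfaces a forgotten variable immediately instead of failing mid-run.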

prod docker local execution

  • Test everything works on SLURM
  • Still small data
  • Mainly used for developing this integration
  • Unlike the previous target, the job starts faster because the environment is deployed ahead of time

Ensure you have a .env file with the following content:

SLURM_EDGE_NODE_HOST=localhost
SLURM_EDGE_NODE_PORT=2223
SLURM_EDGE_NODE_USER=submitter
SLURM_EDGE_NODE_PASSWORD=submitter
SLURM_DEPLOYMENT_BASE_PATH=/home/submitter/pipelines/deployments

# see the JQ command below for dynamically setting this
# DAGSTER_PROD_ENV_PATH=/home/submitter/pipelines/deployments/<<<your deployment >>>

# we assume your CI/CD pipeline performs the deployment of the environment out of band
# this allows your jobs to start up faster
pixi run deploy-prod-docker

cat deployment_metadata.json
export DAGSTER_PROD_ENV_PATH="$(jq -er '.deployment_path' deployment_metadata.json)"

pixi run start-prod-docker

Go to http://localhost:3000 and you should see the Dagster webserver running.
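If jq is not available on your machine, the same extraction can be done with a few lines of Python. This is an illustrative helper, assuming only that deployment_metadata.json carries the "deployment_path" key produced by the deploy step:

```python
import json
from pathlib import Path

def deployment_path(metadata_file: str) -> str:
    """Read a deployment metadata JSON file and return its deployment_path entry."""
    meta = json.loads(Path(metadata_file).read_text())
    path = meta.get("deployment_path")
    if not path:
        raise KeyError(f"no deployment_path in {metadata_file}")
    return path
```

The returned string is what you would export as DAGSTER_PROD_ENV_PATH before `pixi run start-prod-docker`.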

real HPC supercomputer execution

  • Targets clusters like VSC-5 (Vienna Scientific Cluster) and Leonardo (CINECA).
  • Assets run against the real scheduler, so ensure the account has queue access and quotas.

Create a .env file with the edge-node credentials and select the site profile:

# example for VSC-5
SLURM_EDGE_NODE_HOST=vsc5.vsc.ac.at
SLURM_EDGE_NODE_PORT=22
SLURM_EDGE_NODE_USER=<<your_user>>
SLURM_EDGE_NODE_PASSWORD=<<your_password>>
SLURM_EDGE_NODE_JUMP_HOST=vmos.vsc.ac.at
SLURM_EDGE_NODE_JUMP_USER=<<your_user>>
SLURM_EDGE_NODE_JUMP_PASSWORD=<<your_password>>
SLURM_DEPLOYMENT_BASE_PATH=/home/<<your_user>>/pipelines/deployments
SLURM_PARTITION=zen3_0512
SLURM_QOS=zen3_0512_devel
SLURM_RESERVATION=dagster-slurm_21
SLURM_SUPERCOMPUTER_SITE=vsc5
DAGSTER_DEPLOYMENT=staging_supercomputer

If your account relies on passwords (or passwords plus OTP), provide them for both the jump host and the final login node. The automation answers the standard prompts; any time-based OTP still has to be supplied interactively once per validity window. When an extra prompt appears, Dagster writes "Enter ... for <host>:" to your terminal (via /dev/tty); enter the code there to continue.

TTY allocation is handled automatically for password-based sessions, so you do not need to set SLURM_EDGE_NODE_FORCE_TTY unless your centre requires it explicitly.
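To debug connectivity outside of Dagster, the jump-host hop can be reproduced with a plain `ssh -J` invocation. The small helper below assembles that command from the same SLURM_EDGE_NODE_* variables; it is a hypothetical convenience, not part of the package:

```python
def ssh_jump_command(env: dict) -> list:
    """Build an `ssh -J` argv list from the SLURM_EDGE_NODE_* variables."""
    jump = f"{env['SLURM_EDGE_NODE_JUMP_USER']}@{env['SLURM_EDGE_NODE_JUMP_HOST']}"
    target = f"{env['SLURM_EDGE_NODE_USER']}@{env['SLURM_EDGE_NODE_HOST']}"
    cmd = ["ssh", "-J", jump]
    port = env.get("SLURM_EDGE_NODE_PORT")
    if port and port != "22":
        # -p sets the port of the final destination, not the jump host
        cmd += ["-p", port]
    cmd.append(target)
    return cmd
```

If this command cannot reach the login node interactively, Dagster will not be able to either, which makes it a useful first check when a run hangs on submission.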

With the variables in place, validate connectivity and job submission using the staging supercomputer profile:

pixi run start-staging-supercomputer

Staging mode packages dependencies on demand. Expect the first asset run to upload a new environment bundle before dispatching the Slurm job.

For production you should pre-build and upload the execution environment via your CI/CD pipeline (see examples/scripts/deploy_environment.py). Capture the output path and expose it to Dagster as CI_DEPLOYED_ENVIRONMENT_PATH:

python scripts/deploy_environment.py --platform linux-64  # run from CI
# -> produces deployment_metadata.json with "deployment_path"

export CI_DEPLOYED_ENVIRONMENT_PATH=/home/submitter/pipelines/deployments/prod-env-20251018
export DAGSTER_DEPLOYMENT=production_supercomputer
pixi run start-production-supercomputer

If CI_DEPLOYED_ENVIRONMENT_PATH is missing, the production profile will refuse to start to prevent accidental live builds on the cluster.
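The fail-fast behaviour described above amounts to a guard on the environment variable. The sketch below illustrates the idea; it is not the package's actual startup code:

```python
import os

def require_deployed_environment() -> str:
    """Refuse to start the production profile without a pre-deployed environment path."""
    path = os.environ.get("CI_DEPLOYED_ENVIRONMENT_PATH")
    if not path:
        raise RuntimeError(
            "CI_DEPLOYED_ENVIRONMENT_PATH is not set; "
            "deploy the environment via CI/CD before starting the production profile"
        )
    return path
```

Failing loudly at startup is preferable to a silent fallback, since an accidental live build would consume scarce compute allocation on the cluster.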

To confirm a submission landed on the expected queue, run:

ssh -J <<your_user>>@vmos.vsc.ac.at <<your_user>>@vsc5.vsc.ac.at \
  "squeue -j <jobid> -o '%i %P %q %v %T'"

The partition (%P), QOS (%q), and reservation (%v) columns should match your .env.
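The same check can be scripted by splitting the squeue output line. The helper below is illustrative and assumes the output format puts partition, QOS, and reservation in the three columns after the job id, with the job state last:

```python
def queue_placement_matches(line: str, partition: str, qos: str, reservation: str) -> bool:
    """Compare one squeue output line (job id, partition, QOS, reservation, state)
    against the expected placement values."""
    fields = line.split()
    if len(fields) != 5:
        raise ValueError(f"unexpected squeue line: {line!r}")
    return (fields[1], fields[2], fields[3]) == (partition, qos, reservation)
```

Feeding it the values from your .env turns the manual column comparison into a single boolean, which is convenient in smoke-test scripts.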

contributing

See the docs for how to contribute. Help building and maintaining this project is welcome.

Project details


Download files

Download the file for your platform.

Source Distribution

dagster_slurm-1.6.0.tar.gz (53.9 kB)

Uploaded Source

Built Distribution


dagster_slurm-1.6.0-py3-none-any.whl (67.7 kB)

Uploaded Python 3

File details

Details for the file dagster_slurm-1.6.0.tar.gz.

File metadata

  • Download URL: dagster_slurm-1.6.0.tar.gz
  • Size: 53.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for dagster_slurm-1.6.0.tar.gz
Algorithm Hash digest
SHA256 7ead43968da20cbe0d5ef08e4c04940d4e2fb5753ed7f82c2cf110c06c1aa7ac
MD5 321fbdd80cdc6f676b2c8e365f77631b
BLAKE2b-256 1cc434f5c41b09270bf935121f46f8ca99de0d44c21ef163ab708923fd6750e0


Provenance

The following attestation bundles were made for dagster_slurm-1.6.0.tar.gz:

Publisher: library.yaml on ascii-supply-networks/dagster-slurm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file dagster_slurm-1.6.0-py3-none-any.whl.

File metadata

  • Download URL: dagster_slurm-1.6.0-py3-none-any.whl
  • Size: 67.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for dagster_slurm-1.6.0-py3-none-any.whl
Algorithm Hash digest
SHA256 c71e746686aed48ff5d9e1a0a67ccbe241ebbbd66cc639575d97db0726a240fc
MD5 9fa4af57d4c9ccd70728f9efe60b780d
BLAKE2b-256 5fff1cea6f41bc5a359fd7d05815c9e2169ac0ff9197b75ac6a7cfecbf63f6dd


Provenance

The following attestation bundles were made for dagster_slurm-1.6.0-py3-none-any.whl:

Publisher: library.yaml on ascii-supply-networks/dagster-slurm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
