
executorlib


Up-scale Python functions for high performance computing (HPC) with executorlib.

Key Features

  • Up-scale your Python functions beyond a single computer. - executorlib extends the Executor interface from the Python standard library and combines it with job schedulers for high performance computing (HPC), including the Simple Linux Utility for Resource Management (SLURM) and flux. This combination allows users to distribute their Python functions over multiple compute nodes.
  • Parallelize your Python program one function at a time - executorlib allows users to assign dedicated computing resources like CPU cores, threads or GPUs to one Python function call at a time. So you can accelerate your Python code function by function.
  • Permanent caching of intermediate results to accelerate rapid prototyping - To accelerate the development of machine learning pipelines and simulation workflows, executorlib provides optional caching of intermediate results for iterative development in interactive environments like Jupyter notebooks.

Examples

The Python standard library provides the Executor interface with the ProcessPoolExecutor and the ThreadPoolExecutor for parallel execution of Python functions on a single computer. executorlib extends this functionality to distribute Python functions over multiple computers within a high performance computing (HPC) cluster. This can be achieved either by submitting each function as an individual job to the HPC job scheduler with an HPC Cluster Executor, or by requesting a job from the HPC cluster and then distributing the Python functions within this job with an HPC Job Executor. Finally, to accelerate the development process executorlib also provides a Single Node Executor to use the executorlib functionality on a laptop, workstation or single compute node for testing. Starting with the Single Node Executor:

from executorlib import SingleNodeExecutor


with SingleNodeExecutor() as exe:
    future_lst = [exe.submit(sum, [i, i]) for i in range(1, 5)]
    print([f.result() for f in future_lst])
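Because executorlib builds on the standard Executor interface, the example above follows the same pattern as the standard library itself. For comparison, here is the equivalent code with the stdlib ThreadPoolExecutor (a single-machine illustration of the shared interface, not an executorlib API):

```python
from concurrent.futures import ThreadPoolExecutor

# The standard library pattern that executorlib extends: submit()
# returns concurrent.futures.Future objects, and result() blocks
# until the corresponding function call has finished.
with ThreadPoolExecutor() as exe:
    future_lst = [exe.submit(sum, [i, i]) for i in range(1, 5)]
    print([f.result() for f in future_lst])  # prints [2, 4, 6, 8]
```

Swapping ThreadPoolExecutor for an executorlib Executor leaves the submit/result structure of the program unchanged.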

In the same way executorlib can also execute Python functions which use additional computing resources, like multiple CPU cores, CPU threads or GPUs. For example, if the Python function internally uses the Message Passing Interface (MPI) via the mpi4py Python library:

from executorlib import SingleNodeExecutor


def calc(i):
    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    return i, size, rank


with SingleNodeExecutor() as exe:
    fs = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(fs.result())

The additional resource_dict parameter defines the computing resources allocated to the execution of the submitted Python function. In addition to the number of compute cores (cores), the resource dictionary can also define the threads per core (threads_per_core), the GPUs per core (gpus_per_core), the working directory (cwd), the option to use the OpenMPI oversubscribe feature (openmpi_oversubscribe) and, for the Simple Linux Utility for Resource Management (SLURM) queuing system, additional command line arguments (slurm_cmd_args). This flexibility to assign computing resources on a per-function-call basis simplifies the up-scaling of Python programs. Only the parts of the Python program which benefit from parallel execution are implemented as MPI parallel Python functions, while the rest of the program remains serial.
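Collecting the parameters listed above, a fully populated resource dictionary might look like the following (the key names are those described above; all values are illustrative placeholders, not recommendations):

```python
# Illustrative resource dictionary using the keys described above.
# The values here are placeholders for demonstration only.
resource_dict = {
    "cores": 2,                      # compute cores (MPI ranks) per call
    "threads_per_core": 1,           # CPU threads per core
    "gpus_per_core": 0,              # GPUs per core
    "cwd": "/tmp/my_calculation",    # working directory for the call
    "openmpi_oversubscribe": False,  # OpenMPI oversubscribe feature
    "slurm_cmd_args": [],            # extra SLURM command line arguments
}
print(resource_dict["cores"])  # prints 2
```

The dictionary is passed per call via submit(), so different function calls in the same program can request different resources.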

The same function can be submitted to the SLURM job scheduler by replacing the SingleNodeExecutor with the SlurmClusterExecutor. The rest of the example remains the same, which highlights how executorlib accelerates the rapid prototyping and up-scaling of HPC Python programs.

from executorlib import SlurmClusterExecutor


def calc(i):
    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    return i, size, rank


with SlurmClusterExecutor() as exe:
    fs = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(fs.result())

In this case the Python simple queuing system adapter (pysqa) is used to submit the calc() function to the SLURM job scheduler and request an allocation with two CPU cores for the execution of the function (HPC Cluster Executor). In the background the sbatch command is used to request the allocation in which the Python function is executed.
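Conceptually, the submission corresponds to a batch script along the following lines. This is a hedged sketch only: the script pysqa actually generates differs in its details, and the payload line is a placeholder, not executorlib's real internal command.

```shell
#!/bin/bash
#SBATCH --ntasks=2          # two CPU cores, matching resource_dict={"cores": 2}
#SBATCH --job-name=calc     # illustrative job name

# Placeholder payload: the real script executes the serialized
# Python function submitted via exe.submit(calc, 3, ...).
srun --ntasks=2 python run_calc.py
```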

Within a given SLURM job executorlib can also be used to assign a subset of the available computing resources to execute a given Python function. In terms of the SLURM commands, this functionality internally uses the srun command to receive a subset of the resources of a given queuing system allocation.

from executorlib import SlurmJobExecutor


def calc(i):
    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    return i, size, rank


with SlurmJobExecutor() as exe:
    fs = exe.submit(calc, 3, resource_dict={"cores": 2})
    print(fs.result())
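In SLURM terms, the step described above corresponds roughly to the following command, run inside the existing allocation. This is a conceptual illustration, not executorlib's exact internal command line, and worker.py is a placeholder name.

```shell
# Inside an existing SLURM allocation: srun claims a subset of the
# allocated resources (here two tasks) for a single function call.
srun --ntasks=2 --exact python worker.py
```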

In addition to SLURM, executorlib also provides support for the hierarchical flux job scheduler. The flux job scheduler is developed at Lawrence Livermore National Laboratory to address the needs of the upcoming generation of exascale computers. Even on traditional HPC clusters, the hierarchical approach of flux is beneficial for distributing hundreds of tasks within a given allocation. So even when SLURM is used as the primary job scheduler of your HPC, it is recommended to use flux as a hierarchical job scheduler within the SLURM allocations.

Documentation
