Futures for remote execution on clusters
This module provides a Python concurrent.futures executor that lets you run functions on remote systems in your HTCondor or Slurm cluster. Stop worrying about writing job files, scattering and gathering data, and serialization: this module handles it all for you.
It uses the cloudpickle library to allow (most) closures to be used transparently, so you’re not limited to “pure” functions.
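To see why cloudpickle matters here, note that the standard library's pickle serializes functions by reference (module and qualified name), so a closure created at runtime cannot be pickled at all. The sketch below demonstrates that limitation with plain pickle; cloudpickle serializes the function body and its closed-over variables by value instead, which is what lets cfut ship such functions to worker nodes.

```python
import pickle

def make_adder(n):
    def add(x):
        return x + n  # closes over n
    return add

add_five = make_adder(5)

# Plain pickle cannot serialize this nested function: it tries to look
# it up by name, and 'make_adder.<locals>.add' is not importable.
try:
    pickle.dumps(add_five)
    print("pickled OK")
except Exception as exc:
    print("plain pickle failed:", type(exc).__name__)

# cloudpickle.dumps(add_five), by contrast, would succeed, because it
# serializes the code object and the captured value of n by value.
```

This is the sense in which clusterfutures is not limited to "pure", importable, module-level functions.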
```shell
pip install clusterfutures
```
```python
import cfut

def square(n):
    return n * n

with cfut.SlurmExecutor() as executor:
    for result in executor.map(square, [5, 7, 11]):
        print(result)
```
See slurm_example.py and condor_example.py for further examples. The easiest way to get started is to ignore the fact that futures are being used at all and just use the provided map function, which behaves like the built-in map but transparently distributes your work across the cluster.
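When you do want the futures themselves, the standard concurrent.futures patterns (submit, as_completed, result) apply, since cfut's executors follow the same Executor interface. The sketch below uses a local ThreadPoolExecutor as a stand-in so it runs without a scheduler; on a cluster you would substitute cfut.SlurmExecutor() or cfut.CondorExecutor().

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    return n * n

# Stand-in for cfut.SlurmExecutor()/cfut.CondorExecutor(), which expose
# the same concurrent.futures.Executor interface (submit, map, shutdown).
with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(square, n) for n in (5, 7, 11)]
    # as_completed yields futures as they finish, in arbitrary order.
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [25, 49, 121]
```

Using submit rather than map is useful when jobs have different runtimes and you want to process each result as soon as it arrives.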
Goals & design
clusterfutures is a simple wrapper to run Python functions in batch jobs on an HPC cluster. Each future corresponds to one batch job. The functions that you run through clusterfutures should normally run for at least a few seconds each: running smaller functions will be inefficient because of the overhead of launching jobs and moving data.
Functions, parameters and return values are sent by creating files; this assumes that the control process and the worker nodes have a shared filesystem. This mechanism is convenient for relatively small amounts of data; it’s probably not the best way to transfer large amounts of data to & from workers.
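The file-based hand-off described above can be illustrated in miniature. The sketch below is a hypothetical illustration of the mechanism, not cfut's actual on-disk format or function names: the control process pickles the call into a file on a (shared) directory, the worker loads and runs it, and the result comes back as another file.

```python
import os
import pickle
import tempfile

# Hypothetical helper names for illustration only; cfut's internals differ.
def submit_to_file(workdir, fn, *args):
    """Control-process side: write the pickled call to the shared dir."""
    path = os.path.join(workdir, "job.in")
    with open(path, "wb") as f:
        pickle.dump((fn, args), f)
    return path

def worker_run(in_path):
    """Worker side: load the call, run it, write the pickled result."""
    with open(in_path, "rb") as f:
        fn, args = pickle.load(f)
    out_path = in_path.replace(".in", ".out")
    with open(out_path, "wb") as f:
        pickle.dump(fn(*args), f)
    return out_path

def collect(out_path):
    """Control-process side: read the result back."""
    with open(out_path, "rb") as f:
        return pickle.load(f)

with tempfile.TemporaryDirectory() as d:
    out = worker_run(submit_to_file(d, pow, 2, 10))
    result = collect(out)

print(result)  # 1024
```

The round trip makes the trade-off in the paragraph above concrete: every argument and return value passes through the filesystem, which is simple and robust for small payloads but a poor channel for large arrays or datasets.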