Parallel Processing Coordinator: splits dataset processing into parallel cluster jobs and aggregates their outputs.
Project description
ParProcCo
Requires a YAML configuration file in the grandparent directory of the package, in CONDA_PREFIX/etc, or in /etc:
```yaml
--- !PPCConfig
allowed_programs:
  rs_map: msmapper_utils
  blah1: whatever_package1
  blah2: whatever_package2
url: https://slurm.local:8443
extra_property_envs: # optional dictionary for Slurm job properties and environment variables
  account: MY_ACCOUNT # env var that holds account
```
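As an illustration only, a file in this format could be read with PyYAML by registering a constructor for the `!PPCConfig` tag. The `PPCConfig` dataclass, its field names, and the file path below are assumptions that mirror the example above, not ParProcCo's actual loader.

```python
# Hedged sketch: load a !PPCConfig-tagged YAML file (not ParProcCo's own code)
from dataclasses import dataclass, field

import yaml


@dataclass
class PPCConfig:  # assumed shape, mirroring the example configuration above
    allowed_programs: dict[str, str] = field(default_factory=dict)
    url: str = ""
    extra_property_envs: dict[str, str] | None = None


def _construct_ppc_config(loader: yaml.SafeLoader, node: yaml.MappingNode) -> PPCConfig:
    # Build the dataclass from the mapping node tagged !PPCConfig
    return PPCConfig(**loader.construct_mapping(node, deep=True))


yaml.SafeLoader.add_constructor("!PPCConfig", _construct_ppc_config)

# Path is a placeholder; the real file lives in one of the locations listed above
with open("path/to/ppc_config.yaml") as f:
    config = yaml.load(f, Loader=yaml.SafeLoader)

print(config.url)               # https://slurm.local:8443
print(config.allowed_programs)  # {'rs_map': 'msmapper_utils', ...}
```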
An entry point called ParProcCo.allowed_programs can be added to other packages' setup.py:

```python
setup(
    ...
    entry_points={PPC_ENTRY_POINT: ['blah1 = whatever_package1']},
)
```

which will look for a module called blah1_wrapper in the whatever_package1 package.
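At run time, an entry point registered in that group can be resolved to its wrapper module with importlib.metadata. The function below is a hedged sketch of such a lookup, assuming PPC_ENTRY_POINT is the group name ParProcCo.allowed_programs; it is not ParProcCo's own discovery code.

```python
# Hedged sketch of resolving a program name via the entry point group (Python 3.10+)
from importlib import import_module, metadata

PPC_ENTRY_POINT = "ParProcCo.allowed_programs"  # assumed group name, as above


def find_wrapper_module(program: str):
    # Each entry point maps a program name to its providing package,
    # e.g. 'blah1 = whatever_package1'
    for ep in metadata.entry_points(group=PPC_ENTRY_POINT):
        if ep.name == program:
            # Import '<program>_wrapper' from the named package,
            # e.g. whatever_package1.blah1_wrapper
            return import_module(f"{ep.value}.{program}_wrapper")
    raise ValueError(f"no entry point registered for {program!r}")
```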
Testing

Tests can be run with

```
$ pytest tests
```

To test interactions with Slurm, set the following environment variables:

```
SLURM_REST_URL   # URL for server and port where the REST endpoints are hosted
SLURM_PARTITION  # Slurm cluster partition
SLURM_JWT        # JSON web token for access to REST endpoints
```
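One way to make use of these variables in a test is a fixture that skips when they are absent; the sketch below is an assumption about how such a guard could look and is not part of ParProcCo's test suite.

```python
# Hedged sketch: skip Slurm-dependent tests unless the environment is configured
import os

import pytest


@pytest.fixture
def slurm_settings() -> dict[str, str]:
    required = ("SLURM_REST_URL", "SLURM_PARTITION", "SLURM_JWT")
    missing = [name for name in required if name not in os.environ]
    if missing:
        pytest.skip(f"Slurm environment variables not set: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```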
Download files
Source Distribution

ParProcCo-2.0.0.tar.gz (36.8 kB)

Built Distribution

ParProcCo-2.0.0-py3-none-any.whl (46.0 kB)
Hashes for ParProcCo-2.0.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | eb9027e2e843dc4e6a62ed30454ffbca275d85051dbb14771c94ec19b8b73daf
MD5 | aa85ca74e06d013a6c8aab9b8cace366
BLAKE2b-256 | c524bbb27ca62e437e5f06fa0c884ece5c0b5eb5f17e0da4c982b35620057db0