Zappy - distributed processing with NumPy and Zarr
Zappy is for distributed processing of chunked NumPy arrays on engines like Pywren, Apache Spark, and Apache Beam.
The zappy.base module defines a ZappyArray class that exposes the same interface as numpy.ndarray but is backed by distributed storage and processing. The array is split into chunks, typically loaded from Zarr, and each chunk is processed independently.
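The chunked model can be illustrated with plain NumPy. This is a conceptual sketch only: `process_in_chunks` is an illustrative helper, not part of zappy's API, which handles chunking transparently behind the ndarray interface.

```python
import numpy as np

def process_in_chunks(arr, chunk_size, fn):
    """Apply fn to each row-chunk of arr independently, then reassemble.

    Mimics, in memory, what a distributed engine does with the chunks
    of a ZappyArray (illustrative helper, not zappy API).
    """
    chunks = [arr[i:i + chunk_size] for i in range(0, arr.shape[0], chunk_size)]
    processed = [fn(c) for c in chunks]  # each chunk is independent work
    return np.concatenate(processed)

x = np.arange(12.0).reshape(6, 2)
y = process_in_chunks(x, chunk_size=2, fn=lambda c: c * 10)
```

A distributed engine runs the per-chunk step on remote workers instead of in a local list comprehension, but the independence of the chunks is the same.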
There are a few engines provided:
- direct - for eager in-memory processing
- spark - for processing using Spark
- beam - for processing using Beam or Google Dataflow
- executor - for processing using Python's concurrent.futures.Executor, of which Pywren is a notable implementation
Beam currently only runs on Python 2.
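The executor engine's idea can be sketched with the standard library alone (a sketch, not zappy's actual implementation): map a function over chunks in parallel via a concurrent.futures.Executor, then combine the per-chunk results. Pywren supplies a compatible Executor that runs each call in the cloud.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Four independent chunks of a larger array.
chunks = np.split(np.arange(16.0), 4)

# Map a per-chunk function over the chunks in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda c: c.sum(), chunks))

# Combine the per-chunk partial results.
total = sum(results)
```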
Install with pip, or from conda-forge:

```
pip install zappy
conda install -c conda-forge zappy
```
Take a look at the rendered demo Jupyter notebook, or try it out yourself as follows.
Create and activate a Python 3 virtualenv, and install the requirements:
```
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
pip install -e .
pip install s3fs jupyter
```
Then run the notebook with:
```
jupyter notebook demo.ipynb
```
There is a test suite for all the engines, covering both Python 2 and 3.
Run everything in one go with tox:
```
pip install tox
tox
```
Format the code with black:

```
pip install black
black zappy tests/* *.py
```
Generate a coverage report with pytest-cov:

```
pip install pytest-cov
pytest --cov-report html --cov=zappy
open htmlcov/index.html
```
Publish a release to PyPI with twine:

```
pip install twine
python setup.py sdist
twine upload -r pypi dist/zappy-0.1.0.tar.gz
```
If successful, the package will be available on PyPI.