
Package for efficiently parallelising zarr write operations based on awareness of source chunks


Zarr Parallel Cacher


This package has been developed as part of the NERC EDS FRAME-FM AI project and has been separated into its own module for ease of reuse across multiple projects. AI-specific steps may form part of the package, but these may be disabled by default.

See the main documentation page for more details.

Basic Usage

from zarr_parallel import ZarrParallelAssembler

zp = ZarrParallelAssembler(
    data_uri=uri,
    preprocessors=preprocessors,
    chunks=chunks,
    engine='kerchunk',
    variables={'d2m': {}},
    cache_label='_v1')

zp.cache(
    cache_dir='/gws/ssde/j25b/eds_ai/frame-fm/data/zarr_cache',
    deploy_mode='dask_distributed',
    simultaneous_worker_limit=4)

The above code snippet demonstrates basic use of this package. The data_uri and engine parameters are passed through to xarray's open_dataset method to access the source object. chunks is required: it specifies the output chunking of the zarr cache and is also used to organise the parallel jobs. variables is optional, and allows transforms (such as renaming) to be applied individually to specific data arrays.
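As an illustration of the chunks and variables arguments (the dimension names, chunk sizes and the 'rename' transform key below are assumptions, not documented behaviour):

```python
# Output chunking for the zarr cache; this also drives how the
# parallel jobs are organised.
chunks = {'time': 744, 'latitude': 90, 'longitude': 180}

# Per-variable options: transforms (such as renaming) applied
# individually to each named data array. The 'rename' key is
# purely illustrative of the kind of transform described above.
variables = {'d2m': {'rename': 'dewpoint_2m'}}
```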

The preprocessors list defines the set of preprocessing transforms (including selection) applied to the dataset at the point of caching. It should include every transform that must be applied before the dataset is written to the zarr cache.

The num_jobs and simultaneous_worker_limit parameters configure the parallel deployment. If num_jobs is not provided, the assembler calculates the optimal number of jobs for your memory limit (recommended). The default memory limit is 2GB and the default timeout is 30 minutes, although the timeout currently applies only to SLURM deployments.
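As a rough sketch of how a job count might be derived from a memory limit (an illustration only, not the assembler's actual algorithm; the function name and arguments are invented):

```python
def estimate_num_jobs(total_bytes: int, chunk_bytes: int,
                      memory_limit_bytes: int = 2 * 2**30) -> int:
    """Illustrative only: split the dataset into the fewest jobs whose
    working set stays under the per-worker memory limit (default 2GB)."""
    # How many output chunks one worker can hold at once.
    chunks_per_job = max(1, memory_limit_bytes // chunk_bytes)
    # Ceiling division: total chunks to write, then jobs to cover them.
    total_chunks = -(-total_bytes // chunk_bytes)
    return -(-total_chunks // chunks_per_job)
```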

Transforms/Preprocessors

Transformations to the data may be specified via the preprocessors option passed in the example above. Xarray-native transformations are supported, as well as transforms from the FRAME-FM package if it is installed.

Selection Recommendations

The assembler will halt to recommend alternative data selections based on the underlying chunk structure. Proceeding against these recommendations is not advised: mismatched chunk-region borders can duplicate chunk requests and significantly increase the memory requirement per worker.
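The idea behind these recommendations can be sketched along a single dimension: a selection whose bounds fall on source-chunk boundaries never forces two workers to fetch the same chunk. (Hypothetical helpers, not the assembler's API:)

```python
def is_chunk_aligned(start: int, stop: int, chunk_size: int) -> bool:
    """True if the half-open selection [start, stop) sits on chunk borders."""
    return start % chunk_size == 0 and stop % chunk_size == 0

def recommend_selection(start: int, stop: int, chunk_size: int) -> tuple:
    """Snap a selection outward to the nearest chunk boundaries."""
    lo = (start // chunk_size) * chunk_size
    hi = -(-stop // chunk_size) * chunk_size  # ceiling division
    return lo, hi
```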

Version 0.3 Changes

  • Heartbeats between jobs in the dask workers.
  • Dask distributed info messages can now be switched off.
  • Added the ability to set attributes.

Version 0.4 Changes

  • Job parallelisation now distributed to workers for efficiency
    • Small parallel writes were found to be inefficient, so the writes are parallelised to the largest possible selection while adhering to memory/timeout limits.
  • Tiling parallelisation now available. Caveats:
    • Tiling necessitates rechunking to a single chunk per tile. Tile sizes may therefore need to be smaller than expected to fit within each worker's memory limit - particularly where the source chunking scheme inflates the volume of data initially retrieved. An error is raised if the estimated memory requirement per tile exceeds the worker's memory limit.
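The memory check described above could look roughly like this (an illustration of the described behaviour, not the package's code; the inflation factor stands in for the extra data read because of the source chunking scheme):

```python
def check_tile_memory(tile_shape, itemsize, inflation, memory_limit_bytes):
    """Raise if the estimated per-tile memory need exceeds the worker limit."""
    tile_bytes = itemsize
    for n in tile_shape:
        tile_bytes *= n
    estimated = int(tile_bytes * inflation)
    if estimated > memory_limit_bytes:
        raise MemoryError(
            f"tile needs ~{estimated} bytes, worker limit is {memory_limit_bytes}")
    return estimated
```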

Version 0.5 Changes

  • Fixed bugs with chunk identification for both tiled and non-tiled datasets.
  • Attributes now set for parallel and series writes to zarr.
  • Logging now enabled in the assembler directly - pass the log_level argument as an int from 0 to 2 for warnings/info/debugging.
  • Documentation added using Mkdocs!
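The log_level integers presumably map onto standard Python logging levels; a sketch of that assumed mapping (the logger name 'zarr_parallel' is a guess):

```python
import logging

# Assumed mapping for log_level: 0 -> warnings, 1 -> info, 2 -> debugging.
LOG_LEVELS = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}

def configure_logging(log_level: int) -> int:
    """Set the package logger's verbosity from a 0-2 integer."""
    level = LOG_LEVELS.get(log_level, logging.WARNING)
    logging.getLogger('zarr_parallel').setLevel(level)
    return level
```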
