
PyMPDATA + numba-mpi coupler sandbox

Project description

PyMPDATA-MPI


PyMPDATA-MPI constitutes a PyMPDATA + numba-mpi coupler enabling numerical solutions of transport equations with the MPDATA numerical scheme in a hybrid parallelisation model combining multi-threading with MPI distributed-memory communication. PyMPDATA-MPI adapts to the API of PyMPDATA, adding domain decomposition logic.

Hello world examples

In a minimal setup, PyMPDATA-MPI can be used to solve the following transport equation: $$\partial_t (G \psi) + \nabla \cdot (Gu \psi)= 0$$ in an environment with multiple nodes. Every node (process) is responsible for computing its part of the decomposed domain.
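To make the equation concrete, here is a pure-NumPy sketch (not using the PyMPDATA-MPI API) of a 1D instance with $G=1$, constant $u$ and periodic boundaries, discretised with the first-order upwind scheme that MPDATA builds upon; the grid size, Courant number and initial rectangular signal are arbitrary illustrative choices:

```python
import numpy as np

# 1D case of d(psi)/dt + d(u * psi)/dx = 0 with G = 1 and constant u > 0,
# periodic boundaries, first-order upwind discretisation
nx, courant, nt = 100, 0.5, 50  # illustrative values
psi = np.zeros(nx)
psi[40:60] = 1.0  # rectangular initial signal

for _ in range(nt):
    flux = courant * psi  # upwind flux for u > 0: F_{i+1/2} = C * psi_i
    psi -= flux - np.roll(flux, 1)  # divergence: F_{i+1/2} - F_{i-1/2}

# the scheme is conservative and, for 0 <= C <= 1, monotone
assert np.isclose(psi.sum(), 20.0)
```

After `nt` steps the signal has travelled roughly `nt * courant` cells but is visibly smeared by the numerical diffusion of upwind — this is what the corrective MPDATA iterations (not shown here) compensate for.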

Spherical scenario (2D)

In spherical geometry, the $G$ factor represents the Jacobian of the coordinate transformation. In this example (based on a test case from Williamson & Rasch 1989), domain decomposition is done by cutting the sphere along meridians. The inner dimension uses the MPIPolar boundary condition class, while the outer dimension uses MPIPeriodic. Note that the spherical animations below depict simulations without MPDATA corrective iterations, i.e., only the plain first-order upwind scheme is used (FIX ME).

1 worker (n_threads = 1)

2 workers (MPI_DIM = 0, n_threads = 1)

Cartesian scenario (2D)

In the Cartesian example below (based on a test case from Arabas et al. 2014), a constant advector field $u$ is used (and $G=1$). MPI (Message Passing Interface) is used for handling data transfers and synchronisation, with the domain decomposition across MPI workers done in either the inner or the outer dimension (a user setting). Multi-threading (using, e.g., OpenMP via Numba) is used for shared-memory parallelisation within subdomains (indicated by dotted lines in the animations below), with the threading subdomain split done across the inner dimension (internal PyMPDATA logic). In this example, two corrective MPDATA iterations are employed.
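The two-level split described above can be sketched in plain Python (schematic only — the function name and decomposition details are illustrative, not the actual PyMPDATA/PyMPDATA-MPI internals): MPI workers get contiguous slabs of the outer dimension, and each worker further splits the inner dimension among its threads:

```python
def hybrid_tiles(grid, n_workers, n_threads):
    """Schematic hybrid split of a 2D grid: outer dimension among MPI
    workers, inner dimension among threads. Returns a dict mapping
    (worker, thread) to per-dimension (start, stop) index ranges."""
    def split(extent, parts):
        # contiguous slabs, remainder cells going to the lowest indices
        base, rem = divmod(extent, parts)
        edges = [0]
        for p in range(parts):
            edges.append(edges[-1] + base + (1 if p < rem else 0))
        return list(zip(edges[:-1], edges[1:]))

    return {
        (w, t): (w_span, t_span)
        for w, w_span in enumerate(split(grid[0], n_workers))
        for t, t_span in enumerate(split(grid[1], n_threads))
    }

tiles = hybrid_tiles((96, 96), n_workers=2, n_threads=3)
print(tiles[(1, 2)])  # → ((48, 96), (64, 96))
```

Each (worker, thread) pair then advances only its tile, with halo data along tile edges exchanged via MPI (between workers) or shared memory (between threads).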

1 worker (n_threads=3)

2 workers (MPI_DIM = 0, n_threads = 3)

2 workers (MPI_DIM = -1, n_threads = 3)

3 workers (MPI_DIM = 0, n_threads = 3)

3 workers (MPI_DIM = -1, n_threads = 3)

Package architecture

    flowchart BT

    H5PY ---> HDF{{HDF5}}
    subgraph pythonic-dependencies [Python]
      TESTS --> H[pytest-mpi]
      subgraph PyMPDATA-MPI ["PyMPDATA-MPI"]
        TESTS["PyMPDATA-MPI[tests]"] --> CASES(simulation scenarios)
        A1["PyMPDATA-MPI[examples]"] --> CASES
        CASES --> D[PyMPDATA-MPI]
      end
      A1 ---> C[py-modelrunner]
      CASES ---> H5PY[h5py]
      D --> E[numba-mpi]
      H --> X[pytest]
      E --> N
      F --> N[Numba]
      D --> F[PyMPDATA]
    end
    H ---> MPI
    C ---> slurm{{slurm}}
    N --> OMPI{{OpenMP}}
    N --> L{{LLVM}}
    E ---> MPI{{MPI}}
    HDF --> MPI
    slurm --> MPI

    style D fill:#7ae7ff,stroke-width:2px,color:#2B2B2B

    click H "https://pypi.org/p/pytest-mpi"
    click X "https://pypi.org/p/pytest"
    click F "https://pypi.org/p/PyMPDATA"
    click N "https://pypi.org/p/numba"
    click C "https://pypi.org/p/py-modelrunner"
    click H5PY "https://pypi.org/p/h5py"
    click E "https://pypi.org/p/numba-mpi"
    click A1 "https://pypi.org/p/PyMPDATA-MPI"
    click D "https://pypi.org/p/PyMPDATA-MPI"
    click TESTS "https://pypi.org/p/PyMPDATA-MPI"

Rectangular boxes indicate pip-installable Python packages (click to go to pypi.org package site).

Credits:

Development of PyMPDATA-MPI has been supported by Poland's National Science Centre (grant no. 2020/39/D/ST10/01220).

We acknowledge Poland's high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016369.

copyright: Jagiellonian University & AGH University of Krakow
licence: GPL v3

Design goals

  • MPI support for PyMPDATA implemented externally (i.e., not incurring any overhead or additional dependencies for PyMPDATA users)
  • MPI calls within Numba njitted code (hence not using mpi4py, but rather numba-mpi)
  • hybrid domain-decomposition parallelism: threading (internal in PyMPDATA, in the inner dimension) + MPI (either inner or outer dimension)
  • example simulation scenarios featuring HDF5/MPI-IO output storage (using h5py)
  • py-modelrunner simulation orchestration
  • portability across Linux & macOS (no Windows support as of now due to challenges in getting HDF5/MPI-IO to work there)
  • Continuous Integration (CI) with different OSes and different MPI implementations (leveraging mpi4py's setup-mpi GitHub Action)
  • full test coverage including CI builds asserting identical results from multi-node and single-node computations (with the help of pytest-mpi)
  • ships as a pip-installable package, aimed to be a dependency of domain-specific packages

Related resources

open-source Large-Eddy-Simulation and related software:

  • Julia
  • C++
  • C/CUDA
  • FORTRAN
  • Python (incl. Cython)

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

PyMPDATA-MPI-0.0.9.tar.gz (37.3 kB view details)

Uploaded Source

Built Distribution

PyMPDATA_MPI-0.0.9-py3-none-any.whl (17.3 kB view details)

Uploaded Python 3

File details

Details for the file PyMPDATA-MPI-0.0.9.tar.gz.

File metadata

  • Download URL: PyMPDATA-MPI-0.0.9.tar.gz
  • Upload date:
  • Size: 37.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.2

File hashes

Hashes for PyMPDATA-MPI-0.0.9.tar.gz:

  • SHA256: 11257692960dcb2c78e975065759bbfbc492aaa1b82993e63feae394495d9166
  • MD5: f616398728c376398b22bfe1f7bd8b3c
  • BLAKE2b-256: fe35b9bf7af01c48aa4dd8c811b702d6a8d4a0210ed53ac0ab991fa94b985bfb

See more details on using hashes here.
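As an illustration, the SHA256 digest above can be used for hash-pinned installation via pip's requirements-file syntax, so pip refuses to install an archive whose digest does not match:

```text
# requirements.txt (hash-pinned entry; install with: pip install -r requirements.txt)
PyMPDATA-MPI==0.0.9 \
    --hash=sha256:11257692960dcb2c78e975065759bbfbc492aaa1b82993e63feae394495d9166
```

Note that when pip encounters any `--hash` option, it requires hashes for all requirements in the file.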

File details

Details for the file PyMPDATA_MPI-0.0.9-py3-none-any.whl.

File metadata

File hashes

Hashes for PyMPDATA_MPI-0.0.9-py3-none-any.whl:

  • SHA256: 2dc43e55936e9224c03b4a27994ed93124c8361b0fdeab1a8b24daee14edfc68
  • MD5: 76d2f3fec8148d55d6b2614c23517945
  • BLAKE2b-256: 4882b000e23588158cdb4c54a858080b97887d490242d7b36b37b043ad1195af

See more details on using hashes here.
