MPI volume decomposition and particle distribution tools


A Python module for MPI volume decomposition and particle distribution

Features

  • Cartesian partitioning of a cubic volume (arbitrary dimensions) among MPI ranks

  • Equal-area decomposition of the spherical shell (S2) among MPI ranks (see the sketch after this list)

  • Distributing particle data among ranks, sending each particle to the rank owning the corresponding subvolume / surface segment

  • Overloading particle data at rank boundaries (“ghost particles”)
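
For the spherical (S2) decomposition the package provides an analogous interface; the snippet below is only a sketch, assuming an S2Partition class and an s2_distribute function with theta_key / phi_key arguments (these names are assumptions, consult the documentation for the exact API):

from mpipartition import S2Partition, s2_distribute
import numpy as np

# equal-area segmentation of the sphere among the available MPI ranks
s2_partition = S2Partition()

# random points on the unit sphere, in spherical coordinates
npoints_local = 1000
data = {
    "theta": np.arccos(np.random.uniform(-1, 1, npoints_local)),  # polar angle in [0, pi]
    "phi": np.random.uniform(0, 2 * np.pi, npoints_local),  # azimuth in [0, 2*pi)
}

# send each point to the rank owning its surface segment
# (argument names are assumptions, check the documentation)
data = s2_distribute(s2_partition, data, theta_key="theta", phi_key="phi")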

Installation

Installing from the PyPI repository:

pip install mpipartition

Installing the development version from the Git repository:

git clone https://github.com/ArgonneCPAC/mpipartition.git
cd mpipartition
pip install -e .

Requirements

These packages will be automatically installed if they are not already present:

  • Python >= 3.8

  • mpi4py: MPI for Python

  • numpy: Python array library

  • numba: Python JIT compiler

Basic Usage

Check the documentation for an in-depth explanation of the package.

# this code goes into mpipartition_example.py

from mpipartition import Partition, distribute, overload
import numpy as np

# create a partition of the unit cube with available MPI ranks
box_size = 1.
partition = Partition()

if partition.rank == 0:
    print(f"Number of ranks: {partition.nranks}")
    print(f"Volume decomposition: {partition.decomposition}")

# create random data
nparticles_local = 1000
data = {
    "x": np.random.uniform(0, 1, nparticles_local),
    "y": np.random.uniform(0, 1, nparticles_local),
    "z": np.random.uniform(0, 1, nparticles_local)
}

# distribute data to ranks assigned to corresponding subvolume
data = distribute(partition, box_size, data, ('x', 'y', 'z'))

# overload "edge" of each subvolume by 0.05
data = overload(partition, box_size, data, 0.05, ('x', 'y', 'z'))

This code can then be executed with MPI:

mpirun -n 10 python mpipartition_example.py
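
As a quick sanity check, the particle counts can be tallied across ranks after distribute / overload. The following sketch, appended to mpipartition_example.py, assumes that the Partition object exposes its MPI communicator as partition.comm (an assumption; check the documentation for the exact attribute name):

# count particles across all ranks, including overloaded ghosts
nparticles_after = len(data["x"])
nparticles_total = partition.comm.allreduce(nparticles_after)  # MPI.SUM by default
if partition.rank == 0:
    # overloading duplicates particles near subvolume boundaries, so the total
    # exceeds the 10 * 1000 particles created across 10 ranks
    print(f"total particles incl. ghosts: {nparticles_total}")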

A more applied example, using halo catalogs from a HACC cosmological simulation (in the GenericIO data format):

from mpipartition import Partition, distribute, overload
import numpy as np
import pygio

# create a partition with available MPI ranks
box_size = 64.  # box size in Mpc/h
partition = Partition(3)  # 3 spatial dimensions (3 is also the default)

# read GenericIO data in parallel
data = pygio.read_genericio("m000p-499.haloproperties")

# distribute
data = distribute(partition, box_size, data, [f"fof_halo_center_{x}" for x in "xyz"])

# mark "owned" data with rank (allows differentiating owned and overloaded data)
data["status"] = partition.rank * np.ones(len(data["fof_halo_center_x"]), dtype=np.uint16)

# overload by 4 Mpc/h
data = overload(partition, box_size, data, 4., [f"fof_halo_center_{x}" for x in "xyz"])

# now we can do analysis such as 2-point correlation functions (up to 4 Mpc/h)
# or neighbor finding, etc.
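
Since the "status" column was assigned before overloading, it can be used afterwards to separate the halos a rank owns from its overloaded (“ghost”) copies. A minimal sketch, using only names from the example above:

# ghosts keep the rank of the subvolume they originate from
owned_mask = data["status"] == partition.rank
n_owned = np.sum(owned_mask)
n_ghost = len(owned_mask) - n_owned
print(f"rank {partition.rank}: {n_owned} owned halos, {n_ghost} ghost halos")

# e.g. restrict a measurement to owned halos to avoid double counting
owned_x = data["fof_halo_center_x"][owned_mask]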
