JAX bindings for the cuDecomp library

Project description

jaxDecomp: JAX Library for 3D Domain Decomposition and Parallel FFTs

Important Version 0.2.0 includes a pure JAX backend that no longer requires MPI. For multi-node runs, MPI and NCCL backends are still available through cuDecomp.

A JAX reimplementation of, and bindings for, NVIDIA's cuDecomp library (Romero et al. 2022), enabling multi-node parallel FFTs and halo exchanges to run over low-level NCCL or CUDA-aware MPI directly from your JAX code.

Important Starting from version 0.2.8, jaxDecomp supports JAX's Shardy partitioner, which can be activated via jax.config.update('jax_use_shardy_partitioner', True). This partitioner is enabled by default in JAX 0.7.x and later versions. Shardy support is an internal implementation change and users should not expect any behavioral differences outside of what the JAX sharding mechanism provides, as explained in the JAX Shardy migration documentation.
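
For example, to toggle the partitioner explicitly:

import jax

# Enable the Shardy partitioner (already the default in JAX 0.7.x and later)
jax.config.update('jax_use_shardy_partitioner', True)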


Usage

Below is a simple code snippet illustrating how to perform a 3D FFT on a distributed 3D array, followed by a halo exchange. For demonstration purposes, we force 8 CPU devices via environment variables:

import os
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=8"
os.environ["JAX_PLATFORM_NAME"] = "cpu"

import jax
from jax.sharding import Mesh, PartitionSpec as P, NamedSharding
import jaxdecomp

# Create a 2x4 mesh of devices on CPU
pdims = (2, 4)
mesh = jax.make_mesh(pdims, axis_names=('x', 'y'))
sharding = NamedSharding(mesh, P('x', 'y'))

# Create a random 3D array and enforce sharding
a = jax.random.normal(jax.random.PRNGKey(0), (1024, 1024, 1024))
a = jax.lax.with_sharding_constraint(a, sharding)

# Parallel FFTs
k_array = jaxdecomp.fft.pfft3d(a)
rec_array = jaxdecomp.fft.pifft3d(k_array)

# Parallel halo exchange
exchanged = jaxdecomp.halo_exchange(a, halo_extents=(16, 16), halo_periods=(True, True))

All these functions are JIT-compatible and support automatic differentiation (with some caveats).
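
For instance, here is a minimal sketch of a jitted, differentiable pipeline built on the snippet above (the scalar function power is ours, purely for illustration):

import jax.numpy as jnp

@jax.jit
def power(x):
    # Distributed forward FFT, reduced to a scalar summary
    k = jaxdecomp.fft.pfft3d(x)
    return jnp.sum(jnp.abs(k) ** 2)

# Gradients flow back through the parallel FFT
grads = jax.grad(power)(a)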

Important Multi-node FFTs work with both the JAX and cuDecomp backends.
On CPU with the pure-JAX backend, multi-node runs are supported starting from JAX v0.5.1 (via the gloo backend).


Running on an HPC Cluster

On HPC clusters (e.g., Jean Zay, Perlmutter), you typically launch your script with:

srun python demo.py

or

mpirun -n 8 python demo.py

See the Slurm README and template script for more details.
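
In a multi-process run, demo.py must initialize JAX's distributed runtime before creating any arrays. A minimal sketch, assuming one process per GPU and a process grid that matches the total device count:

import jax

# On SLURM (and, in recent JAX versions, Open MPI) clusters, the coordinator
# address, process id, and process count are detected from the environment.
jax.distributed.initialize()

import jaxdecomp
from jax.sharding import NamedSharding, PartitionSpec as P

pdims = (2, 4)  # pdims[0] * pdims[1] must equal the total number of devices
mesh = jax.make_mesh(pdims, axis_names=('x', 'y'))
sharding = NamedSharding(mesh, P('x', 'y'))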


Using cuDecomp (MPI and NCCL)

To access these backends, compile and install with cuDecomp enabled as described in the Install section below:

import jaxdecomp

# Optionally select communication backends (defaults to NCCL)
jaxdecomp.config.update('halo_comm_backend', jaxdecomp.HALO_COMM_MPI)
jaxdecomp.config.update('transpose_comm_backend', jaxdecomp.TRANSPOSE_COMM_MPI_A2A)

# Then specify 'backend="cudecomp"' in your FFT or halo calls:
karray = jaxdecomp.fft.pfft3d(global_array, backend='cudecomp')
recarray = jaxdecomp.fft.pifft3d(karray, backend='cudecomp')
exchanged_array = jaxdecomp.halo_exchange(
    padded_array, halo_extents=(16, 16), halo_periods=(True, True), backend='cudecomp'
)

Install

1. Pure JAX Version (Easy / Recommended)

jaxDecomp is on PyPI:

  1. Install the appropriate JAX wheel:
     • GPU:
       pip install --upgrade "jax[cuda]"
     • CPU:
       pip install --upgrade "jax[cpu]"
  2. Install jaxdecomp:
     pip install jaxdecomp

This setup uses the pure-JAX backend; no MPI is required.
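
A quick sanity check after installing (the __version__ attribute is an assumption; a failed import means the setup is incomplete):

import jax
import jaxdecomp

print(jax.devices())          # devices visible to JAX
print(jaxdecomp.__version__)  # e.g. '0.2.8'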

2. JAX + cuDecomp Backend (Advanced)

If you need MPI-based communication (instead of NCCL on GPU, or gloo on CPU), you can build from GitHub with cuDecomp enabled. This requires the NVIDIA HPC SDK or a similar environment providing a CUDA-aware MPI toolchain.

pip install -U pip
pip install git+https://github.com/DifferentiableUniverseInitiative/jaxDecomp -Ccmake.define.JD_CUDECOMP_BACKEND=ON
  • If CMake cannot find NVHPC, set:
      export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:$NVCOMPILERS/$NVARCH/22.9/cmake
    and then install again.

Machine-Specific Notes

IDRIS Jean Zay HPE SGI 8600 supercomputer

As of February 2025, loading modules in this exact order works:

module load nvidia-compilers/23.9 cuda/12.2.0 cudnn/8.9.7.29-cuda openmpi/4.1.5-cuda nccl/2.18.5-1-cuda cmake

# Install JAX
pip install --upgrade "jax[cuda]"

# Install jaxDecomp with cuDecomp
export CMAKE_PREFIX_PATH=$NVHPC_ROOT/cmake # sometimes needed
pip install git+https://github.com/DifferentiableUniverseInitiative/jaxDecomp -Ccmake.define.JD_CUDECOMP_BACKEND=ON

Note: If using only the pure-JAX backend, you do not need NVHPC.

NERSC Perlmutter HPE Cray EX supercomputer

As of November 2022:

module load PrgEnv-nvhpc python
export CRAY_ACCEL_TARGET=nvidia80

# Install JAX
pip install --upgrade "jax[cuda]"

# Install jaxDecomp w/ cuDecomp
export CMAKE_PREFIX_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/22.5/cmake
pip install git+https://github.com/DifferentiableUniverseInitiative/jaxDecomp -Ccmake.define.JD_CUDECOMP_BACKEND=ON

Backend Configuration (cuDecomp Only)

By default, cuDecomp uses NCCL for inter-device communication. You can customize this at runtime:

import jaxdecomp

# Choose MPI or NVSHMEM for halo and transpose ops
jaxdecomp.config.update('transpose_comm_backend', jaxdecomp.TRANSPOSE_COMM_MPI_A2A)
jaxdecomp.config.update('halo_comm_backend', jaxdecomp.HALO_COMM_MPI)

This can also be managed via environment variables, as described in the docs.


Autotune Computational Mesh (cuDecomp Only)

The cuDecomp library can autotune the partition layout to maximize performance:

automesh = jaxdecomp.autotune(shape=[512,512,512])
# 'automesh' is an optimized partition layout.
# You can then create a JAX Sharding spec from this:
from jax.sharding import PositionalSharding
sharding = PositionalSharding(automesh)
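
Note that PositionalSharding is deprecated in recent JAX releases; on newer versions, build a Mesh and NamedSharding from the autotuned layout instead.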

License: This project is licensed under the MIT License.

For more details, see the examples directory and the documentation. Contributions and issues are welcome!

Download files

Download the file for your platform.

Source Distribution

  jaxdecomp-0.2.8.tar.gz (42.6 MB)

Built Distributions

  jaxdecomp-0.2.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (170.4 kB, CPython 3.12, manylinux: glibc 2.17+ x86-64)
  jaxdecomp-0.2.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (171.0 kB, CPython 3.11, manylinux: glibc 2.17+ x86-64)
  jaxdecomp-0.2.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (169.6 kB, CPython 3.10, manylinux: glibc 2.17+ x86-64)

File details

The source distribution was uploaded via twine/6.1.0 on CPython 3.13.7 using Trusted Publishing. Each of the files below carries an attestation bundle from the publisher github-deploy.yml on DifferentiableUniverseInitiative/jaxDecomp; attestation values reflect the state when the release was signed and may no longer be current.

Hashes for jaxdecomp-0.2.8.tar.gz
  SHA256       0a35b3ca8b9083e6a9ee7d5dc88d683633dd23f9258a441a44bb5ad859643826
  MD5          66d780582d1bc8b8abb26f8fcac6890f
  BLAKE2b-256  8f14024fa1ce2b19b224d38947f0bcbcf4c2d1739495efb01471d109b2914f05

Hashes for jaxdecomp-0.2.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       f32003c0c96949a9d17ea12ab87665d3ef7aaf8a4dc2dab003f21f4783162de1
  MD5          a94dda7d12c6f5994af588e93a860d4e
  BLAKE2b-256  042953e325f5c29d4ee928ce693b5ba95ef16c6fc80067b0387299b72e3e375e

Hashes for jaxdecomp-0.2.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       120fdf14ce26620a5fae84b15755ccc23af6e4694d21dd45043b641337cdc94f
  MD5          e4cdefaa01cdff1f74f52a1bfe320769
  BLAKE2b-256  30d8d4f15045c24e49496d2798093680a9d0f2c3df06429676534c564e3fcbbd

Hashes for jaxdecomp-0.2.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256       bdb0eb3d67607f1f629ecf1977445944a5ca65673a704b68e4a8d14e57073adb
  MD5          76eba552e56d7d1976eebb2db41c620d
  BLAKE2b-256  3b35819d2a08f0f19bf26cac25b86483b72bd84d45350783ac4e5b43f28d0a17
