GPU-accelerated SPH neighbor search and operators built with NVIDIA Warp and PyTorch

sphWarpCore

sphWarpCore is a Warp- and PyTorch-based backend for core Smoothed Particle Hydrodynamics (SPH) operations. The package focuses on GPU-accelerated neighborhood construction and particle operators such as density estimation, interpolation, gradients, divergence, curl, Laplacians, and CRK-corrected variants.

The repository also includes notebooks that compare the Warp implementation against diffSPH reference workflows and demonstrate common operator setups.

Highlights

  • Compact-hash radius search for particle neighborhoods
  • Unified SPH operator entry point via sphOperation_warp
  • Multiple support and gradient schemes
  • CRK correction utilities for corrected interpolation and gradients
  • PyTorch tensor inputs and outputs, with Warp kernels under the hood

Installation

Install the package in editable mode while developing:

pip install -e .

If you want to run the example notebooks in this repository, install the notebook extras as well:

pip install -e ".[notebooks]"

For packaging and publishing, install the development extras (or at minimum the build and twine packages):

pip install -e ".[dev]"

Requirements

Core package requirements:

  • Python 3.10+
  • PyTorch
  • warp-lang
  • NumPy

Notebook and plotting extras used in this repository:

  • matplotlib
  • ipywidgets
  • ipympl

GPU notes:

  • For GPU execution, use a CUDA-capable PyTorch build and a compatible NVIDIA driver.
  • A separate CUDA toolkit installation is no longer required for this package setup.
  • Warp still needs to be initialized once per process with wp.init().

Package Overview

Common entry points:

  • sphWarpCore.radiusSearchCompactHashMap: builds adjacency lists with compact hashing
  • sphWarpCore.sphOperation_warp: dispatches SPH operators from one high-level function
  • sphWarpCore.crk.computeCRKFactors: computes CRK apparent area and correction tensors
  • sphWarpCore.util.generateNeighborTestData: helper for generating regular particle test sets
  • sphWarpCore.util.getNextPrime: helper for choosing compact-hash table sizes
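Prime-sized tables reduce clustering in a compact hash. A next-prime helper in the spirit of getNextPrime can be sketched as follows; this is an illustrative trial-division version, not the package's actual implementation:

```python
def next_prime(n: int) -> int:
    """Return the smallest prime >= n; trial division is fine for table sizing."""
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        if k % 2 == 0:
            return k == 2
        f = 3
        while f * f <= k:
            if k % f == 0:
                return False
            f += 2
        return True

    candidate = max(n, 2)
    while not is_prime(candidate):
        candidate += 1
    return candidate
```

Passing the particle count through such a helper, as the examples below do with getNextPrime(num_particles), yields a hash-map length with no small factors.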

Important enums:

  • WarpOperation: selects the SPH operator
  • KernelFunctions: selects the smoothing kernel
  • SupportScheme: selects gather/scatter/symmetric support handling
  • GradientScheme: selects the gradient formulation
  • OperationDirection: selects all-to-all or filtered interaction directionality
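The examples below pass these enums as keyword arguments, and the single entry point routes to the matching Warp kernel. The dispatch pattern looks roughly like the following hypothetical sketch; the enum members mirror the ones used in the examples, but the table and function are illustrative, not the package's internals:

```python
from enum import Enum, auto

class WarpOperation(Enum):
    # A few members mirroring the operations used in the examples below.
    Density = auto()
    Interpolate = auto()
    Gradient = auto()

def select_kernel(operation: WarpOperation):
    # Hypothetical dispatch table from operation enum to a kernel launcher,
    # mirroring how one high-level function can route to different kernels.
    launchers = {
        WarpOperation.Density: lambda: "launch density kernel",
        WarpOperation.Interpolate: lambda: "launch interpolation kernel",
        WarpOperation.Gradient: lambda: "launch gradient kernel",
    }
    return launchers[operation]
```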

Example: Radius Search with Compact Hashing

This follows the same setup pattern used by prepData in the demo utilities.

import torch
import warp as wp

from sphWarpCore import radiusSearchCompactHashMap
from sphWarpCore.util import generateNeighborTestData, getNextPrime

wp.init()

device = torch.device("cuda")
nx = 128
dim = 2
target_num_neighbors = 50
periodic = True

positions, supports, num_particles, domain, dx = generateNeighborTestData(
    nx, target_num_neighbors, dim, periodic, device
)

query_positions = positions.contiguous()
reference_positions = positions.contiguous()
query_supports = supports
reference_supports = supports

hash_map_length = getNextPrime(num_particles)

adjacency = radiusSearchCompactHashMap(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    domain.periodic,
    domain,
    "gather",
    hash_map_length,
)

print(adjacency.numNeighbors.shape)
print(adjacency.edgeOffsets[:5])
print(adjacency.j[:10])

The returned adjacency list stores edge-level connectivity in COO/CSR-like form:

  • adjacency.i: query particle index per edge
  • adjacency.j: neighbor particle index per edge
  • adjacency.numNeighbors: neighbor count per query particle
  • adjacency.edgeOffsets: CSR-style start offset per query particle
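The CSR-style fields are redundant in the usual way: edgeOffsets[p] points at the first edge of query particle p, and the next numNeighbors[p] entries of adjacency.j are its neighbors. A small pure-Python sketch of that layout with synthetic data (no sphWarpCore dependency):

```python
# Edges sorted by query index i; j holds the neighbor index per edge.
i = [0, 0, 0, 1, 1, 2, 2, 2, 2]
j = [0, 1, 2, 0, 1, 0, 1, 2, 3]

num_particles = max(i) + 1
num_neighbors = [0] * num_particles
for q in i:
    num_neighbors[q] += 1

# CSR offsets: exclusive prefix sum of the neighbor counts.
edge_offsets = [0] * num_particles
for p in range(1, num_particles):
    edge_offsets[p] = edge_offsets[p - 1] + num_neighbors[p - 1]

def neighbors_of(p: int) -> list[int]:
    # Slice the edge arrays to recover the neighbor list of particle p.
    start = edge_offsets[p]
    return j[start:start + num_neighbors[p]]
```

Here neighbors_of(1) slices j[3:5], exactly the edges whose i entry is 1.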

Example: SPH Interpolation

This mirrors the interpolation workflow used in warp_interpolate.ipynb.

import torch
import warp as wp

from sphWarpCore import radiusSearchCompactHashMap, sphOperation_warp
from sphWarpCore.enumTypes import KernelFunctions, OperationDirection, SupportScheme, WarpOperation
from sphWarpCore.util import generateNeighborTestData, getNextPrime

wp.init()

device = torch.device("cuda")
nx = 128
dim = 2
target_num_neighbors = 50
periodic = True

positions, supports, num_particles, domain, dx = generateNeighborTestData(
    nx, target_num_neighbors, dim, periodic, device
)

query_positions = positions.contiguous()
reference_positions = positions.contiguous()
query_supports = supports
reference_supports = supports

particle_mass = dx ** dim
query_masses = torch.full((num_particles,), particle_mass, device=device)
reference_masses = torch.full((num_particles,), particle_mass, device=device)

adjacency = radiusSearchCompactHashMap(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    domain.periodic,
    domain,
    "gather",
    getNextPrime(num_particles),
)

densities = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    None,
    None,
    None,
    None,
    domain,
    adjacency,
    operation=WarpOperation.Density,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

field = torch.sin(query_positions[:, 0])

interpolated = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    densities,
    densities,
    field,
    field,
    domain=domain,
    adjacency=adjacency,
    operation=WarpOperation.Interpolate,
    operationMode=OperationDirection.AllToAll,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

print(interpolated.shape)
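WarpOperation.Interpolate computes the standard density-normalized SPH estimate, ⟨A⟩_i = Σ_j (m_j/ρ_j) A_j W_ij. A minimal pure-Python 1D sketch of that formula, for intuition only (it does not use sphWarpCore, and a uniform periodic line is chosen so that the constant field is reproduced exactly):

```python
def wendland_c2_1d(r, h):
    # 1D Wendland C2 kernel; the normalization constant cancels in the
    # density-normalized interpolation below, so its exact value is moot.
    q = abs(r) / h
    if q >= 1.0:
        return 0.0
    return (5.0 / (4.0 * h)) * (1.0 - q) ** 3 * (1.0 + 3.0 * q)

n, L = 64, 1.0
dx = L / n
h = 2.5 * dx                          # support radius
x = [(i + 0.5) * dx for i in range(n)]
m = [dx] * n                          # unit rest density -> mass = dx

def periodic_dist(a, b):
    d = abs(a - b)
    return min(d, L - d)

# SPH density estimate: rho_i = sum_j m_j W_ij
rho = [sum(m[k] * wendland_c2_1d(periodic_dist(x[i], x[k]), h) for k in range(n))
       for i in range(n)]

# SPH interpolation: <A>_i = sum_j (m_j / rho_j) A_j W_ij
A = [1.0] * n                         # constant field
interp = [sum((m[k] / rho[k]) * A[k] * wendland_c2_1d(periodic_dist(x[i], x[k]), h)
              for k in range(n)) for i in range(n)]
```

Because every ρ_j is identical on the uniform lattice, the sums cancel and interp recovers the constant to floating-point precision.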

Example: CRK-Corrected Gradient

This matches the corrected linear-field gradient workflow in warp_gradient.ipynb.

import torch
import warp as wp

from sphWarpCore import radiusSearchCompactHashMap, sphOperation_warp
from sphWarpCore.crk import computeCRKFactors
from sphWarpCore.enumTypes import (
    GradientScheme,
    KernelFunctions,
    OperationDirection,
    SupportScheme,
    WarpOperation,
)
from sphWarpCore.util import generateNeighborTestData, getNextPrime

wp.init()

device = torch.device("cuda")
nx = 128
dim = 2
target_num_neighbors = 50
periodic = True

positions, supports, num_particles, domain, dx = generateNeighborTestData(
    nx, target_num_neighbors, dim, periodic, device
)

query_positions = positions.contiguous()
reference_positions = positions.contiguous()
query_supports = supports
reference_supports = supports

particle_mass = dx ** dim
query_masses = torch.full((num_particles,), particle_mass, device=device)
reference_masses = torch.full((num_particles,), particle_mass, device=device)

adjacency = radiusSearchCompactHashMap(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    domain.periodic,
    domain,
    "gather",
    getNextPrime(num_particles),
)

densities = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    None,
    None,
    None,
    None,
    domain,
    adjacency,
    operation=WarpOperation.Density,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

apparent_area, crk_density, A, B, gradA, gradB = computeCRKFactors(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    domain=domain,
    adjacency=adjacency,
    operationMode=OperationDirection.AllToAll,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

field = query_positions[:, 0] * 5.0 + 10.0

gradient = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    densities,
    densities,
    field,
    field,
    domain=domain,
    adjacency=adjacency,
    operation=WarpOperation.Gradient,
    operationMode=OperationDirection.AllToAll,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.SuperSymmetric,
    gradientMode=GradientScheme.Naive,
    useCRK=True,
    crk_A=A,
    crk_B=B,
    crk_gradA=gradA,
    crk_gradB=gradB,
    useVolume=True,
    queryVolumes=apparent_area,
    referenceVolumes=apparent_area,
)

print(gradient[:5])

computeCRKFactors also returns crk_density, which can be useful in other corrected workflows, but the gradient notebook example uses the standard density estimate together with CRK volume and correction tensors.
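Why a corrected gradient reproduces the linear field exactly can be seen in a much smaller setting. The sketch below uses a plain 1D renormalized (first-order corrected) gradient rather than the package's full CRK machinery, which additionally corrects the zeroth moment; the structure is the same, though: a per-particle correction built from kernel moments rescales the naive difference gradient.

```python
def dwdr(r, h):
    # Radial derivative of a Wendland-C2-shaped kernel for r in [0, h);
    # the overall normalization cancels in the corrected gradient below.
    q = abs(r) / h
    if q >= 1.0:
        return 0.0
    return -12.0 * q * (1.0 - q) ** 2 / h   # d/dq[(1-q)^3 (1+3q)] = -12 q (1-q)^2

n, L = 64, 1.0
dx = L / n
h = 2.5 * dx
x = [(i + 0.5) * dx for i in range(n)]      # non-periodic line
V = [dx] * n                                # per-particle volumes m_j / rho_j

A = [5.0 * xi + 10.0 for xi in x]           # same linear test field as above

grad = []
for i in range(n):
    num = 0.0                               # naive difference-gradient sum
    den = 0.0                               # first kernel moment (the correction)
    for k in range(n):
        d = x[k] - x[i]
        if d == 0.0:
            continue
        g = dwdr(d, h) * (-1.0 if d > 0.0 else 1.0)   # dW/dx_i
        num += V[k] * (A[k] - A[i]) * g
        den += V[k] * d * g
    grad.append(num / den)                  # renormalized: exact for linear fields
```

For a linear field, num is exactly the slope times den, so dividing by the first moment recovers the slope everywhere, including at particles with truncated boundary neighborhoods where the uncorrected sum would be biased.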

Repository Notes

  • demo_util.py contains helper setup code used across the notebooks.
  • warp_interpolate.ipynb demonstrates interpolation calls and visualization.
  • warp_gradient.ipynb demonstrates both standard and CRK-corrected gradients.
  • packages.md is no longer authoritative for CUDA toolkit setup; the separate CUDA toolkit install noted there is no longer required.

Publishing To PyPI

The repository includes two helper scripts:

  • scripts/setup_pypi_token.sh: stores a PyPI API token in ~/.pypirc
  • scripts/publish_pypi.sh: builds, validates, and uploads the package

Typical release flow:

bash scripts/setup_pypi_token.sh pypi
bash scripts/publish_pypi.sh

For a TestPyPI dry run:

bash scripts/setup_pypi_token.sh testpypi
bash scripts/publish_pypi.sh --testpypi

Before publishing, bump the package version in both pyproject.toml and src/sphWarpCore/__init__.py. The publish script checks that these two versions match and stops if they do not.
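The matching check amounts to comparing two parsed version strings. A sketch of the idea (the publish script's actual logic may differ; the regex-based parsing here is illustrative):

```python
import re

def pyproject_version(text: str) -> str:
    # Pull version = "x.y.z" out of the [project] table of pyproject.toml.
    match = re.search(r'^version\s*=\s*["\']([^"\']+)["\']', text, re.MULTILINE)
    if match is None:
        raise ValueError("no version field found in pyproject.toml")
    return match.group(1)

def init_version(text: str) -> str:
    # Pull __version__ = "x.y.z" out of the package __init__.py.
    match = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', text)
    if match is None:
        raise ValueError("no __version__ found in __init__.py")
    return match.group(1)

sample_pyproject = '[project]\nname = "sphwarpcore"\nversion = "0.1.0"\n'
sample_init = '__version__ = "0.1.0"\n'
assert pyproject_version(sample_pyproject) == init_version(sample_init)
```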

