sphWarpCore

GPU-accelerated SPH neighbor search and operators built with NVIDIA Warp and PyTorch

sphWarpCore is a Warp- and PyTorch-based backend for core Smoothed Particle Hydrodynamics (SPH) operations. The package focuses on GPU-accelerated neighborhood construction and particle operators such as density estimation, interpolation, gradients, divergence, curl, Laplacians, and CRK-corrected variants.
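For context, the density operator computes the standard SPH summation ρᵢ = Σⱼ mⱼ W(xᵢ − xⱼ, h). A minimal plain-NumPy sketch of that summation (a normalized 1D Gaussian kernel stands in for the package's kernels here; it is an illustrative assumption, not the package's implementation):

```python
import numpy as np

# 1D particles on a uniform grid; mass = spacing, so the expected density is 1
n, dx = 200, 1.0 / 200
x = (np.arange(n) + 0.5) * dx
m = np.full(n, dx)
h = 2.0 * dx  # smoothing length (illustrative choice)

def W(r, h):
    # Normalized 1D Gaussian kernel: integrates to 1 over the real line
    return np.exp(-((r / h) ** 2)) / (h * np.sqrt(np.pi))

# Brute-force SPH density summation: rho_i = sum_j m_j * W(x_i - x_j, h)
rho = (m[None, :] * W(x[:, None] - x[None, :], h)).sum(axis=1)

print(rho[n // 2])  # interior value is close to 1 for uniform spacing
```

The package replaces the O(n²) brute-force pairing above with the compact-hash neighbor search and runs the summation in Warp kernels.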

The repository also includes notebooks that compare the Warp implementation against diffSPH reference workflows and demonstrate common operator setups.

Highlights

  • Compact-hash radius search for particle neighborhoods
  • Unified SPH operator entry point via sphOperation_warp
  • Multiple support and gradient schemes
  • CRK correction utilities for corrected interpolation and gradients
  • PyTorch tensor inputs and outputs, with Warp kernels under the hood

Installation

Install the package in editable mode while developing:

pip install -e .

If you want to run the example notebooks in this repository, install the notebook extras as well:

pip install -e ".[notebooks]"

For packaging and publishing, install the development extras (or, at a minimum, the build and twine packages):

pip install -e ".[dev]"

Requirements

Core package requirements:

  • Python 3.10+
  • PyTorch
  • warp-lang
  • NumPy

Notebook and plotting extras used in this repository:

  • matplotlib
  • ipywidgets
  • ipympl

GPU notes:

  • For GPU execution, use a CUDA-capable PyTorch build and a compatible NVIDIA driver.
  • A separate CUDA toolkit installation is no longer required for this package setup.
  • Warp still needs to be initialized once per process with wp.init().

Package Overview

Common entry points:

  • sphWarpCore.radiusSearchCompactHashMap: builds adjacency lists with compact hashing
  • sphWarpCore.sphOperation_warp: dispatches SPH operators from one high-level function
  • sphWarpCore.crk.computeCRKFactors: computes CRK apparent area and correction tensors
  • sphWarpCore.util.generateNeighborTestData: helper for generating regular particle test sets
  • sphWarpCore.util.getNextPrime: helper for choosing compact-hash table sizes
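getNextPrime's actual implementation is not shown here; the idea is to pick a prime table size at or above the particle count, since prime-sized tables spread hash collisions more evenly. A minimal sketch of that behavior (hypothetical helper, not the package's code):

```python
def next_prime(n: int) -> int:
    """Return the smallest prime >= n (trial division; fine for table sizing)."""
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    while not is_prime(n):
        n += 1
    return n

# e.g. choose a hash-map length for a 128x128 particle set
print(next_prime(128 * 128))
```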

Important enums:

  • WarpOperation: selects the SPH operator
  • KernelFunctions: selects the smoothing kernel
  • SupportScheme: selects gather/scatter/symmetric support handling
  • GradientScheme: selects the gradient formulation
  • OperationDirection: selects all-to-all or filtered interaction directionality
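The kernels named by KernelFunctions follow standard SPH forms. For reference, a sketch of the 2D Wendland C2 kernel under one common convention (support radius 2h, q = r/h; the package's own normalization and support convention may differ):

```python
import numpy as np

def wendland2_2d(r, h):
    """2D Wendland C2 kernel with support radius 2h and q = r/h
    (one common convention; check the package source for its exact form)."""
    q = r / h
    alpha = 7.0 / (4.0 * np.pi * h * h)  # normalization so the kernel integrates to 1
    return np.where(q < 2.0, alpha * (1.0 - 0.5 * q) ** 4 * (2.0 * q + 1.0), 0.0)

# Numerical check of the normalization: integrate W over the plane in polar coords
h = 0.1
r = np.linspace(0.0, 2.0 * h, 20001)
f = wendland2_2d(r, h) * 2.0 * np.pi * r
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
print(integral)  # ~1.0
```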

Example: Radius Search with Compact Hashing

This follows the same setup pattern used by prepData in the demo utilities.

import torch
import warp as wp

from sphWarpCore import radiusSearchCompactHashMap
from sphWarpCore.util import generateNeighborTestData, getNextPrime

wp.init()

device = torch.device("cuda")
nx = 128
dim = 2
target_num_neighbors = 50
periodic = True

positions, supports, num_particles, domain, dx = generateNeighborTestData(
    nx, target_num_neighbors, dim, periodic, device
)

query_positions = positions.contiguous()
reference_positions = positions.contiguous()
query_supports = supports
reference_supports = supports

hash_map_length = getNextPrime(num_particles)

adjacency = radiusSearchCompactHashMap(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    domain.periodic,
    domain,
    "gather",
    hash_map_length,
)

print(adjacency.numNeighbors.shape)
print(adjacency.edgeOffsets[:5])
print(adjacency.j[:10])

The returned adjacency list stores edge-level connectivity in COO/CSR-like form:

  • adjacency.i: query particle index per edge
  • adjacency.j: neighbor particle index per edge
  • adjacency.numNeighbors: neighbor count per query particle
  • adjacency.edgeOffsets: CSR-style start offset per query particle
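The CSR convention above can be illustrated with synthetic data (plain NumPy stand-ins, not the package's actual arrays): edgeOffsets[i] marks where particle i's edges start in the flat edge arrays, and numNeighbors[i] says how many follow.

```python
import numpy as np

# Synthetic adjacency in the same layout: 3 query particles with 2, 0, 3 neighbors
num_neighbors = np.array([2, 0, 3])
edge_offsets = np.concatenate(([0], np.cumsum(num_neighbors)[:-1]))  # [0, 2, 2]
j = np.array([4, 7, 1, 2, 9])  # neighbor indices, grouped by query particle

# Iterate the neighbors of each query particle i
for i in range(len(num_neighbors)):
    start = edge_offsets[i]
    neighbors = j[start : start + num_neighbors[i]]
    print(i, neighbors.tolist())
```

The same slicing works on the real adjacency tensors, since edgeOffsets is the exclusive prefix sum of numNeighbors.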

Example: SPH Interpolation

This mirrors the interpolation workflow used in warp_interpolate.ipynb.

import torch
import warp as wp

from sphWarpCore import radiusSearchCompactHashMap, sphOperation_warp
from sphWarpCore.enumTypes import KernelFunctions, OperationDirection, SupportScheme, WarpOperation
from sphWarpCore.util import generateNeighborTestData, getNextPrime

wp.init()

device = torch.device("cuda")
nx = 128
dim = 2
target_num_neighbors = 50
periodic = True

positions, supports, num_particles, domain, dx = generateNeighborTestData(
    nx, target_num_neighbors, dim, periodic, device
)

query_positions = positions.contiguous()
reference_positions = positions.contiguous()
query_supports = supports
reference_supports = supports

particle_mass = dx ** dim
query_masses = torch.full((num_particles,), particle_mass, device=device)
reference_masses = torch.full((num_particles,), particle_mass, device=device)

adjacency = radiusSearchCompactHashMap(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    domain.periodic,
    domain,
    "gather",
    getNextPrime(num_particles),
)

densities = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    None,
    None,
    None,
    None,
    domain,
    adjacency,
    operation=WarpOperation.Density,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

field = torch.sin(query_positions[:, 0])

interpolated = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    densities,
    densities,
    field,
    field,
    domain=domain,
    adjacency=adjacency,
    operation=WarpOperation.Interpolate,
    operationMode=OperationDirection.AllToAll,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

print(interpolated.shape)

Example: CRK-Corrected Gradient

This matches the corrected linear-field gradient workflow in warp_gradient.ipynb.

import torch
import warp as wp

from sphWarpCore import radiusSearchCompactHashMap, sphOperation_warp
from sphWarpCore.crk import computeCRKFactors
from sphWarpCore.enumTypes import (
    GradientScheme,
    KernelFunctions,
    OperationDirection,
    SupportScheme,
    WarpOperation,
)
from sphWarpCore.util import generateNeighborTestData, getNextPrime

wp.init()

device = torch.device("cuda")
nx = 128
dim = 2
target_num_neighbors = 50
periodic = True

positions, supports, num_particles, domain, dx = generateNeighborTestData(
    nx, target_num_neighbors, dim, periodic, device
)

query_positions = positions.contiguous()
reference_positions = positions.contiguous()
query_supports = supports
reference_supports = supports

particle_mass = dx ** dim
query_masses = torch.full((num_particles,), particle_mass, device=device)
reference_masses = torch.full((num_particles,), particle_mass, device=device)

adjacency = radiusSearchCompactHashMap(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    domain.periodic,
    domain,
    "gather",
    getNextPrime(num_particles),
)

densities = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    None,
    None,
    None,
    None,
    domain,
    adjacency,
    operation=WarpOperation.Density,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

apparent_area, crk_density, A, B, gradA, gradB = computeCRKFactors(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    domain=domain,
    adjacency=adjacency,
    operationMode=OperationDirection.AllToAll,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.Gather,
)

field = query_positions[:, 0] * 5.0 + 10.0

gradient = sphOperation_warp(
    query_positions,
    reference_positions,
    query_supports,
    reference_supports,
    query_masses,
    reference_masses,
    densities,
    densities,
    field,
    field,
    domain=domain,
    adjacency=adjacency,
    operation=WarpOperation.Gradient,
    operationMode=OperationDirection.AllToAll,
    kernel=KernelFunctions.Wendland2,
    supportMode=SupportScheme.SuperSymmetric,
    gradientMode=GradientScheme.Naive,
    useCRK=True,
    crk_A=A,
    crk_B=B,
    crk_gradA=gradA,
    crk_gradB=gradB,
    useVolume=True,
    queryVolumes=apparent_area,
    referenceVolumes=apparent_area,
)

print(gradient[:5])

computeCRKFactors also returns crk_density, which can be useful in other corrected workflows, but the gradient notebook example uses the standard density estimate together with CRK volume and correction tensors.
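The reproducing-kernel idea behind the CRK correction can be demonstrated in 1D with plain NumPy (this is an independent sketch of the technique, not the package's implementation): choosing per-particle A and B so that the corrected kernel W′ᵢⱼ = (Aᵢ + Bᵢ·(xⱼ − xᵢ)) Wᵢⱼ reproduces constants and linear fields makes the corrected interpolant exact for linear fields, even on irregular particles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregular 1D particle set with crude per-particle volumes from local spacing
n = 50
x = np.sort(rng.uniform(0.0, 1.0, n))
V = np.gradient(x)
h = 4.0 / n

def W(r, h):
    return np.exp(-((r / h) ** 2)) / (h * np.sqrt(np.pi))

dx = x[None, :] - x[:, None]  # dx[i, j] = x_j - x_i
Wij = W(dx, h)

# Moments and the 2x2 correction solve per particle:
# A*m0 + B*m1 = 1 (reproduce constants), A*m1 + B*m2 = 0 (reproduce linears)
m0 = (V * Wij).sum(axis=1)
m1 = (V * dx * Wij).sum(axis=1)
m2 = (V * dx**2 * Wij).sum(axis=1)
det = m0 * m2 - m1**2
A = m2 / det
B = -m1 / det

f = 5.0 * x + 10.0  # linear test field, as in the gradient example above
Wc = (A[:, None] + B[:, None] * dx) * Wij
f_interp = (V * f[None, :] * Wc).sum(axis=1)

print(np.abs(f_interp - f).max())  # ~0: linear fields are reproduced exactly
```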

Repository Notes

  • demo_util.py contains helper setup code used across the notebooks.
  • warp_interpolate.ipynb demonstrates interpolation calls and visualization.
  • warp_gradient.ipynb demonstrates both standard and CRK-corrected gradients.
  • packages.md is no longer authoritative for CUDA toolkit setup; the separate CUDA install noted there is no longer required.

Publishing To PyPI

This repository now includes two helper scripts:

  • scripts/setup_pypi_token.sh: stores a PyPI API token in ~/.pypirc
  • scripts/publish_pypi.sh: builds, validates, and uploads the package

Typical release flow:

bash scripts/setup_pypi_token.sh pypi
bash scripts/publish_pypi.sh

For a TestPyPI dry run:

bash scripts/setup_pypi_token.sh testpypi
bash scripts/publish_pypi.sh --testpypi

Before publishing, bump the package version in both pyproject.toml and src/sphWarpCore/__init__.py. The publish script checks that these two versions match and stops if they do not.
