PyG-Hyper-NN
PyTorch Geometric-based hypergraph neural networks library with 19+ state-of-the-art models for research and production.
PyG-Hyper-NN is a comprehensive library of hypergraph neural network models built on PyTorch Geometric. All implementations are faithfully ported from DHG-Bench (ICLR 2026), preserving the exact mathematical operations and algorithmic logic from the original papers. The library provides clean, typed implementations with standardized interfaces, comprehensive tests (373 tests), and production-ready code quality.
Key Features
19+ State-of-the-Art Models (from DHG-Bench)
- Basic Models: MLP, HGNN, HCHA, UniGNN, UniGCNII
- Set-Based Models: AllSet (SetGNN with PMA attention), EquivSetGNN
- Diffusion Models: TFHNN (Training-free PageRank propagation), HyperND (p-norm diffusion)
- Phenomenological Models: PhenomNN (Multi-scale iterative propagation), PhenomNNS (Simplified variant)
- Graph Expansion: CEGCN, CEGAT (Clique expansion), HyperGCN (Mediator-based), LEGCN (Line expansion)
- Transformers: HyperGT (Kernelized attention with O(N) complexity)
- Degree-Based: HNHN (Learnable alpha/beta normalization), HJRL (Joint node-edge representation)
- Advanced Architectures: SheafHyperGNN (Sheaf theory with CP decomposition), EDGNN (Equivariant diffusion)
- Diverse Approaches: Message passing, attention, transformers, equivariant operations, diffusion, phenomenological modeling, sheaf theory
- Research-Backed: Faithful implementations from published papers (AAAI 2019-2024, IJCAI 2021, ICLR 2022-2025, NeurIPS 2019-2023, ICML 2022-2023, etc.)
Clean Architecture
- Modular Design: Separate layers/ and models/ packages for maximum reusability
- Standardized Interface: Consistent API across all models
- Type Safety: Full type annotations with ty checking
- No External Config: Pure PyTorch, no args/config file dependencies
Research-Ready
- Faithful Implementations: All models faithfully ported from DHG-Bench, preserving exact mathematical operations
- Comprehensive Tests: 373 tests covering all layers and models (100% pass rate)
- Gradient Flow Verified: All models tested for proper backpropagation
- Reproducible: Fixed initialization and deterministic operations
- Documented: Docstrings with paper references and parameter explanations
- No Research Fraud: Never simplified or created "fake" implementations; mathematical integrity preserved
Production-Quality
- Modern Python: Python 3.12+, type hints, dataclasses
- Code Quality: Ruff linting + ty type checking (100% pass rate)
- CI/CD Ready: GitHub Actions workflows for testing and deployment
- GPU Optimized: CUDA 12.6 support with mixed precision training
Installation
Prerequisites
This package requires PyTorch Geometric to be installed. Install it first:
pip install torch torch-geometric
For GPU support with CUDA 12.6:
pip install torch --index-url https://download.pytorch.org/whl/cu126
pip install torch-geometric
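Before continuing, you can confirm that PyTorch actually sees the GPU. This is a standard PyTorch check, independent of this library:
# Should print True and the CUDA version if the GPU build is active
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"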
Using uv (Recommended)
# Clone the repository
git clone https://github.com/nishide-dev/pyg-hyper-nn.git
cd pyg-hyper-nn
# Install with all dependencies
uv sync
# Verify installation
uv run python -c "from pyg_hyper_nn.models import HGNN; print('Installation successful!')"
Using pip
# Install from source
git clone https://github.com/nishide-dev/pyg-hyper-nn.git
cd pyg-hyper-nn
pip install -e .
# Or install directly from GitHub (when published)
pip install git+https://github.com/nishide-dev/pyg-hyper-nn.git
Requirements
- Python ≥ 3.12
- PyTorch ≥ 2.8
- PyTorch Geometric ≥ 2.6
- torch-scatter, torch-sparse
- NumPy ≥ 1.24
Quick Start
Basic Node Classification
import torch
from pyg_hyper_nn.models import HGNN
# Create model
model = HGNN(
in_channels=16, # Input feature dimension
hidden_channels=32, # Hidden layer dimension
out_channels=7, # Number of classes
num_layers=2, # Number of convolution layers
dropout=0.6, # Dropout probability
)
# Prepare data
x = torch.randn(100, 16) # Node features [num_nodes, in_channels]
hyperedge_index = torch.tensor([
[0, 1, 2, 1, 2, 3], # Node indices
[0, 0, 0, 1, 1, 1], # Hyperedge indices
])
# Forward pass
out = model(x, hyperedge_index) # Output: [num_nodes, out_channels]
print(f"Output shape: {out.shape}") # torch.Size([100, 7])
Using Different Models
from pyg_hyper_nn.models import HGNN, HyperGCN, UniGNN, MLP
# HGNN - Classic hypergraph convolution (Feng et al., AAAI 2019)
hgnn = HGNN(16, 32, 7, num_layers=2)
# HyperGCN - Supremum-infimum projection (Yadati et al., NeurIPS 2019)
hypergcn = HyperGCN(
in_channels=16,
hidden_channels=32,
out_channels=7,
num_layers=2,
fast=True, # Precompute structure once
mediators=True, # Use two-star expansion with mediators
)
# UniGNN - Universal message passing (Huang & Yang, IJCAI 2021)
unignn = UniGNN(
in_channels=16,
hidden_channels=32,
out_channels=7,
num_layers=2,
heads=4, # Multi-head attention
first_aggregate="mean", # Vertex-to-hyperedge aggregation
)
# MLP - Baseline model (no graph structure)
mlp = MLP(16, 32, 7, num_layers=2, normalization="bn")
# All models share the same interface (x and hyperedge_index from the Quick Start above)
for model in [hgnn, hypergcn, unignn, mlp]:
out = model(x, hyperedge_index) # MLP ignores hyperedge_index
print(f"{model.__class__.__name__}: {out.shape}")
Training Example
import torch
import torch.nn.functional as F
from pyg_hyper_nn.models import HGNN
# Setup (x and hyperedge_index are reused from the Quick Start example above)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = HGNN(in_channels=16, hidden_channels=32, out_channels=7, num_layers=2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
# Synthetic labels and masks so the example runs end to end
y = torch.randint(0, 7, (100,), device=device)
train_mask = torch.zeros(100, dtype=torch.bool, device=device)
train_mask[:60] = True
test_mask = ~train_mask
# Training loop
model.train()
for epoch in range(100):
optimizer.zero_grad()
# Forward pass
out = model(x.to(device), hyperedge_index.to(device))
loss = F.cross_entropy(out[train_mask], y[train_mask])
# Backward pass
loss.backward()
optimizer.step()
if epoch % 10 == 0:
print(f"Epoch {epoch:3d} | Loss: {loss.item():.4f}")
# Evaluation
model.eval()
with torch.no_grad():
out = model(x.to(device), hyperedge_index.to(device))
pred = out.argmax(dim=1)
acc = (pred[test_mask] == y[test_mask]).float().mean()
print(f"Test Accuracy: {acc:.4f}")
Available Models
Implemented Models (19/26)
All implementations are based on the official DHG-Bench implementations, faithfully preserving the mathematical operations and algorithmic logic from the original papers.
| Model | Paper | Venue | Year | Key Features |
|---|---|---|---|---|
| MLP | - | Baseline | - | Standard multi-layer perceptron |
| HGNN | Hypergraph neural networks | AAAI | 2019 | Symmetric degree normalization (D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}) |
| HCHA | Hypergraph convolution and hypergraph attention | PR | 2020 | Asymmetric normalization without D_v^{-1/2} |
| HyperGCN | HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs | NeurIPS | 2019 | Supremum-infimum projection with mediators |
| HNHN | HNHN: Hypergraph Networks with Hyperedge Neurons | ICML WS | 2020 | Two-stage message passing with learnable alpha/beta normalization |
| UniGNN | UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks | IJCAI | 2021 | Universal vertex-hyperedge message passing framework |
| UniGCNII | UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks | IJCAI | 2021 | GCNII-style initial residual + adaptive identity mapping |
| AllSet | You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks | ICLR | 2022 | Multiset functions with PMA attention mechanism |
| HyperND | Nonlinear Feature Diffusion on Hypergraphs | ICML | 2022 | Iterative p-norm diffusion with personalized PageRank restart |
| LEGCN | Semi-supervised Hypergraph Node Classification on Hypergraph Line Expansion | CIKM | 2022 | Line expansion GCN for hypergraphs |
| EquivSetGNN | Equivariant Hypergraph Neural Networks | ECCV | 2022 | Equivariant set operations with deep residuals |
| ED-HNN | Equivariant Hypergraph Diffusion Neural Operators | ICLR | 2023 | Equivariant hypergraph diffusion (implemented as EDGNN) |
| PhenomNN | From Hypergraph Energy Functions to Hypergraph Neural Networks | ICML | 2023 | Multi-scale phenomenological modeling with two normalization schemes |
| SheafHyperGNN | Sheaf Hypergraph Networks | NeurIPS | 2023 | Sheaf-theoretic learning with diagonal restriction maps and CP decomposition |
| HJRL | Hypergraph Joint Representation Learning for Hypervertices and Hyperedges via Cross Expansion | AAAI | 2024 | Joint node-edge representation learning with 4 propagation paths |
| HyperGT | Hypergraph Transformer for Semi-Supervised Classification | ICASSP | 2024 | Kernelized attention with O(N) complexity using random Fourier features |
| TFHNN | Training-Free Message Passing for Learning on Hypergraphs | ICLR | 2025 | Training-free tensor factorization with personalized PageRank |
| CEGCN | Based on clique expansion | - | - | Clique expansion + standard GCN layers |
| CEGAT | Based on clique expansion | - | - | Clique expansion + GAT layers with multi-head attention |
Note: PhenomNNS (Simplified PhenomNN) and PlainUnigencoder are additional utility models for faster computation and group pooling operations.
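For intuition, the symmetric normalization quoted in the HGNN row above, D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}, can be spelled out on a dense incidence matrix. The sketch below is illustrative only; the library itself operates on the sparse hyperedge_index representation:

import torch

def hgnn_propagation(H: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Dense D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} (for intuition only).

    H: (num_nodes, num_edges) binary incidence matrix.
    w: (num_edges,) hyperedge weights.
    """
    d_v = (H * w).sum(dim=1)   # weighted vertex degrees
    d_e = H.sum(dim=0)         # hyperedge degrees (sizes)
    Dv = torch.diag(d_v.clamp(min=1e-12).pow(-0.5))
    De = torch.diag(d_e.clamp(min=1e-12).reciprocal())
    return Dv @ H @ torch.diag(w) @ De @ H.T @ Dv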
Coming Soon (7 models remaining)
The remaining models from DHG-Bench present significant implementation challenges due to custom data structures and non-standard interfaces. The most demanding are listed below:
| Model | Paper | Venue | Year | Implementation Status |
|---|---|---|---|---|
| T-HyperGNNs | T-HyperGNNs: Hypergraph Neural Networks via Tensor Representations | TNNLS | 2024 | Requires custom tensor aggregation (TMPHN-style) |
| DPHGNN | DPHGNN: A Dual Perspective Hypergraph Neural Networks | KDD | 2024 | Requires TAA (Topology-Aware Attention) and multiple graph expansions |
| EHNN | Equivariant hypergraph neural networks | ECCV | 2022 | Requires hypernetwork for dynamic weight generation |
Why these are difficult: All remaining models depend on special data structures (neig_dict, ehnn_cache, multiple adjacency matrices) that cannot be easily expressed with PyG's standard hyperedge_index interface. See CLAUDE.md for a detailed technical analysis.
Utility function available: build_neighbor_dict() has been implemented as a foundation for future TMPHN implementation (6 tests passing).
See docs/models.md for detailed model descriptions and API documentation.
Project Structure
pyg-hyper-nn/
├── src/pyg_hyper_nn/
│   ├── __init__.py          # Package entry point
│   ├── py.typed             # Type information marker
│   ├── layers/              # Reusable layer implementations
│   │   ├── __init__.py
│   │   ├── conv.py          # HypergraphConv, AllSetConv, etc.
│   │   ├── attention.py     # PMA, multi-head attention
│   │   ├── pooling.py       # Hypergraph pooling layers
│   │   ├── mlp.py           # MLP blocks
│   │   └── utils.py         # Initialization utilities
│   └── models/              # Complete model implementations
│       ├── __init__.py
│       ├── mlp.py           # MLP baseline
│       ├── hgnn.py          # HGNN, HCHA
│       ├── unignn.py        # UniGNN
│       ├── hypergcn.py      # HyperGCN
│       ├── allset.py        # AllSet
│       └── ...              # additional model implementations
├── tests/
│   ├── test_layers/         # Layer unit tests
│   │   ├── test_conv.py     # HypergraphConv tests
│   │   └── test_attention.py  # Attention tests
│   └── test_models/         # Model integration tests
│       └── test_basic.py    # MLP, HGNN, UniGNN tests
├── docs/
│   ├── models.md            # Model documentation
│   ├── layers.md            # Layer API reference
│   └── examples/            # Usage examples
├── .github/
│   └── workflows/
│       ├── test.yml         # CI testing
│       └── publish.yml      # PyPI publishing
├── pyproject.toml           # Project configuration
├── ruff.toml                # Ruff linting config
└── README.md                # This file
Development
Running Tests
# Run all tests (373 tests passing)
uv run pytest tests/ -v
# Run specific test suite
uv run pytest tests/test_models/test_basic.py -v
uv run pytest tests/test_layers/test_conv.py -v
# Run with coverage
uv run pytest tests/ --cov=src --cov-report=html
# View coverage report
open htmlcov/index.html
Code Quality Checks
# Run all quality checks
uv run ruff check src/ tests/ # Linting
uv run ruff format src/ tests/ # Formatting
uv run ty check src/ # Type checking
# Auto-fix issues
uv run ruff check --fix src/ tests/
# Current status: all checks passing
Pre-commit Hooks
Set up automatic code quality checks before commits:
# Install pre-commit
uv add --dev pre-commit
# Install the git hooks
uv run pre-commit install
# Run hooks manually on all files
uv run pre-commit run --all-files
# Now hooks run automatically on every commit!
# - ruff lint --fix: Auto-fix linting issues
# - ruff format: Auto-format code
# - ty check: Type checking
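For reference, a .pre-commit-config.yaml wired to these tools might look like the sketch below (local hooks running through uv; the repository's actual configuration may differ):
# Sketch only; the repository's actual config may differ.
repos:
  - repo: local
    hooks:
      - id: ruff-lint
        name: ruff lint --fix
        entry: uv run ruff check --fix
        language: system
        types: [python]
      - id: ruff-format
        name: ruff format
        entry: uv run ruff format
        language: system
        types: [python]
      - id: ty-check
        name: ty check
        entry: uv run ty check src/
        language: system
        pass_filenames: false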
Adding Dependencies
# Add runtime dependency
uv add <package-name>
# Add development dependency
uv add --dev <package-name>
# Update all dependencies
uv lock --upgrade
Model Interface Standard
All models in PyG-Hyper-NN follow a consistent interface:
from typing import Optional

from torch import Tensor, nn

class ModelName(nn.Module):
"""Model description with paper reference.
Args:
in_channels: Size of input node features.
hidden_channels: Size of hidden layer features.
out_channels: Number of output classes.
num_layers: Number of convolution layers.
dropout: Dropout probability. Default: 0.5.
**kwargs: Model-specific parameters.
"""
def __init__(
self,
in_channels: int,
hidden_channels: int,
out_channels: int,
num_layers: int,
dropout: float = 0.5,
**kwargs,
):
...
def reset_parameters(self) -> None:
"""Reset all learnable parameters."""
...
def forward(
self,
x: Tensor,
hyperedge_index: Tensor,
hyperedge_weight: Optional[Tensor] = None,
) -> Tensor:
"""Forward pass.
Args:
x: Node feature matrix of shape (num_nodes, in_channels).
hyperedge_index: Hyperedge indices in COO format of shape (2, num_edges).
hyperedge_weight: Optional hyperedge weights of shape (num_hyperedges,).
Returns:
Output predictions of shape (num_nodes, out_channels).
"""
...
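To make the standard concrete, here is a minimal model that conforms to this interface, built on PyG's stock HypergraphConv layer. It is an illustrative sketch, not one of the library's implementations:

import torch.nn.functional as F
from torch import Tensor, nn
from torch_geometric.nn import HypergraphConv

class TinyHGNN(nn.Module):
    """Minimal interface-conforming example; not part of pyg_hyper_nn."""

    def __init__(self, in_channels: int, hidden_channels: int,
                 out_channels: int, num_layers: int, dropout: float = 0.5):
        super().__init__()
        self.dropout = dropout
        dims = [in_channels] + [hidden_channels] * (num_layers - 1) + [out_channels]
        self.convs = nn.ModuleList(
            HypergraphConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )

    def reset_parameters(self) -> None:
        for conv in self.convs:
            conv.reset_parameters()

    def forward(self, x: Tensor, hyperedge_index: Tensor,
                hyperedge_weight: Tensor | None = None) -> Tensor:
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, hyperedge_index, hyperedge_weight))
            x = F.dropout(x, p=self.dropout, training=self.training)
        return self.convs[-1](x, hyperedge_index, hyperedge_weight)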
Documentation
- Model Documentation - Detailed model descriptions and API
- Layer API - Layer-level documentation
- Examples - Usage examples and tutorials
- Contributing Guide - Development guidelines
- CUDA Setup - GPU environment setup
Contributing
Contributions are welcome! We're actively working on the remaining 7 models.
Priority Tasks
- Remaining Models: T-HyperGNNs, DPHGNN, EHNN (see the Coming Soon table above)
- Preprocessing Infrastructure: Support for line expansion, Laplacian computation
- Documentation: Model docs, usage examples
- Testing: Additional edge cases, performance benchmarks
See CONTRIBUTING.md for detailed guidelines.
Citation
This library is based on implementations from DHG-Bench, a comprehensive benchmark for deep hypergraph learning. We faithfully preserve the mathematical operations and algorithmic logic from the original papers.
If you use pyg-hyper-nn or any of the implemented models in your research, please consider citing the DHG-Bench paper:
@article{li2025dhg,
title={DHG-Bench: A Comprehensive Benchmark for Deep Hypergraph Learning},
author={Li, Fan and Wang, Xiaoyang and Zhang, Wenjie and Zhang, Ying and Lin, Xuemin},
journal={arXiv preprint arXiv:2508.12244},
year={2025}
}
DHG-Bench Paper: https://openreview.net/forum?id=lhsb1ChUDF
Individual Model Citations
When using specific models, please also cite the original papers:
HGNN (AAAI 2019):
@inproceedings{feng2019hypergraph,
title={Hypergraph neural networks},
author={Feng, Yifan and You, Haoxuan and Zhang, Zizhao and Ji, Rongrong and Gao, Yue},
booktitle={Proceedings of the AAAI conference on artificial intelligence},
volume={33},
pages={3558--3565},
year={2019}
}
HyperGCN (NeurIPS 2019):
@inproceedings{yadati2019hypergcn,
title={Hypergcn: A new method for training graph convolutional networks on hypergraphs},
author={Yadati, Naganand and Nimishakavi, Madhav and Yadav, Prateek and Nitin, Vikram and Louis, Anand and Talukdar, Partha},
booktitle={Advances in neural information processing systems},
volume={32},
year={2019}
}
UniGNN (IJCAI 2021):
@inproceedings{huang2021unignn,
title={UniGNN: a unified framework for graph and hypergraph neural networks},
author={Huang, Jing and Yang, Jie},
booktitle={Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence},
pages={2563--2569},
year={2021}
}
AllSet (ICLR 2022):
@inproceedings{chien2022you,
title={You are allset: A multiset function framework for hypergraph neural networks},
author={Chien, Eli and Pan, Chao and Peng, Jianhao and Milenkovic, Olgica},
booktitle={International Conference on Learning Representations},
year={2022}
}
HyperND (ICML 2022):
@inproceedings{prokopchik2022nonlinear,
title={Nonlinear feature diffusion on hypergraphs},
author={Prokopchik, Konstantin and Benson, Austin R and Tudisco, Francesco},
booktitle={International Conference on Machine Learning},
pages={17932--17951},
year={2022}
}
PhenomNN (ICML 2023):
@inproceedings{wang2023hypergraph,
title={From hypergraph energy functions to hypergraph neural networks},
author={Wang, Yuxin and Gan, Quan and Qiu, Xipeng and Huang, Xuanjing and Wipf, David},
booktitle={International Conference on Machine Learning},
pages={36433--36448},
year={2023}
}
SheafHyperGNN (NeurIPS 2023):
@inproceedings{duta2023sheaf,
title={Sheaf hypergraph networks},
author={Duta, Iulia and Cassar{\`a}, Giulia and Silvestri, Fabrizio and Li{\`o}, Pietro},
booktitle={Advances in Neural Information Processing Systems},
volume={36},
pages={76714--76733},
year={2023}
}
HyperGT (ICASSP 2024):
@inproceedings{gao2024hypergraph,
title={Hypergraph Transformer for Semi-Supervised Classification},
author={Gao, Zeyu and Zhang, Chao and Zhang, Zhenpeng and Zhu, Fengli and Li, Jianan and Yu, Jing},
booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing},
pages={5690--5694},
year={2024}
}
HJRL (AAAI 2024):
@inproceedings{ju2024hypergraph,
title={Hypergraph Joint Representation Learning for Hypervertices and Hyperedges via Cross Expansion},
author={Ju, Wei and Luo, Yi and Fang, Yifan and Zhang, Zhiping and Zhang, Ming},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
pages={8633--8641},
year={2024}
}
TFHNN (ICLR 2025):
@inproceedings{luo2025training,
title={Training-Free Message Passing for Learning on Hypergraphs},
author={Luo, Bohan and Lin, Zhezheng and Feng, Yilong and Wu, Zheng-Jun and Wang, Stan Z},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025}
}
For other models (HNHN, LEGCN, EDGNN, HCHA, etc.), please refer to the DHG-Bench paper and the original papers listed in the model table above.
License
MIT License - see LICENSE file for details.
Built with ❤️ for hypergraph learning research | Based on DHG-Bench