# gPAC: GPU-Accelerated Phase-Amplitude Coupling

GPU-accelerated Phase-Amplitude Coupling (PAC) calculation using PyTorch.
gPAC is a PyTorch-based package for efficient computation of Phase-Amplitude Coupling (PAC) via the Modulation Index (MI), with GPU acceleration. It provides:

- 341.8x speedup over TensorPAC (measured on real benchmarks)
- Smart memory management with auto/chunked/sequential strategies
- Full differentiability for deep learning integration
- Production-ready code with comprehensive tests and examples
- High correlation with TensorPAC (0.81 ± 0.04 across diverse PAC configurations)
## 🎯 Example Applications

- Static PAC analysis with comodulogram visualization
- Trainable PAC features for neural-network classification
## 🔬 Comparison with TensorPAC

Example comparison: phase 10 Hz, amplitude 60 Hz (PAC correlation: 0.847). The figure shows side-by-side PAC comodulograms with ground-truth markers (cyan crosses) and difference plots.

gPAC correlates highly with TensorPAC across 16 diverse PAC configurations: mean correlation 0.8113 ± 0.0419 (range 0.7365–0.8585).
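A coupled test signal like the 10 Hz / 60 Hz case can be synthesized by letting the amplitude of a fast carrier follow the phase of a slow oscillation. A minimal NumPy sketch (a generic illustration; the benchmark's actual signals come from gPAC's synthetic data generator):

```python
import numpy as np

fs, duration = 512, 2.0
t = np.arange(int(fs * duration)) / fs

slow = np.sin(2 * np.pi * 10 * t)             # 10 Hz phase-providing oscillation
envelope = 0.5 * (1 + slow)                   # amplitude tracks the slow phase
fast = envelope * np.sin(2 * np.pi * 60 * t)  # 60 Hz amplitude carrier

rng = np.random.default_rng(0)
signal = slow + fast + 0.1 * rng.standard_normal(t.size)  # noisy coupled signal
```

The comodulogram of such a signal should peak near (10 Hz, 60 Hz), which is what the cyan ground-truth crosses mark in the comparison figures.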
### Sample Comparison Results

| Example | Phase (Hz) | Amplitude (Hz) | PAC correlation |
|---|---|---|---|
| 1 | 4 | 40 | 0.826 |
| 2 | 12 | 100 | 0.730 |

The comparison uses identical frequency bands for both methods (25 log-spaced phase bands × 35 log-spaced amplitude bands), ensuring a fair comparison. Ground-truth PAC locations are marked with crosses.
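Explicit band lists like these can be built with log spacing and passed to both libraries. A sketch, assuming a proportional bandwidth around log-spaced centers (the edges, ranges, and `width_ratio` here are illustrative; the benchmark script defines the exact bands):

```python
import numpy as np

def log_spaced_bands(f_min, f_max, n_bands, width_ratio=0.25):
    """Return [low, high] bands around log-spaced centers.

    width_ratio is a hypothetical proportional bandwidth, not the
    exact scheme used in the gPAC/TensorPAC benchmark.
    """
    centers = np.logspace(np.log10(f_min), np.log10(f_max), n_bands)
    return [[c * (1 - width_ratio), c * (1 + width_ratio)] for c in centers]

pha_bands = log_spaced_bands(2, 20, 25)    # 25 log-spaced phase bands
amp_bands = log_spaced_bands(30, 150, 35)  # 35 log-spaced amplitude bands
```

Passing the same lists to gPAC (via `pha_bands_hz` / `amp_bands_hz`) and to TensorPAC keeps the frequency grids identical, so the resulting comodulograms are directly comparable.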
## 📊 Performance Benchmarks

*Figure: parameter-scaling comparison between gPAC (blue) and TensorPAC (red).*

### Parameter Performance Analysis

*Figure: PAC comodulogram comparison showing consistent results.*
## 🚀 Quick Start

### Installation

```bash
pip install gpu-pac

# Or install from source
git clone https://github.com/ywatanabe1989/gPAC.git
cd gPAC
pip install -e .
```

### Usage
```python
import torch
from torch.utils.data import DataLoader

from gpac import PAC
from gpac.dataset import SyntheticDataGenerator

# Generate a synthetic PAC dataset
generator = SyntheticDataGenerator(fs=512, duration_sec=2.0)
dataset = generator.dataset(n_samples=100, balanced=True)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Method 1: Specify a frequency range and number of bands
pac_model = PAC(
    seq_len=dataset[0][0].shape[-1],
    fs=512,
    pha_range_hz=(2, 20),    # Phase: 2-20 Hz
    pha_n_bands=10,          # 10 linearly spaced bands
    amp_range_hz=(30, 100),  # Amplitude: 30-100 Hz
    amp_n_bands=10,          # 10 linearly spaced bands
)

# Method 2: Direct band specification (alternative)
# pac_model = PAC(
#     seq_len=dataset[0][0].shape[-1],
#     fs=512,
#     pha_bands_hz=[[4, 8], [8, 12], [12, 20]],      # Theta, Alpha, Beta
#     amp_bands_hz=[[30, 50], [50, 80], [80, 120]],  # Low, Mid, High Gamma
# )

# Move to GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
pac_model = pac_model.to(device)

# Process a batch
for signals, labels, metadata in dataloader:
    signals = signals.to(device)

    # Calculate PAC
    results = pac_model(signals)
    pac_values = results["pac"]  # Shape: (batch, channels, pha_bands, amp_bands)

    print(f"Batch PAC shape: {pac_values.shape}")
    print(f"Max PAC value: {pac_values.max().item():.3f}")

    # Access frequency band definitions
    print(f"Phase bands: {pac_model.pha_bands_hz}")      # Tensor of shape (n_pha, 2), [low, high] Hz
    print(f"Amplitude bands: {pac_model.amp_bands_hz}")  # Tensor of shape (n_amp, 2), [low, high] Hz

    break  # Just show the first batch
```
For more examples, see the `examples/` directory.
## 🔧 Core Features

### Flexible Frequency Band Configuration

- Range-based: specify a frequency range and number of bands for automatic spacing
- Direct specification: define custom frequency bands for precise control
- Standard bands: compatible with theta, alpha, beta, and gamma conventions
- High resolution: supports 50+ bands for detailed analysis
- Band access: read frequency band definitions via the `pac.pha_bands_hz` and `pac.amp_bands_hz` properties
### GPU Optimization

- Multi-GPU support: automatic data parallelism across GPUs
- FP16 mode: half-precision computation for 2x memory efficiency
- Torch compilation: JIT compilation for additional speedup
- Batch processing: efficient handling of multiple signals
### Scientific Features

- Permutation testing: statistical validation with `n_perm` surrogates
- Z-score normalization: automatic statistical-significance testing
- Modulation Index: standard MI calculation with 18 phase bins
- Full differentiability: gradient support for deep learning applications
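The MI above is Tort et al.'s measure: the mean amplitude per phase bin, normalized to a distribution and scored by its KL divergence from uniform. A minimal differentiable torch sketch using soft Gaussian phase binning (the soft binning is an assumption to keep gradients flowing; gPAC's internal scheme may differ):

```python
import math
import torch

def modulation_index(phase, amplitude, n_bins=18):
    """Tort-style MI with soft phase binning (differentiable sketch).

    phase, amplitude: 1-D tensors of instantaneous phase (rad) and
    amplitude envelope. Soft Gaussian binning is an illustrative choice,
    not necessarily gPAC's exact implementation.
    """
    edges = torch.linspace(-torch.pi, torch.pi, n_bins + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    width = 2 * torch.pi / n_bins
    # Soft assignment of each sample to each bin via circular distance
    d = torch.atan2(torch.sin(phase[:, None] - centers[None, :]),
                    torch.cos(phase[:, None] - centers[None, :]))
    w = torch.exp(-0.5 * (d / (width / 2)) ** 2)
    # Mean amplitude per phase bin, normalized to a distribution
    amp_dist = (w * amplitude[:, None]).sum(dim=0) / w.sum(dim=0)
    p = amp_dist / amp_dist.sum()
    # KL divergence from uniform, normalized by log(n_bins)
    return (p * torch.log(p * n_bins + 1e-12)).sum() / math.log(n_bins)

# Toy coupled signal: amplitude peaks at phase 0
t = torch.linspace(0, 2, 1024)
phase = torch.remainder(2 * torch.pi * 10 * t, 2 * torch.pi) - torch.pi
amplitude = (1 + torch.cos(phase)).detach().requires_grad_(True)

mi = modulation_index(phase, amplitude)
mi.backward()
print(f"MI = {mi.item():.4f}")     # positive for a coupled signal
print(amplitude.grad is not None)  # gradients flow through the MI
```

Because every step is a smooth tensor operation, the MI can sit inside a loss function and be trained end-to-end, which is what makes trainable PAC features possible.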
## 🤝 Contributing

Contributions are welcome! Please see our contributing guidelines.
## 📖 Citation

If you use gPAC in your research, please cite:

```bibtex
@software{watanabe2025gpac,
  author = {Watanabe, Yusuke},
  title  = {gPAC: GPU-Accelerated Phase-Amplitude Coupling},
  year   = {2025},
  url    = {https://github.com/ywatanabe1989/gPAC}
}
```
## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.
## 🙏 Acknowledgments

- TensorPAC team for the reference implementation

For a fair comparison with TensorPAC, use identical frequency bands, as demonstrated in `./benchmark/pac_values_comparison_with_tensorpac/generate_16_comparison_pairs.py`.