gPAC: GPU-Accelerated Phase-Amplitude Coupling

GPU-accelerated Phase-Amplitude Coupling calculation using PyTorch (Python 3.8+, MIT license).

gPAC is a PyTorch-based package for efficient computation of Phase-Amplitude Coupling (PAC) using Modulation Index (MI) with GPU acceleration. It provides:

  • 341.8x speedup over TensorPAC (tested on real benchmarks)
  • Smart memory management with auto/chunked/sequential strategies
  • Full differentiability for deep learning integration
  • Production-ready with comprehensive tests and examples
  • High correlation with TensorPAC (0.81 ± 0.04 across diverse PAC configurations)

🎯 Example Applications

  • Static PAC Analysis: comodulogram visualization
  • Trainable PAC Classification: deep learning integration
  • Static vs Trainable Comparison: performance & accuracy analysis
  • Amplitude Distributions: phase preference for clinical analysis

🔬 PAC Values Comparison with TensorPAC

  • Comparison pair 1 (phase 4 Hz, amplitude 40 Hz): correlation 0.826
  • Comparison pair 4 (phase 12 Hz, amplitude 100 Hz): correlation 0.730
  • Overall correlation: 0.811 ± 0.042 (n=16)

Ground truth PAC locations are marked with crosses in the comparison figures.

📊 Performance Benchmarks

  • Parameter scaling comparison: gPAC (blue) vs TensorPAC (red)
  • Performance analysis: speed & memory efficiency

🚀 Quick Start

Installation:

```shell
pip install gpu-pac
```

```python
import torch
from torch.utils.data import DataLoader
from gpac import PAC
from gpac.dataset import SyntheticDataGenerator

# Generate synthetic PAC dataset
generator = SyntheticDataGenerator(fs=512, duration_sec=2.0)
dataset = generator.dataset(n_samples=100, balanced=True)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Method 1: Specify frequency range and number of bands
pac_model = PAC(
    seq_len=dataset[0][0].shape[-1],
    fs=512,
    pha_range_hz=(2, 20),    # Phase: 2-20 Hz
    pha_n_bands=10,          # 10 linearly spaced bands
    amp_range_hz=(30, 100),  # Amplitude: 30-100 Hz
    amp_n_bands=10,          # 10 linearly spaced bands
)

# Method 2: Direct band specification (alternative)
# pac_model = PAC(
#     seq_len=dataset[0][0].shape[-1],
#     fs=512,
#     pha_bands_hz=[[4, 8], [8, 12], [12, 20]],      # Theta, Alpha, Beta
#     amp_bands_hz=[[30, 50], [50, 80], [80, 120]],  # Low, Mid, High Gamma
# )

# Move to GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
pac_model = pac_model.to(device)

# Process a batch
for signals, labels, metadata in dataloader:
    signals = signals.to(device)

    # Calculate PAC
    results = pac_model(signals)
    pac_values = results['pac']  # Shape: (batch, channels, pha_bands, amp_bands)

    print(f"Batch PAC shape: {pac_values.shape}")
    print(f"Max PAC value: {pac_values.max().item():.3f}")

    # Access frequency band definitions
    print(f"Phase bands: {pac_model.pha_bands_hz}")      # Tensor of shape (n_pha, 2) with [low, high] Hz
    print(f"Amplitude bands: {pac_model.amp_bands_hz}")  # Tensor of shape (n_amp, 2) with [low, high] Hz

    # Advanced: Get amplitude distributions for phase preference analysis
    results_with_dist = pac_model(signals, compute_distributions=True)
    amp_distributions = results_with_dist['amplitude_distributions']
    print(f"Amplitude distributions shape: {amp_distributions.shape}")
    # Shape: (batch, channels, pha_bands, amp_bands, n_phase_bins=18)

    break  # Just show first batch
```

For more examples, see the examples directory.
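
The internals of SyntheticDataGenerator are not documented here, but the idea behind a phase-amplitude coupled test signal is easy to sketch in plain NumPy: a fast oscillation whose amplitude envelope follows the phase of a slow one. The function name and parameters below are illustrative, not part of the gPAC API:

```python
import numpy as np

def synthetic_pac_signal(fs=512, duration_sec=2.0, f_pha=6.0, f_amp=60.0,
                         coupling=0.8, noise=0.1, seed=0):
    """Generate a signal whose f_amp amplitude is modulated by the f_pha phase."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration_sec)) / fs
    slow = np.sin(2 * np.pi * f_pha * t)          # phase-providing rhythm
    # The fast rhythm's envelope follows the slow rhythm
    envelope = 1.0 + coupling * slow
    fast = envelope * np.sin(2 * np.pi * f_amp * t)
    return slow + fast + noise * rng.standard_normal(t.size)

sig = synthetic_pac_signal()
print(sig.shape)  # (1024,) for fs=512 and 2 s duration
```

With coupling=0 the fast rhythm's envelope is flat and a comodulogram should show no peak; increasing coupling strengthens the expected peak at (6 Hz, 60 Hz).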

📊 Amplitude Distributions for Clinical Analysis

The compute_distributions=True option provides detailed phase preference analysis, particularly useful for seizure detection and neurophysiological research:

```python
import numpy as np
import torch

# Continuing from the Quick Start example: pac_model and signals already exist
# Compute PAC with amplitude distributions
results = pac_model(signals, compute_distributions=True)

# Access distributions
pac_values = results['pac']
amp_distributions = results['amplitude_distributions']
phase_bin_centers = results['phase_bin_centers']  # Phase bins in radians

# Analyze phase preference for strongest coupling
batch_idx, ch_idx = 0, 0
max_idx = pac_values[batch_idx, ch_idx].argmax()
pha_idx, amp_idx = np.unravel_index(max_idx.item(), pac_values[batch_idx, ch_idx].shape)

# Get the amplitude distribution across phase bins
phase_dist = amp_distributions[batch_idx, ch_idx, pha_idx, amp_idx]

# Calculate phase preference metrics
preferred_phase = phase_bin_centers[phase_dist.argmax()]
distribution_entropy = -torch.sum(phase_dist * torch.log(phase_dist + 1e-10))

print(f"Preferred phase: {preferred_phase * 180 / np.pi:.1f}°")
print(f"Distribution entropy: {distribution_entropy:.3f}")
```

Clinical Applications:

  • Seizure onset detection: Phase preference changes may precede visible PAC strength changes
  • Distribution shape analysis: Bimodal distributions indicate competing neural dynamics
  • Temporal tracking: Monitor distribution evolution for state transitions
  • Network synchronization: Compare distributions across frequency pairs
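
The distribution entropy computed above is one step away from the standard Tort et al. (2010) modulation index, which measures how far the amplitude distribution deviates from uniform across phase bins. A minimal NumPy sketch, assuming the distribution is normalized over the 18 phase bins:

```python
import numpy as np

def tort_mi(amp_dist):
    """Tort-style modulation index from an amplitude distribution over
    phase bins: MI = (log N - H) / log N, where H is the entropy."""
    p = np.asarray(amp_dist, dtype=float)
    p = p / p.sum()                          # ensure a probability distribution
    n = p.size
    h = -np.sum(p * np.log(p + 1e-10))       # distribution entropy
    return (np.log(n) - h) / np.log(n)

uniform = np.ones(18) / 18                   # no phase preference
peaked = np.eye(18)[0]                       # all amplitude in one phase bin
print(tort_mi(uniform), tort_mi(peaked))     # ~0 and ~1 respectively
```

A uniform distribution (no coupling) gives MI near 0; concentrating all amplitude in a single phase bin drives MI toward 1.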

🔧 Core Features

Flexible Frequency Band Configuration

  • Range-based: Specify frequency range and number of bands for automatic spacing
  • Direct specification: Define custom frequency bands for precise control
  • Standard bands: Compatible with theta, alpha, beta, gamma conventions
  • High resolution: Support for 50+ bands for detailed analysis
  • Band access: Direct access to frequency band definitions via pac.pha_bands_hz and pac.amp_bands_hz properties
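
One plausible reading of the range-based configuration is splitting the range into contiguous, linearly spaced bands; gPAC's exact spacing (overlap, center-based definitions) may differ, so treat this NumPy sketch as an assumption rather than the library's implementation:

```python
import numpy as np

def linear_bands(range_hz, n_bands):
    """Split [low, high] into n_bands contiguous, linearly spaced bands.
    Returns an array of shape (n_bands, 2) holding [low, high] per band."""
    edges = np.linspace(range_hz[0], range_hz[1], n_bands + 1)
    return np.stack([edges[:-1], edges[1:]], axis=1)

print(linear_bands((2, 20), 10))
# First band starts at 2.0 Hz, last band ends at 20.0 Hz
```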

GPU Optimization

  • Multi-GPU support: Automatic data parallelism across GPUs
  • FP16 mode: Half-precision computation for 2x memory efficiency
  • Torch compilation: JIT compilation for additional speedup
  • Batch processing: Efficient handling of multiple signals

Scientific Features

  • Permutation testing: Statistical validation with n_perm surrogates
  • Z-score normalization: Automatic statistical significance testing
  • Modulation Index: Standard MI calculation with 18 phase bins
  • Full differentiability: Gradient support for deep learning applications
  • Amplitude distributions: Optional phase preference analysis for clinical applications
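
Permutation testing works by recomputing the coupling measure on surrogates in which the phase-amplitude relation has been destroyed, then z-scoring the observed value against that null distribution. A self-contained NumPy sketch of the idea; gPAC's actual surrogate scheme may differ (e.g. time-shifted rather than shuffled amplitudes):

```python
import numpy as np

def mi_zscore(phase, amp, n_perm=200, n_bins=18, seed=0):
    """Z-score an observed modulation index against shuffled surrogates."""
    rng = np.random.default_rng(seed)

    def mi(ph, am):
        # Bin instantaneous phase into n_bins and average amplitude per bin
        bins = ((ph + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        dist = np.array([am[bins == b].mean() if np.any(bins == b) else 0.0
                         for b in range(n_bins)])
        p = dist / dist.sum()
        h = -np.sum(p * np.log(p + 1e-10))
        return (np.log(n_bins) - h) / np.log(n_bins)

    observed = mi(phase, amp)
    # Randomly permute the amplitude series to break the coupling
    surrogates = np.array([mi(phase, rng.permutation(amp))
                           for _ in range(n_perm)])
    return (observed - surrogates.mean()) / (surrogates.std() + 1e-10)

t = np.arange(2048) / 512
phase = (2 * np.pi * 6 * t) % (2 * np.pi) - np.pi
amp = 1.0 + 0.8 * np.cos(phase)        # amplitude locked to phase
print(mi_zscore(phase, amp))           # large positive z for genuine coupling
```

Uncoupled data yields z-scores fluctuating around zero, so a threshold such as z > 2 gives an approximate significance test.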

🤝 Contributing

Contributions are welcome! Please see our contributing guidelines.

📖 Citation

If you use gPAC in your research, please cite:

@software{watanabe2025gpac,
  author = {Watanabe, Yusuke},
  title = {gPAC: GPU-Accelerated Phase-Amplitude Coupling},
  year = {2025},
  url = {https://github.com/ywatanabe1989/gPAC}
}

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • TensorPAC team for the reference implementation

Note: for a fair comparison with TensorPAC, use identical frequency bands, as demonstrated in ./benchmark/pac_values_comparison_with_tensorpac/generate_16_comparison_pairs.py.
