GPU-accelerated neural network operations using Vulkan compute shaders
Project description
Grilly
GPU-accelerated neural network framework using Vulkan compute shaders. Supports AMD, NVIDIA, and Intel GPUs.
Release Status
- Current release line: v0.3.0
- Package name: grilly
- Python support: >=3.10
- Release channel: PyPI
Versioning note:
- The repository may publish timestamped patch builds within the 0.3.0 line (for example 0.3.0.<build>), while keeping documentation and release notes aligned to v0.3.0.
Features
Neural Network Operations
- Feedforward Networks: Linear layers, activations (ReLU, GELU, SiLU, SoftMax, SwiGLU, RoSwish, GCU)
- Convolutional Networks: Conv2D, MaxPool2D, AvgPool2D, BatchNorm2D (forward and backward)
- Recurrent Networks: LSTM cells
- Attention Mechanisms: Flash Attention 2, multi-head attention, RoPE, prosody modulation
- Normalization: LayerNorm, RMSNorm, BatchNorm
- Activations: GELU, SiLU, ReLU, SoftMax, SoftPlus, SwiGLU, GEGLU, ReGLU, RoSwish, GCU
- Fused Operations: Linear+activation fusion, QKV projection, layer normalization+linear
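The exact formulation grilly's SwiGLU shader uses is not documented here; the standard definition (gate one half of the feature dimension with the SiLU of the other half) can be sketched in NumPy as a reference:

```python
import numpy as np

def silu(x):
    # SiLU (swish): x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu(x):
    # Standard SwiGLU: split the last dimension in half and
    # use one half as a SiLU gate for the other.
    a, b = np.split(x, 2, axis=-1)
    return silu(a) * b

x = np.random.randn(32, 128).astype(np.float32)
y = swiglu(x)
print(y.shape)  # (32, 64) - SwiGLU halves the feature dimension
```

Note that SwiGLU halves the feature dimension, which matters when sizing the preceding linear layer.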
Spiking Neural Networks
- Neuron Models: LIF (Leaky Integrate-and-Fire), GIF (Generalized Integrate-and-Fire)
- Learning: STDP (Spike-Timing-Dependent Plasticity), Hebbian learning
- Synaptic Dynamics: Forward propagation, STDP traces, weight updates
- Bridges: Continuous-to-spike, spike-to-continuous conversion
- Operations: SNN matmul, softmax, readout, expert readout
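As a reference for what a LIF step computes, here is a minimal NumPy sketch matching the `lif_step` signature used in the Quick Start below. The refractory handling, reset rule, and time units are assumptions; grilly's shader may differ in detail:

```python
import numpy as np

def lif_step(input_current, membrane, refractory,
             dt=0.001, tau_mem=20.0, v_thresh=1.0, t_refrac=0.002):
    # Leaky integration toward the input, only for non-refractory neurons.
    active = refractory <= 0.0
    decay = dt / tau_mem
    membrane = np.where(active,
                        membrane + decay * (input_current - membrane),
                        membrane)
    # Threshold crossing emits a spike, resets the membrane,
    # and starts the refractory period.
    spikes = (active & (membrane >= v_thresh)).astype(np.float32)
    membrane = np.where(spikes > 0, 0.0, membrane)
    refractory = np.where(spikes > 0, t_refrac,
                          np.maximum(refractory - dt, 0.0))
    return membrane, refractory, spikes
```

Each call advances the whole population by one timestep; running it in a loop over input frames simulates the network.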
Memory & Retrieval
- Memory Operations: Read, write, context aggregation
- Memory Injection: Concatenation, gating, residual connections
- Capsule Networks: Capsule projection, dentate gyrus sparse expansion
- FAISS Integration: Distance computation, top-k selection, IVF filtering, quantization, k-means
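The distance and top-k primitives compute the same result as this brute-force NumPy sketch (squared-L2 metric assumed; grilly may also support inner-product):

```python
import numpy as np

def compute_l2_distances(query, database):
    # Squared L2 distance between every query row and every database row.
    diff = query[:, None, :] - database[None, :, :]
    return np.sum(diff * diff, axis=-1)  # shape (n_queries, n_database)

def topk_smallest(distances, k):
    # k smallest distances per query, sorted ascending.
    idx = np.argpartition(distances, k - 1, axis=-1)[:, :k]
    vals = np.take_along_axis(distances, idx, axis=-1)
    order = np.argsort(vals, axis=-1)
    return (np.take_along_axis(vals, order, axis=-1),
            np.take_along_axis(idx, order, axis=-1))
```

The GPU versions parallelize this over the database; the IVF filtering listed above additionally restricts the scan to a shortlist of inverted-list cells.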
Learning Algorithms
- Optimization: Adam, natural gradients, Fisher information matrix
- Continual Learning: EWC (Elastic Weight Consolidation), Fisher penalties
- Adaptive Filtering: NLMS (Normalized Least Mean Squares), ensemble, prediction
- Regularization: Dropout, whitening transforms
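The NLMS update is simple enough to state exactly; this NumPy sketch shows one filter step (the `mu` and `eps` defaults are illustrative, not grilly's):

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    # Normalized LMS: LMS with the step size scaled by input power,
    # which makes convergence insensitive to the input's magnitude.
    y = np.dot(w, x)   # filter output
    e = d - y          # prediction error
    w = w + mu * e * x / (eps + np.dot(x, x))
    return w, e
```

Iterating this against a desired signal drives `w` toward the underlying filter.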
Specialized Operations
- Place & Time Cells: Spatial encoding, temporal encoding, theta-gamma oscillations
- FFT: Bit-reversal, butterfly operations, magnitude, power spectrum
- Domain Adaptation: Domain classification, routing, expert combination
- Embeddings: Lookup, position encoding, attention, FFN, pooling, normalization
- Loss Functions: Cross-entropy, BCE, contrastive loss
- Semantic Encoding: Affect MLP, affective processing
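For reference, the power spectrum the FFT shaders produce corresponds to this NumPy computation (signal parameters here are illustrative):

```python
import numpy as np

fs = 1000.0                         # sample rate, Hz
t = np.arange(1024) / fs
signal = np.sin(2 * np.pi * 50.0 * t).astype(np.float32)

# Real FFT, power per frequency bin, and the bin frequencies.
spectrum = np.fft.rfft(signal)
power = (np.abs(spectrum) ** 2) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
peak = freqs[np.argmax(power)]      # close to the 50 Hz input tone
```

The bit-reversal and butterfly operations listed above are the standard radix-2 building blocks of this transform, just executed on the GPU.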
Transformer Support
- Architecture-Specific Optimizations: BERT, GPT, T5, RoBERTa, DistilBERT, MPNet, XLM-RoBERTa, ALBERT
- HuggingFace Bridge: Load pre-trained models without PyTorch runtime
- Model Components: Multi-head attention, positional encoding, layer normalization
- Fine-Tuning: LoRA (Low-Rank Adaptation), gradient checkpointing
LoRA Fine-Tuning
- Parameter-efficient fine-tuning for transformers
- Backward pass support for LoRA layers
- Memory-efficient training on 12GB VRAM
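A LoRA layer replaces each frozen linear weight with the base weight plus a scaled low-rank update. This NumPy sketch shows the standard forward pass (shapes and the `alpha` scaling follow the original LoRA formulation; grilly's internal layout may differ):

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=16.0):
    # W: frozen base weight, shape (out, in).
    # A: trainable down-projection, shape (r, in).
    # B: trainable up-projection, shape (out, r), initialized to zero
    #    so training starts from the unmodified base model.
    r = A.shape[0]
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)
```

Only `A` and `B` receive gradients, which is why fine-tuning fits in 12GB of VRAM: the optimizer state covers just the rank-r factors, not the full weight matrices.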
Installation
From PyPI
pip install grilly
From Source
git clone https://github.com/grillcheese-ai/grilly.git
cd grilly
make install
# Or with development dependencies
make install-dev
# Or manually
pip install -e .
Requirements
- Python >= 3.10
- Vulkan drivers
- NumPy >= 1.24.0
- Supported GPUs: AMD (tested on RX 6750 XT), NVIDIA, Intel Arc
Quick Start
import grilly
import numpy as np
# Initialize compute backend
backend = grilly.Compute()
# Spiking neural network example
input_current = np.random.randn(1000).astype(np.float32)
membrane = np.zeros(1000, dtype=np.float32)
refractory = np.zeros(1000, dtype=np.float32)
membrane, refractory, spikes = backend.snn.lif_step(
input_current, membrane, refractory,
dt=0.001, tau_mem=20.0, v_thresh=1.0
)
# Feedforward network example
x = np.random.randn(32, 384).astype(np.float32)
weight = np.random.randn(384, 128).astype(np.float32)
bias = np.zeros(128, dtype=np.float32)
output = backend.fnn.linear(x, weight, bias)
activated = backend.fnn.swiglu(output)
# Flash Attention 2
q = np.random.randn(32, 8, 64, 64).astype(np.float32) # (batch, heads, seq, dim)
k = np.random.randn(32, 8, 64, 64).astype(np.float32)
v = np.random.randn(32, 8, 64, 64).astype(np.float32)
attention_out = backend.attention.flash_attention2(q, k, v)
# FAISS similarity search
query = np.random.randn(1, 384).astype(np.float32)
database = np.random.randn(10000, 384).astype(np.float32)
distances = backend.faiss.compute_distances(query, database)
top_k_distances, top_k_indices = backend.faiss.topk(distances, k=10)
API Reference
Core Interfaces
- grilly.Compute() - Main compute backend (alias for VulkanCompute)
- grilly.SNNCompute() - High-level spiking neural network interface
- grilly.Learning() - Learning algorithms (EWC, NLMS, etc.)
Backend Namespaces
- backend.snn.* - Spiking neural network operations
- backend.fnn.* - Feedforward network operations
- backend.attention.* - Attention mechanisms
- backend.memory.* - Memory operations
- backend.faiss.* - Vector similarity search
- backend.learning.* - Learning algorithms
- backend.cells.* - Place and time cells
Shader Statistics
- Total GLSL shaders: 137
- Compiled SPIR-V shaders: 138
- Categories: 12+ operation types
Compiling Shaders
Shaders are pre-compiled and included. To recompile:
# Compile all shaders (cross-platform)
make compile-shaders
# Verify compilation
make verify-shaders
# Or manually:
# Windows: .\scripts\compile_all_shaders.ps1
# Linux/Mac: ./compile_shaders.sh
# Single shader
glslc shader.glsl -o spv/shader.spv
GPU Selection
# Set GPU index (if multiple GPUs)
export VK_GPU_INDEX=0
# Enable debug logging
export GRILLY_DEBUG=1
# Allow CPU fallback
export ALLOW_CPU_VULKAN=1
Testing
# All tests
make test
# CPU-only tests (skip GPU)
make test-cpu
# GPU tests only
make test-gpu
# With coverage report
make test-coverage
# Or use pytest directly
pytest grilly/tests/ -v
Architecture
Grilly uses Vulkan compute shaders for cross-platform GPU acceleration. Each operation is implemented as a GLSL compute shader compiled to SPIR-V bytecode.
Design Principles
- Pure Vulkan backend (no CUDA dependency)
- Hardware-agnostic (AMD, NVIDIA, Intel)
- Zero-copy GPU memory operations
- Minimal CPU-GPU transfers
- CPU fallback for unsupported operations
Performance
Tested on AMD RX 6750 XT (12GB VRAM):
- LIF neuron simulation: 1M neurons at >1000 FPS
- Flash Attention 2: 32 batch, 8 heads, 512 seq length at ~50ms
- FAISS top-k: 10K vectors, 384D, k=10 at ~5ms
Examples
See examples/ directory for detailed usage:
- Transformer fine-tuning with LoRA
- Spiking neural network training
- FAISS similarity search
- Continual learning with EWC
Development
Quick Start
# Clone and setup
git clone https://github.com/grillcheese-ai/grilly.git
cd grilly
# Install with dev dependencies
make install-dev
# Run tests
make test
# Format code
make format
# Run linters
make lint
# Build package
make build
Project Structure
grilly/
├── backend/ # Vulkan backend implementation
├── nn/ # High-level neural network modules
├── shaders/ # GLSL compute shaders
│ └── spv/ # Compiled SPIR-V bytecode
├── tests/ # Test suite
├── utils/ # HuggingFace bridge, utilities
└── Makefile # Build automation
Makefile Commands
Run make help to see all available commands:
- make install - Install package
- make test - Run tests
- make compile-shaders - Compile shaders
- make build - Build distribution
- make format - Format code
- make lint - Run linters
- make clean - Clean build artifacts
Publishing to PyPI
Use the release script:
# from repository root
powershell -ExecutionPolicy Bypass -File .\scripts\publish_pypi.ps1
TestPyPI dry run:
powershell -ExecutionPolicy Bypass -File .\scripts\publish_pypi.ps1 -TestPyPI
Required environment variables:
- PYPI_API_TOKEN (for PyPI)
- TEST_PYPI_API_TOKEN (optional, if using -TestPyPI)
Contributing
- Fork the repository
- Create a feature branch
- Add tests for new features
- Run make check to verify
- Submit a pull request
License
MIT License - see LICENSE file for details.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file grilly-0.3.0.20260207103851.tar.gz.
File metadata
- Download URL: grilly-0.3.0.20260207103851.tar.gz
- Upload date:
- Size: 618.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 77270d8954d431d0d3efea187477ca4a41650df4a62f16e2cacc75a311a71bd7 |
| MD5 | 8340b35aef91c95a5941ae301e114d78 |
| BLAKE2b-256 | 5e9f1f7d6dd4855bbc6292278470428675447061e954a3137b5bd21850937465 |
File details
Details for the file grilly-0.3.0.20260207103851-py3-none-any.whl.
File metadata
- Download URL: grilly-0.3.0.20260207103851-py3-none-any.whl
- Upload date:
- Size: 848.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | fbf1181fcd87506a3fa66bd769015972e8ee30da63ce982024063b4cf71fce22 |
| MD5 | 40b112211423943424c3864f71ae4db7 |
| BLAKE2b-256 | f4f8d7fa743fe072bb868b124b9db3e97b675914c2e49d109352e9cd04cc2035 |