PyGPUkit — Lightweight GPU Runtime for Python
A minimal, modular GPU runtime with Rust-powered scheduler, NVRTC JIT compilation, and a clean NumPy-like API.
Documentation
| Guide | Description |
|---|---|
| Getting Started | Installation, quick start, basic usage |
| API Reference | Complete API documentation with examples |
| LLM Guide | SafeTensors, Tokenizer, GPT-2 model loading |
| Performance Tuning | TF32, FP16, CUTLASS optimization |
| Scheduler Guide | Multi-LLM concurrent execution |
Overview
PyGPUkit is a lightweight GPU runtime for Python that provides:
- Single-binary distribution — works with just GPU drivers, no CUDA Toolkit needed
- Rust-powered scheduler with admission control, QoS, and resource partitioning
- NVRTC JIT (optional) for custom kernel compilation
- A NumPy-like GPUArray type
- Kubernetes-inspired GPU scheduling (bandwidth + memory guarantees)
PyGPUkit aims to be the "micro-runtime for GPU computing": small, fast, and ideal for research, inference tooling, DSP, and real-time systems.
Note: PyGPUkit is NOT a PyTorch/CuPy replacement—it's a lightweight runtime for custom GPU workloads where full ML frameworks are overkill.
What's New in v0.2.7
CUTLASS Epilogue Fusion
Fused Linear + Bias + GELU operations using CUTLASS epilogue fusion for improved performance in transformer workloads.
import pygpukit as gpk
import numpy as np
# Create tensors
batch, in_feat, out_feat = 512, 768, 3072
input = gpk.from_numpy(np.random.randn(batch, in_feat).astype(np.float32))
weight = gpk.from_numpy(np.random.randn(out_feat, in_feat).astype(np.float32))
bias = gpk.from_numpy(np.random.randn(out_feat).astype(np.float32))
# Fused linear + bias + GELU (single kernel, no intermediate memory)
output = gpk.linear_bias_gelu(input, weight, bias)
Multi-SM CUTLASS Kernels
Runtime SM detection with optimized kernel variants (see the sketch after this list):
- SM80 (A100): 4-stage pipeline optimized for 48KB shared memory
- SM86+ (RTX 30xx/40xx, H100): 5-stage pipeline for 100KB+ shared memory
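The selection itself happens inside PyGPUkit's native backend; as a rough illustration only, here is a minimal Python sketch of how a pipeline stage count could be chosen from the detected compute capability. The function name and thresholds are illustrative assumptions, not PyGPUkit API:

# Illustrative sketch only: PyGPUkit performs this selection in its native
# backend; pick_pipeline_stages is a hypothetical stand-in, not a public API.
def pick_pipeline_stages(sm_major: int, sm_minor: int) -> int:
    sm = sm_major * 10 + sm_minor
    if sm >= 86:   # RTX 30xx/40xx, H100: 100KB+ shared memory per SM
        return 5   # deeper pipeline hides more global-memory latency
    if sm == 80:   # A100: variant tuned for a 48KB shared-memory budget
        return 4
    raise RuntimeError("Pre-Ampere GPUs (SM < 80) are not supported")

print(pick_pipeline_stages(8, 6))  # RTX 30xx -> 5 stages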
New Operations
| Operation | Description |
|---|---|
| gpk.transpose(a) | GPU-native matrix transpose |
| gpk.bias_add_inplace(out, bias) | In-place bias addition |
| gpk.linear_bias_gelu(x, w, b) | Fused linear + bias + GELU |
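A minimal sketch of the first two operations, assuming the signatures shown in the table and row-wise bias broadcasting (shapes are arbitrary):

import numpy as np
import pygpukit as gpk

a = gpk.from_numpy(np.random.randn(128, 64).astype(np.float32))
at = gpk.transpose(a)  # GPU-native transpose: (128, 64) -> (64, 128)

out = gpk.from_numpy(np.random.randn(32, 64).astype(np.float32))
bias = gpk.from_numpy(np.random.randn(64).astype(np.float32))
gpk.bias_add_inplace(out, bias)  # adds bias to each row of out, in place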
API Improvements
- Complete public API exports (all operations accessible via gpk.*)
- Consistent snake_case naming convention
- Full docstrings for all public functions
LLM Support
PyGPUkit includes built-in support for loading and running LLM models. See the LLM Guide for detailed documentation.
from pygpukit.llm import SafeTensorsFile, Tokenizer
# Load safetensors (memory-mapped, zero-copy)
st = SafeTensorsFile("model.safetensors")
print(f"Tensors: {st.num_tensors}, Size: {st.file_size / 1e9:.2f} GB")
# Tokenizer (HuggingFace format)
tok = Tokenizer("tokenizer.json")
ids = tok.encode("Hello, world!")
text = tok.decode(ids)
| Component | Description |
|---|---|
| SafeTensorsFile | Memory-mapped .safetensors loading |
| Tokenizer | BPE tokenizer (HuggingFace format) |
| GPT2Model | GPT-2 model (MLP-only MVP) |
| Linear, LayerNorm, MLP | Model building blocks |
What's New in v0.2.6
CUTLASS Backend (Default)
NVIDIA CUTLASS v4.3.0 is now the default GEMM backend, delivering optimized TensorCore performance out of the box.
| Feature | Description |
|---|---|
| TF32 TensorCore | 31+ TFLOPS for FP32 inputs (automatic) |
| FP16 TensorCore | 63 TFLOPS |
| BF16 TensorCore | 63 TFLOPS |
| Zero Config | No environment variables needed |
import pygpukit as gpk
import numpy as np
# CUTLASS TF32 is automatic for FP32 (31+ TFLOPS)
a = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float32))
b = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float32))
c = a @ b # Uses CUTLASS TF32 TensorCore
# For full FP32 precision (no TF32), set:
# PYGPUKIT_NO_TF32=1
Multi-LLM Concurrent Execution
Run multiple AI models (LLM, TTS, Vision) concurrently on a single GPU with independent CUDA streams and VRAM budgets.
| Feature | Description |
|---|---|
| Execution Control | User controls execution order |
| Stream Isolation | No implicit sync between streams |
| VRAM Budgeting | Safe memory sharing per model |
| Concurrent Safety | Models running simultaneously do not break each other |
| asyncio Integration | Native Python async/await support |
Note: On a single GPU, Multi-LLM scheduling enables concurrent execution, not faster execution, for compute-bound workloads. Speedup benefits apply to I/O-bound workloads or multi-GPU setups.
import asyncio
from pygpukit.scheduler import (
create_context, context_session, GB, initialize
)
# Create execution contexts with VRAM budgets
initialize(device_id=0)
llm_ctx = create_context("llm", max_vram=4 * GB)
tts_ctx = create_context("tts", max_vram=2 * GB)
async def run_parallel():
    async with context_session(llm_ctx), context_session(tts_ctx):
        # Run models concurrently with asyncio.gather
        # (run_llm_inference and run_tts_synthesis are user-defined coroutines)
        llm_task = asyncio.create_task(run_llm_inference())
        tts_task = asyncio.create_task(run_tts_synthesis())
        text, audio = await asyncio.gather(llm_task, tts_task)
        return text, audio
result = asyncio.run(run_parallel())
FP16/BF16 TensorCore (via CUTLASS)
| Feature | Description |
|---|---|
| FP16 TensorCore | 63 TFLOPS (automatic via CUTLASS) |
| BF16 TensorCore | 63 TFLOPS (automatic via CUTLASS) |
| FP32 Accumulation | Numerical stability maintained |
import pygpukit as gpk
import numpy as np
# FP16 TensorCore matmul (63 TFLOPS on RTX 3090 Ti)
# No environment variable needed - CUTLASS is automatic
a = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float16))
b = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float16))
c = a @ b # Uses CUTLASS TensorCore
Note: CUTLASS requires matrix dimensions divisible by 16.
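If your shapes are not 16-aligned, one workaround (an assumption on our part, not a documented PyGPUkit feature) is to zero-pad on the host before upload; zero padding leaves the product unchanged in the top-left block:

import numpy as np
import pygpukit as gpk

def pad_to_16(x: np.ndarray) -> np.ndarray:
    # Zero-pad each dimension up to the next multiple of 16 so the
    # CUTLASS TensorCore path applies; trim the result afterwards.
    pads = [(0, (-d) % 16) for d in x.shape]
    return np.pad(x, pads)

a = np.random.randn(1000, 1000).astype(np.float16)  # 1000 is not 16-aligned
b = np.random.randn(1000, 1000).astype(np.float16)
c = (gpk.from_numpy(pad_to_16(a)) @ gpk.from_numpy(pad_to_16(b))).to_numpy()
c = c[:1000, :1000]  # drop the padding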
What's New in v0.2.5
FP16 / BF16 Support
| Feature | Description |
|---|---|
| FP16 (float16) | Half-precision floating point |
| BF16 (bfloat16) | Brain floating point (better dynamic range) |
| FP32 Accumulation | Numerical stability via FP32 intermediate |
| Type Conversion | astype() for seamless dtype conversion |
import pygpukit as gpk
import numpy as np
# FP16 operations
a = gpk.from_numpy(np.random.randn(1024, 1024).astype(np.float16))
b = gpk.from_numpy(np.random.randn(1024, 1024).astype(np.float16))
c = a @ b # FP16 matmul
# BF16 operations
arr = np.random.randn(1024, 1024).astype(np.float32)
a_bf16 = gpk.from_numpy(arr).astype(gpk.bfloat16)
b_bf16 = gpk.from_numpy(arr).astype(gpk.bfloat16)
c_bf16 = a_bf16 @ b_bf16 # BF16 matmul
result = c_bf16.astype(gpk.float32) # Convert back to FP32
Reduction Operations
| Operation | Description |
|---|---|
| gpk.sum(a) | Sum of all elements |
| gpk.mean(a) | Mean of all elements |
| gpk.max(a) | Maximum element |
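For example (assuming each reduction returns a scalar-like result; the exact return type is covered in the API reference):

import numpy as np
import pygpukit as gpk

a = gpk.from_numpy(np.arange(6, dtype=np.float32).reshape(2, 3))
print(gpk.sum(a))   # 15.0
print(gpk.mean(a))  # 2.5
print(gpk.max(a))   # 5.0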
Operator Overloads
# a and b are GPUArrays of matching shape (e.g. created via gpk.from_numpy)
c = a + b # Element-wise add
c = a - b # Element-wise subtract
c = a * b # Element-wise multiply
c = a / b # Element-wise divide
c = a @ b # Matrix multiplication
What's New in v0.2.4
Single-Binary Distribution
| Feature | Description |
|---|---|
| Driver-only mode | Only nvcuda.dll (GPU driver) required |
| Dynamic NVRTC | JIT loaded at runtime, optional |
| No cudart dependency | Eliminated CUDA Runtime dependency |
| Smaller wheel | No bundled DLLs |
import pygpukit as gp
# Works with just GPU drivers!
print(f"CUDA: {gp.is_cuda_available()}") # True (if GPU driver installed)
print(f"NVRTC: {gp.is_nvrtc_available()}") # True (if CUDA Toolkit installed)
print(f"NVRTC Path: {gp.get_nvrtc_path()}") # Path to NVRTC DLL (if available)
TF32 TensorCore GEMM
| Feature | Description |
|---|---|
| PTX mma.sync | Direct TensorCore access via inline PTX assembly |
| cp.async Pipeline | Double-buffered async memory transfers |
| TF32 Precision | 10-bit mantissa (19 bits total, vs FP32's 23-bit mantissa), ~0.1% per-op error |
| SM 80+ Required | Ampere architecture (RTX 30XX or newer) |
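Since TF32 trades a little precision for speed, the opt-out is worth knowing. A sketch using the documented PYGPUKIT_NO_TF32 variable; setting it before import is an assumption on our side to be safe, since the docs don't say whether a later change takes effect:

import os
os.environ["PYGPUKIT_NO_TF32"] = "1"  # opt out of TF32 before any kernels run

import numpy as np
import pygpukit as gpk

a = gpk.from_numpy(np.random.randn(2048, 2048).astype(np.float32))
b = gpk.from_numpy(np.random.randn(2048, 2048).astype(np.float32))
c = a @ b  # full-precision FP32 path, no TF32 rounding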
Performance
Benchmark Comparison (RTX 3090 Ti, 8192×8192)
| Library | FP32 | TF32 | FP16 | BF16 | Requirements |
|---|---|---|---|---|---|
| NumPy (OpenBLAS) | ~0.8 TFLOPS | — | — | — | CPU only |
| cuBLAS | ~21 TFLOPS | ~59 TFLOPS | ~75 TFLOPS | ~83 TFLOPS | CUDA Toolkit |
| PyGPUkit (CUTLASS) | 18 TFLOPS | 31 TFLOPS | 63 TFLOPS | 63 TFLOPS | GPU drivers only |
Built-in matmul kernels are pre-compiled. Driver-Only and Full (JIT) modes have identical matmul performance. JIT is only needed for custom kernels.
PyGPUkit Performance by Matrix Size
| Matrix Size | FP32 (NO_TF32) | TF32 (CUTLASS) | FP16 (CUTLASS) | BF16 (CUTLASS) |
|---|---|---|---|---|
| 2048×2048 | 9.6 TFLOPS | 13 TFLOPS | 15 TFLOPS | 21 TFLOPS |
| 4096×4096 | 14.7 TFLOPS | 22 TFLOPS | 44 TFLOPS | 44 TFLOPS |
| 8192×8192 | 18 TFLOPS | 31 TFLOPS | 63 TFLOPS | 63 TFLOPS |
Note: CUTLASS is automatic for compatible sizes (16-aligned). Use PYGPUKIT_NO_TF32=1 for full FP32 precision.
Installation
pip install pygpukit
From source:
git clone https://github.com/m96-chan/PyGPUkit
cd PyGPUkit
pip install -e .
Requirements
- Python 3.10+
- NVIDIA GPU with drivers installed
- Optional: CUDA Toolkit (for JIT compilation of custom kernels)
Note: NVRTC (NVIDIA Runtime Compiler) is included in CUDA Toolkit. Pre-compiled GPU operations (matmul, add, mul, etc.) work with just GPU drivers.
Supported GPUs
- RTX 30XX series (Ampere, SM 80+) and above
- Older GPUs (RTX 20XX, GTX 10XX, etc.) are NOT supported (SM < 80)
Runtime Modes
| Mode | Requirements | Features |
|---|---|---|
| Full JIT | GPU drivers + CUDA Toolkit | All features including custom kernels |
| Pre-compiled | GPU drivers only | Built-in ops (matmul, add, mul) |
| CPU simulation | None | Testing/development without GPU |
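You can detect which mode you are running in with the availability checks shown earlier:

import pygpukit as gp

if not gp.is_cuda_available():
    mode = "CPU simulation"   # no GPU driver found
elif gp.is_nvrtc_available():
    mode = "Full JIT"         # custom kernels available
else:
    mode = "Pre-compiled"     # built-in ops only
print(f"Runtime mode: {mode}")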
Quick Start
Basic Operations
import pygpukit as gp
# Allocate arrays
x = gp.zeros((1024, 1024), dtype="float32")
y = gp.ones((1024, 1024), dtype="float32")
# Operations
z = gp.add(x, y)
w = gp.matmul(x, y)
# CPU <-> GPU transfer
arr = z.to_numpy()
garr = gp.from_numpy(arr)
Custom JIT Kernel (requires CUDA Toolkit)
src = '''
extern "C" __global__
void scale(float* x, float factor, int n) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx < n) x[idx] *= factor;
}
'''
# x is the GPUArray allocated in the Basic Operations example above
if gp.is_nvrtc_available():
    kernel = gp.jit(src, func="scale")
    kernel(x, factor=0.5, n=x.size)
else:
    print("JIT not available. Using pre-compiled ops.")
Rust Scheduler
import _pygpukit_rust as rust
# Memory Pool with LRU eviction
pool = rust.MemoryPool(quota=100 * 1024 * 1024, enable_eviction=True)
block = pool.allocate(4096)
# QoS-aware task scheduling
evaluator = rust.QosPolicyEvaluator(total_memory=8*1024**3, total_bandwidth=1.0)
task = rust.QosTaskMeta.guaranteed("task-1", "Critical Task", 256*1024*1024)
result = evaluator.evaluate(task)
# GPU Partitioning
manager = rust.PartitionManager(rust.PartitionConfig(total_memory=8*1024**3))
manager.create_partition("inference", "Inference",
rust.PartitionLimits().memory(4*1024**3).compute(0.5))
Features
Core Infrastructure (Rust)
| Feature | Description |
|---|---|
| Memory Pool | LRU eviction, size-class free lists |
| Scheduler | Priority queue, memory reservation |
| Transfer Engine | Separate H2D/D2H streams, priority |
| Kernel Dispatch | Per-stream limits, lifecycle tracking |
Advanced Scheduler
| Feature | Description |
|---|---|
| Admission Control | Deterministic admission, quota enforcement |
| QoS Policy | Guaranteed/Burstable/BestEffort tiers |
| Kernel Pacing | Bandwidth-based throttling per stream |
| GPU Partitioning | Resource isolation, multi-tenant support |
| Multi-LLM Execution | Concurrent AI model execution with stream isolation |
| asyncio Integration | Native Python async/await for concurrent inference |
Project Goals
- Provide the smallest usable GPU runtime for Python
- Expose GPU scheduling (bandwidth, memory, partitioning)
- Make writing custom GPU kernels easy
- Serve as a building block for inference engines, DSP systems, and real-time workloads
Project Structure
PyGPUkit/
  src/pygpukit/        # Python API (NumPy-compatible)
  native/              # C++ backend (CUDA Driver API, NVRTC)
  rust/                # Rust backend (memory pool, scheduler)
    pygpukit-core/     # Pure Rust core logic
    pygpukit-python/   # PyO3 bindings
  examples/            # Demo scripts
  tests/               # Test suite
Roadmap
Released
| Version | Highlights |
|---|---|
| v0.1 | GPUArray, NVRTC JIT, add/mul/matmul, wheels |
| v0.2.0 | Rust scheduler (QoS, partitioning), memory pool (LRU), 106 tests |
| v0.2.1 | API stabilization, error propagation |
| v0.2.2 | Ampere SGEMM (cp.async, float4), 18 TFLOPS FP32 |
| v0.2.3 | TF32 TensorCore (PTX mma.sync), 28 TFLOPS |
| v0.2.4 | Single-binary distribution, dynamic NVRTC, driver-only mode |
| v0.2.5 | FP16/BF16 support, reduction ops, operator overloads, TF32 v2 (~30 TFLOPS) |
| v0.2.6 | CUTLASS backend (31 TFLOPS TF32, 63 TFLOPS FP16/BF16), Multi-LLM concurrent execution |
| v0.2.7 | Epilogue fusion (linear+bias+gelu), Multi-SM kernels, API review |
Planned
| Version | Goals |
|---|---|
| v0.3 | Triton backend, advanced ops (softmax), MPS/MIG |
API Stability & Backward Compatibility
Version Policy
- v0.2.x: Backward compatible within minor versions. New features may be added, but existing APIs remain stable.
- v0.3+: May introduce breaking changes with deprecation warnings in prior version.
Stable Public API (v0.2.x)
All functions exported via pygpukit.* are part of the stable public API:
| Category | Functions |
|---|---|
| Factory | zeros, ones, empty, from_numpy |
| Elementwise | add, sub, mul, div |
| Math | exp, log, relu, gelu |
| Matrix | matmul, transpose |
| Reductions | sum, mean, max |
| Neural | layernorm, bias_add_inplace, linear_bias_gelu |
| Types | GPUArray, DataType, float32, float64, float16, bfloat16 |
| LLM | llm.SafeTensorsFile, llm.Tokenizer, llm.GPT2Model, llm.Linear |
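A quick tour of a few of these (a sketch; it assumes the elementwise math functions take a single GPUArray, consistent with the factory and elementwise examples above):

import numpy as np
import pygpukit as gpk

x = gpk.from_numpy(np.random.randn(256, 256).astype(np.float32))
y = gpk.relu(x)              # elementwise max(x, 0)
z = gpk.gelu(x)              # elementwise GELU activation
total = gpk.sum(gpk.exp(x))  # reductions compose with math ops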
Deprecation Policy
APIs to be removed will emit DeprecationWarning for at least one minor version before removal.
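For example, to fail fast on deprecated usage in a test suite (standard Python warning machinery, nothing PyGPUkit-specific):

import warnings
# Escalate DeprecationWarning to an error so deprecated calls fail CI.
warnings.simplefilter("error", DeprecationWarning)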
Contributing
Contributions and discussions are welcome! Please open Issues for feature requests, bugs, or design proposals.
License
MIT License
Acknowledgements
Inspired by: CUDA Runtime, NVRTC, PyCUDA, CuPy, Triton
PyGPUkit aims to fill the gap for a tiny, embeddable GPU runtime for Python.
File details
Details for the file pygpukit-0.2.7.tar.gz.
File metadata
- Download URL: pygpukit-0.2.7.tar.gz
- Size: 34.0 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e52dd1d741359c09a1126e166dfddcaea44f071c4806d9a85771aaa1552d78d0 |
| MD5 | 21bcdf3462c806d053728e7953fef04e |
| BLAKE2b-256 | 5806b0b0bd59767e99a9133e10cdb5664325f32bbb7bebfbb9d20cb45574dd54 |
Provenance
The following attestation bundles were made for pygpukit-0.2.7.tar.gz:
- Publisher: release.yml on m96-chan/PyGPUkit
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pygpukit-0.2.7.tar.gz
- Subject digest: e52dd1d741359c09a1126e166dfddcaea44f071c4806d9a85771aaa1552d78d0
- Sigstore transparency entry: 766005481
- Permalink: m96-chan/PyGPUkit@4bdb29cf6fb13afc72d46d7910a7bf61d1975cd9
- Branch / Tag: refs/tags/v0.2.7
- Owner: https://github.com/m96-chan
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4bdb29cf6fb13afc72d46d7910a7bf61d1975cd9
- Trigger Event: push
File details
Details for the file pygpukit-0.2.7-cp312-cp312-win_amd64.whl.
File metadata
- Download URL: pygpukit-0.2.7-cp312-cp312-win_amd64.whl
- Size: 2.7 MB
- Tags: CPython 3.12, Windows x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 563248c3c28193ca1ac8ba16fb0ad4cdd5377b9c9eaf78195957b735f9d8ee74 |
| MD5 | a43f506bd503705257041bbb4020f1c7 |
| BLAKE2b-256 | 85c1977b70b9a03290587d2277bea33a22a8b0b1f35425da203a03230e508e24 |
Provenance
The following attestation bundles were made for pygpukit-0.2.7-cp312-cp312-win_amd64.whl:
- Publisher: release.yml on m96-chan/PyGPUkit
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pygpukit-0.2.7-cp312-cp312-win_amd64.whl
- Subject digest: 563248c3c28193ca1ac8ba16fb0ad4cdd5377b9c9eaf78195957b735f9d8ee74
- Sigstore transparency entry: 766005483
- Permalink: m96-chan/PyGPUkit@4bdb29cf6fb13afc72d46d7910a7bf61d1975cd9
- Branch / Tag: refs/tags/v0.2.7
- Owner: https://github.com/m96-chan
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4bdb29cf6fb13afc72d46d7910a7bf61d1975cd9
- Trigger Event: push
File details
Details for the file pygpukit-0.2.7-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl.
File metadata
- Download URL: pygpukit-0.2.7-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl
- Size: 3.0 MB
- Tags: CPython 3.12, manylinux: glibc 2.34+ x86-64, manylinux: glibc 2.35+ x86-64
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 49292a31ff6dba6856f8b5882513e0bd7041ee15e0053e2c11e82b9eae54fe60 |
| MD5 | 5050dc96400ab09f0313a92157e80c06 |
| BLAKE2b-256 | 46ea2f32ae9b6eb81512f78c4b7e3407c2b51c1bf3050331c24d2f1da80651e6 |
Provenance
The following attestation bundles were made for pygpukit-0.2.7-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl:
- Publisher: release.yml on m96-chan/PyGPUkit
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pygpukit-0.2.7-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl
- Subject digest: 49292a31ff6dba6856f8b5882513e0bd7041ee15e0053e2c11e82b9eae54fe60
- Sigstore transparency entry: 766005484
- Permalink: m96-chan/PyGPUkit@4bdb29cf6fb13afc72d46d7910a7bf61d1975cd9
- Branch / Tag: refs/tags/v0.2.7
- Owner: https://github.com/m96-chan
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4bdb29cf6fb13afc72d46d7910a7bf61d1975cd9
- Trigger Event: push