
PyGPUkit — Lightweight GPU Runtime for Python

A minimal, modular GPU runtime with Rust-powered scheduler, NVRTC JIT compilation, and a clean NumPy-like API.



Documentation

Guide | Description
Getting Started | Installation, quick start, basic usage
API Reference | Complete API documentation with examples
LLM Guide | SafeTensors, Tokenizer, GPT-2 model loading
Performance Tuning | TF32, FP16, CUTLASS optimization
Scheduler Guide | Multi-LLM concurrent execution

Overview

PyGPUkit is a lightweight GPU runtime for Python that provides:

  • Single-binary distribution — works with just GPU drivers, no CUDA Toolkit needed
  • Rust-powered scheduler with admission control, QoS, and resource partitioning
  • NVRTC JIT (optional) for custom kernel compilation
  • A NumPy-like GPUArray type
  • Kubernetes-inspired GPU scheduling (bandwidth + memory guarantees)

PyGPUkit aims to be the "micro-runtime for GPU computing": small, fast, and ideal for research, inference tooling, DSP, and real-time systems.

Note: PyGPUkit is NOT a PyTorch/CuPy replacement—it's a lightweight runtime for custom GPU workloads where full ML frameworks are overkill.


What's New in v0.2.7

CUTLASS Epilogue Fusion

Fused Linear + Bias + GELU operations using CUTLASS epilogue fusion for improved performance in transformer workloads.

import pygpukit as gpk
import numpy as np

# Create tensors (avoid shadowing the builtin `input`)
batch, in_feat, out_feat = 512, 768, 3072
x = gpk.from_numpy(np.random.randn(batch, in_feat).astype(np.float32))
weight = gpk.from_numpy(np.random.randn(out_feat, in_feat).astype(np.float32))
bias = gpk.from_numpy(np.random.randn(out_feat).astype(np.float32))

# Fused linear + bias + GELU (single kernel, no intermediate memory)
output = gpk.linear_bias_gelu(x, weight, bias)
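
For comparison, the unfused equivalent takes three kernel launches and materializes two intermediates in global memory. Below is a minimal sketch using the transpose, bias_add_inplace, and gelu operations from the public API, assuming the usual x @ W.T convention for linear layers:

# Unfused: matmul, then bias add, then GELU (two extra global-memory round trips)
h = gpk.matmul(x, gpk.transpose(weight))  # (batch, out_feat)
gpk.bias_add_inplace(h, bias)             # broadcast bias over rows, in place
unfused = gpk.gelu(h)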

Multi-SM CUTLASS Kernels

Runtime SM detection with architecture-optimized kernel variants:

Architecture | GPU Examples | Pipeline | Features
SM80 | A100 | 4-stage | 48KB shared memory
SM86 | RTX 3090, RTX 3080 | 5-stage | 100KB shared memory
SM89 | RTX 4090, RTX 4080 | 6-stage | Ada Lovelace optimizations
SM90 | H100 | CUTLASS 3.x | WGMMA/TMA instructions
SM100/120 | Blackwell (B100, B200) | CUTLASS 3.x | Next-gen TensorCore

Note: SM100+ (Blackwell) requires CUDA 13.x. Windows wheels include SM100/120 support.

New Operations

Operation | Description
gpk.transpose(a) | GPU-native matrix transpose
gpk.bias_add_inplace(out, bias) | In-place bias addition
gpk.linear_bias_gelu(x, w, b) | Fused linear + bias + GELU
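
A short usage sketch of the two standalone operations (shapes chosen for illustration; the bias is assumed to broadcast over rows):

import pygpukit as gpk
import numpy as np

a = gpk.from_numpy(np.random.randn(256, 512).astype(np.float32))
at = gpk.transpose(a)  # (512, 256), computed on the GPU

out = gpk.from_numpy(np.random.randn(64, 512).astype(np.float32))
bias = gpk.from_numpy(np.random.randn(512).astype(np.float32))
gpk.bias_add_inplace(out, bias)  # adds bias to each row without allocating a new array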

API Improvements

  • Complete public API exports (all operations accessible via gpk.*)
  • Consistent snake_case naming convention
  • Full docstrings for all public functions

LLM Support

PyGPUkit includes built-in support for loading and running LLM models. See the LLM Guide for detailed documentation.

from pygpukit.llm import SafeTensorsFile, Tokenizer

# Load safetensors (memory-mapped, zero-copy)
st = SafeTensorsFile("model.safetensors")
print(f"Tensors: {st.num_tensors}, Size: {st.file_size / 1e9:.2f} GB")

# Tokenizer (HuggingFace format)
tok = Tokenizer("tokenizer.json")
ids = tok.encode("Hello, world!")
text = tok.decode(ids)

Component | Description
SafeTensorsFile | Memory-mapped .safetensors loading
Tokenizer | BPE tokenizer (HuggingFace format)
GPT2Model | GPT-2 model (MLP-only MVP)
Linear, LayerNorm, MLP | Model building blocks
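
The LLM Guide documents the actual model-loading API; the sketch below only illustrates the intended flow, and from_safetensors / generate are placeholder names for illustration, not confirmed interfaces:

from pygpukit.llm import GPT2Model, Tokenizer

# Hypothetical flow -- real constructor/method names are in the LLM Guide
tok = Tokenizer("tokenizer.json")
model = GPT2Model.from_safetensors("model.safetensors")  # assumed constructor
ids = tok.encode("The quick brown fox")
out_ids = model.generate(ids, max_new_tokens=16)         # assumed method
print(tok.decode(out_ids))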

What's New in v0.2.6

CUTLASS Backend (Default)

NVIDIA CUTLASS v4.3.0 is now the default GEMM backend, delivering optimized TensorCore performance out of the box.

Feature | Description
TF32 TensorCore | 31+ TFLOPS for FP32 inputs (automatic)
FP16 TensorCore | 63 TFLOPS
BF16 TensorCore | 63 TFLOPS
Zero Config | No environment variables needed

import pygpukit as gpk
import numpy as np

# CUTLASS TF32 is automatic for FP32 (31+ TFLOPS)
a = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float32))
b = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float32))
c = a @ b  # Uses CUTLASS TF32 TensorCore

# For full FP32 precision (no TF32), set:
# PYGPUKIT_NO_TF32=1
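
A quick way to gauge the TF32 trade-off is to compare against a NumPy FP32 reference (a sketch; TF32's 19-bit mantissa gives roughly 0.1% per-op error, per the table in the v0.2.4 notes below):

a_np = np.random.randn(1024, 1024).astype(np.float32)
b_np = np.random.randn(1024, 1024).astype(np.float32)
c_gpu = (gpk.from_numpy(a_np) @ gpk.from_numpy(b_np)).to_numpy()
c_ref = a_np @ b_np
rel_err = np.abs(c_gpu - c_ref).max() / np.abs(c_ref).max()
print(f"max relative error: {rel_err:.2e}")  # expect around 1e-3 with TF32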

Multi-LLM Concurrent Execution

Run multiple AI models (LLM, TTS, Vision) concurrently on a single GPU with independent CUDA streams and VRAM budgets.

Feature | Description
Execution Control | User controls execution order
Stream Isolation | No implicit sync between streams
VRAM Budgeting | Safe memory sharing per model
Concurrent Safety | "Running simultaneously doesn't break"
asyncio Integration | Native Python async/await support

Note: On a single GPU, Multi-LLM scheduling enables concurrent execution, not faster execution, for compute-bound workloads. Speedup benefits apply to I/O-bound workloads or multi-GPU setups.

import asyncio
from pygpukit.scheduler import (
    create_context, context_session, GB, initialize
)

# Initialize the scheduler, then create execution contexts with VRAM budgets
initialize(device_id=0)
llm_ctx = create_context("llm", max_vram=4 * GB)
tts_ctx = create_context("tts", max_vram=2 * GB)

async def run_parallel():
    async with context_session(llm_ctx), context_session(tts_ctx):
        # run_llm_inference / run_tts_synthesis are user-defined coroutines
        # that issue GPU work inside their respective contexts
        llm_task = asyncio.create_task(run_llm_inference())
        tts_task = asyncio.create_task(run_tts_synthesis())

        text, audio = await asyncio.gather(llm_task, tts_task)
        return text, audio

result = asyncio.run(run_parallel())

FP16/BF16 TensorCore (via CUTLASS)

Feature | Description
FP16 TensorCore | 63 TFLOPS (automatic via CUTLASS)
BF16 TensorCore | 63 TFLOPS (automatic via CUTLASS)
FP32 Accumulation | Numerical stability maintained

import pygpukit as gpk
import numpy as np

# FP16 TensorCore matmul (63 TFLOPS on RTX 3090 Ti)
# No environment variable needed - CUTLASS is automatic
a = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float16))
b = gpk.from_numpy(np.random.randn(8192, 8192).astype(np.float16))
c = a @ b  # Uses CUTLASS TensorCore

Note: CUTLASS requires matrix dimensions divisible by 16.
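
If your shapes are not 16-aligned, zero-padding on the host before upload is a simple workaround (a sketch in plain NumPy; zero padding does not change the cropped product):

import numpy as np
import pygpukit as gpk

def pad_to_16(a: np.ndarray) -> np.ndarray:
    """Zero-pad a 2-D array so both dimensions are multiples of 16."""
    return np.pad(a, ((0, (-a.shape[0]) % 16), (0, (-a.shape[1]) % 16)))

m, k, n = 1000, 500, 250  # not 16-aligned
a = pad_to_16(np.random.randn(m, k).astype(np.float16))
b = pad_to_16(np.random.randn(k, n).astype(np.float16))
c = (gpk.from_numpy(a) @ gpk.from_numpy(b)).to_numpy()[:m, :n]  # crop back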


What's New in v0.2.5

FP16 / BF16 Support

Feature | Description
FP16 (float16) | Half-precision floating point
BF16 (bfloat16) | Brain floating point (better dynamic range)
FP32 Accumulation | Numerical stability via FP32 intermediate
Type Conversion | astype() for seamless dtype conversion

import pygpukit as gpk
import numpy as np

# FP16 operations
a = gpk.from_numpy(np.random.randn(1024, 1024).astype(np.float16))
b = gpk.from_numpy(np.random.randn(1024, 1024).astype(np.float16))
c = a @ b  # FP16 matmul

# BF16 operations
arr = np.random.randn(1024, 1024).astype(np.float32)
a_bf16 = gpk.from_numpy(arr).astype(gpk.bfloat16)
b_bf16 = gpk.from_numpy(arr).astype(gpk.bfloat16)
c_bf16 = a_bf16 @ b_bf16  # BF16 matmul
result = c_bf16.astype(gpk.float32)  # Convert back to FP32

Reduction Operations

Operation | Description
gpk.sum(a) | Sum of all elements
gpk.mean(a) | Mean of all elements
gpk.max(a) | Maximum element
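
Usage is NumPy-like (a minimal sketch; see the API Reference for exact return types):

import pygpukit as gpk
import numpy as np

a = gpk.from_numpy(np.random.randn(1024, 1024).astype(np.float32))
total = gpk.sum(a)   # sum of all elements
avg = gpk.mean(a)    # mean of all elements
peak = gpk.max(a)    # maximum element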

Operator Overloads

c = a + b   # Element-wise add
c = a - b   # Element-wise subtract
c = a * b   # Element-wise multiply
c = a / b   # Element-wise divide
c = a @ b   # Matrix multiplication

What's New in v0.2.4

Single-Binary Distribution

Feature | Description
Driver-only mode | Only nvcuda.dll (GPU driver) required
Dynamic NVRTC | JIT loaded at runtime, optional
No cudart dependency | Eliminated CUDA Runtime dependency
Smaller wheel | No bundled DLLs

import pygpukit as gp

# Works with just GPU drivers!
print(f"CUDA: {gp.is_cuda_available()}")      # True (if GPU driver installed)
print(f"NVRTC: {gp.is_nvrtc_available()}")    # True (if CUDA Toolkit installed)
print(f"NVRTC Path: {gp.get_nvrtc_path()}")   # Path to NVRTC DLL (if available)

TF32 TensorCore GEMM

Feature | Description
PTX mma.sync | Direct TensorCore access via inline PTX assembly
cp.async Pipeline | Double-buffered async memory transfers
TF32 Precision | 19-bit mantissa (vs FP32's 23-bit), ~0.1% per-op error
SM 80+ Required | Ampere architecture (RTX 30XX+) required

Performance

Benchmark Comparison (RTX 3090 Ti, 8192×8192)

Library | FP32 | TF32 | FP16 | BF16 | Requirements
NumPy (OpenBLAS) | ~0.8 TFLOPS | - | - | - | CPU only
cuBLAS | ~21 TFLOPS | ~59 TFLOPS | ~75 TFLOPS | ~83 TFLOPS | CUDA Toolkit
PyGPUkit (CUTLASS) | 18 TFLOPS | 31 TFLOPS | 63 TFLOPS | 63 TFLOPS | GPU drivers only

Built-in matmul kernels are pre-compiled. Driver-Only and Full (JIT) modes have identical matmul performance. JIT is only needed for custom kernels.

PyGPUkit Performance by Matrix Size

Matrix Size | FP32 (NO_TF32) | TF32 (CUTLASS) | FP16 (CUTLASS) | BF16 (CUTLASS)
2048×2048 | 9.6 TFLOPS | 13 TFLOPS | 15 TFLOPS | 21 TFLOPS
4096×4096 | 14.7 TFLOPS | 22 TFLOPS | 44 TFLOPS | 44 TFLOPS
8192×8192 | 18 TFLOPS | 31 TFLOPS | 63 TFLOPS | 63 TFLOPS

Note: CUTLASS is automatic for compatible sizes (16-aligned). Use PYGPUKIT_NO_TF32=1 for full FP32 precision.


Installation

pip install pygpukit

From source:

git clone https://github.com/m96-chan/PyGPUkit
cd PyGPUkit
pip install -e .

Requirements

  • Python 3.10+
  • NVIDIA GPU with drivers installed
  • Optional: CUDA Toolkit (for JIT compilation of custom kernels)

Note: NVRTC (NVIDIA Runtime Compiler) is included in CUDA Toolkit. Pre-compiled GPU operations (matmul, add, mul, etc.) work with just GPU drivers.

Supported GPUs

Generation | Architecture | Examples | Status
Ampere | SM80-86 | A100, RTX 3090, RTX 3080 | Fully supported
Ada Lovelace | SM89 | RTX 4090, RTX 4080 | Fully supported
Hopper | SM90 | H100, H200 | Fully supported
Blackwell | SM100-120 | B100, B200 | Supported (CUDA 13.x)
Turing/Older | SM < 80 | RTX 20XX, GTX 10XX | NOT supported

Runtime Modes

Mode | Requirements | Features
Full JIT | GPU drivers + CUDA Toolkit | All features including custom kernels
Pre-compiled | GPU drivers only | Built-in ops (matmul, add, mul)
CPU simulation | None | Testing/development without GPU
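
The active mode can be determined at runtime with the availability helpers shown in the v0.2.4 section (a small sketch):

import pygpukit as gp

if gp.is_nvrtc_available():
    mode = "Full JIT"
elif gp.is_cuda_available():
    mode = "Pre-compiled (driver-only)"
else:
    mode = "CPU simulation"
print(f"Runtime mode: {mode}")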

Quick Start

Basic Operations

import pygpukit as gp

# Allocate arrays
x = gp.zeros((1024, 1024), dtype="float32")
y = gp.ones((1024, 1024), dtype="float32")

# Operations
z = gp.add(x, y)
w = gp.matmul(x, y)

# CPU <-> GPU transfer
arr = z.to_numpy()
garr = gp.from_numpy(arr)

Custom JIT Kernel (requires CUDA Toolkit)

src = '''
extern "C" __global__
void scale(float* x, float factor, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) x[idx] *= factor;
}
'''

if gp.is_nvrtc_available():
    kernel = gp.jit(src, func="scale")
    kernel(x, factor=0.5, n=x.size)
else:
    print("JIT not available. Using pre-compiled ops.")

Rust Scheduler

import _pygpukit_rust as rust

# Memory Pool with LRU eviction
pool = rust.MemoryPool(quota=100 * 1024 * 1024, enable_eviction=True)
block = pool.allocate(4096)

# QoS-aware task scheduling
evaluator = rust.QosPolicyEvaluator(total_memory=8*1024**3, total_bandwidth=1.0)
task = rust.QosTaskMeta.guaranteed("task-1", "Critical Task", 256*1024*1024)
result = evaluator.evaluate(task)

# GPU Partitioning
manager = rust.PartitionManager(rust.PartitionConfig(total_memory=8*1024**3))
manager.create_partition("inference", "Inference",
    rust.PartitionLimits().memory(4*1024**3).compute(0.5))

Features

Core Infrastructure (Rust)

Feature | Description
Memory Pool | LRU eviction, size-class free lists
Scheduler | Priority queue, memory reservation
Transfer Engine | Separate H2D/D2H streams, priority
Kernel Dispatch | Per-stream limits, lifecycle tracking

Advanced Scheduler

Feature | Description
Admission Control | Deterministic admission, quota enforcement
QoS Policy | Guaranteed/Burstable/BestEffort tiers
Kernel Pacing | Bandwidth-based throttling per stream
GPU Partitioning | Resource isolation, multi-tenant support
Multi-LLM Execution | Concurrent AI model execution with stream isolation
asyncio Integration | Native Python async/await for concurrent inference

Project Goals

  1. Provide the smallest usable GPU runtime for Python
  2. Expose GPU scheduling (bandwidth, memory, partitioning)
  3. Make writing custom GPU kernels easy
  4. Serve as a building block for inference engines, DSP systems, and real-time workloads

Project Structure

PyGPUkit/
  src/pygpukit/    # Python API (NumPy-compatible)
  native/          # C++ backend (CUDA Driver API, NVRTC)
  rust/            # Rust backend (memory pool, scheduler)
    pygpukit-core/   # Pure Rust core logic
    pygpukit-python/ # PyO3 bindings
  examples/        # Demo scripts
  tests/           # Test suite

Roadmap

Released

Version | Highlights
v0.1 | GPUArray, NVRTC JIT, add/mul/matmul, wheels
v0.2.0 | Rust scheduler (QoS, partitioning), memory pool (LRU), 106 tests
v0.2.1 | API stabilization, error propagation
v0.2.2 | Ampere SGEMM (cp.async, float4), 18 TFLOPS FP32
v0.2.3 | TF32 TensorCore (PTX mma.sync), 28 TFLOPS
v0.2.4 | Single-binary distribution, dynamic NVRTC, driver-only mode
v0.2.5 | FP16/BF16 support, reduction ops, operator overloads, TF32 v2 (~30 TFLOPS)
v0.2.6 | CUTLASS backend (31 TFLOPS TF32, 63 TFLOPS FP16/BF16), Multi-LLM concurrent execution
v0.2.7 | Epilogue fusion (linear+bias+gelu), Multi-SM kernels, API review

Planned

Version | Goals
v0.3 | Triton backend, advanced ops (softmax), MPS/MIG

API Stability & Backward Compatibility

Version Policy

  • v0.2.x: Backward compatible within minor versions. New features may be added, but existing APIs remain stable.
  • v0.3+: May introduce breaking changes, with deprecation warnings emitted in the prior minor version.

Stable Public API (v0.2.x)

All functions exported via pygpukit.* are part of the stable public API:

Category | Functions
Factory | zeros, ones, empty, from_numpy
Elementwise | add, sub, mul, div
Math | exp, log, relu, gelu
Matrix | matmul, transpose
Reductions | sum, mean, max
Neural | layernorm, bias_add_inplace, linear_bias_gelu
Types | GPUArray, DataType, float32, float64, float16, bfloat16
LLM | llm.SafeTensorsFile, llm.Tokenizer, llm.GPT2Model, llm.Linear

Deprecation Policy

APIs to be removed will emit DeprecationWarning for at least one minor version before removal.
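
To surface deprecations early, e.g. in a test suite, the standard warnings machinery works (plain Python, nothing PyGPUkit-specific):

import warnings

with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)  # turn deprecations into errors
    import pygpukit as gpk  # noqa: F401
    # ... exercise the APIs you depend on ...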


Contributing

Contributions and discussions are welcome! Please open Issues for feature requests, bugs, or design proposals.


License

MIT License


Acknowledgements

Inspired by: CUDA Runtime, NVRTC, PyCUDA, CuPy, Triton

PyGPUkit aims to fill the gap for a tiny, embeddable GPU runtime for Python.

Download files

Download the file for your platform.

Source Distribution

pygpukit-0.2.8.tar.gz (33.9 MB, source distribution)

Built Distributions


pygpukit-0.2.8-cp312-cp312-win_amd64.whl (2.7 MB): CPython 3.12, Windows x86-64

pygpukit-0.2.8-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl (3.0 MB): CPython 3.12, manylinux glibc 2.34+/2.35+ x86-64

File details

Details for the file pygpukit-0.2.8.tar.gz.

File metadata

  • Download URL: pygpukit-0.2.8.tar.gz
  • Size: 33.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pygpukit-0.2.8.tar.gz
Algorithm | Hash digest
SHA256 | 95df04f97be72bcdc085fa1aa070373ada75b58e4dcb7b571ece0dfb8bdd8650
MD5 | fa6638b1e0e9ca1296daf787b58b2ba1
BLAKE2b-256 | 959c48613444697361d2a84741ab91107765604149c87d8fd717ac0d4d5f0d97


Provenance

The following attestation bundles were made for pygpukit-0.2.8.tar.gz:

Publisher: release.yml on m96-chan/PyGPUkit


File details

Details for the file pygpukit-0.2.8-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: pygpukit-0.2.8-cp312-cp312-win_amd64.whl
  • Size: 2.7 MB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pygpukit-0.2.8-cp312-cp312-win_amd64.whl
Algorithm | Hash digest
SHA256 | 712642cb5242d50589bab07530ecd2893e9394523ec34c51e21ee1919fb7f88f
MD5 | 5fc17915b4507c6c8215faf630fd5a41
BLAKE2b-256 | cb9ab83057600a0d30f0646413f9d8b160358056637d7930648a595794c4bbb9


Provenance

The following attestation bundles were made for pygpukit-0.2.8-cp312-cp312-win_amd64.whl:

Publisher: release.yml on m96-chan/PyGPUkit


File details

Details for the file pygpukit-0.2.8-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl.

File hashes

Hashes for pygpukit-0.2.8-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl
Algorithm | Hash digest
SHA256 | de8d1a1f071be08ea49bd0632a6f00baefe13b85353accaafe833cfc92b7d67c
MD5 | af94645f10b74cd60f0794e0f4d2a20a
BLAKE2b-256 | 196978ca03b9708d447ca0a9092e7ffd3f6a7ec691e96d0e19f609363a2f4881


Provenance

The following attestation bundles were made for pygpukit-0.2.8-cp312-cp312-manylinux_2_34_x86_64.manylinux_2_35_x86_64.whl:

Publisher: release.yml on m96-chan/PyGPUkit

