Unapologetically SM120-only CuTe DSL kernels for NVFP4 GEMM and MoE.
Project description
b12x
b12x is an SM120-only CuTe DSL kernel library for Blackwell NVFP4 dense GEMM,
routed Mixture-of-Experts, and paged attention inference.
It is intentionally narrow. This is not a generic CUDA kernel collection or a
full model-serving stack, and it does not target any other GPU architecture,
including SM100. It is a focused package: a small number of hand-tuned, high-performance
SM120 kernels plus the runtime glue needed to launch them cleanly from PyTorch and sglang.
Installation
Runtime install
python -m pip install b12x
Development install from source
git clone <repo-url>
cd b12x
python -m pip install -e '.[dev]'
Requirements
- Blackwell SM120 GPU
- CUDA 13 toolchain
- Python `>=3.10,<4.0`
- CUDA 13 PyTorch, `torch>=2.10.0`
- `nvidia-cutlass-dsl[cu13]==4.4.1`
- FlashInfer, if you want reference and benchmark comparisons; it is not a runtime dependency
- Qwen3.5-397B A17B NVFP4 checkpoint, available through `B12X_MODEL_PATH`, for the end-to-end MoE benchmark
Package layout
- `b12x.attention` - SM120 paged attention forward kernel (TMA-based KV, FP8 KV cache, split-KV)
- `b12x.cute` - Low-level CuTe and FP4 helpers
- `b12x.gemm` - Standalone dense NVFP4 GEMM
- `b12x.integration` - Public runtime entrypoints: `b12x_moe_fp4`, `b12x_paged_attention_forward`, `create_paged_attention_plan`
- `b12x.moe.fused` - Static, micro, and dynamic fused MoE kernels and reference paths
- `b12x.quant` - Torch-side NVFP4 packing and quantization helpers
- `b12x.sglang` - Thin sglang integration shims
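For orientation, a minimal import sketch of the public runtime surface. The three names are the `b12x.integration` entrypoints listed above; nothing beyond the imports themselves is assumed.

```python
# The names below are the public runtime entrypoints listed for b12x.integration;
# this sketch only imports them and makes no calls.
from b12x.integration import (
    b12x_moe_fp4,                  # fused NVFP4 MoE entrypoint
    b12x_paged_attention_forward,  # paged attention forward
    create_paged_attention_plan,   # exact-shape attention launch plan
)
```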
Attention runtime contract (experimental)
The attention kernel is a work in progress. It is functional and passes correctness tests but has not been tuned to the same degree as the MoE kernels.
- `b12x.integration.attention.create_paged_attention_plan` builds an exact-shape launch plan for one paged attention configuration.
- `allocate_paged_attention_workspace_for_plan` allocates reusable scratch buffers for a plan.
- `allocate_paged_attention_workspace_pool` provides a caller-owned pool that partitions scratch by CUDA stream for variable-shape workloads.
- `b12x_paged_attention_forward` executes the kernel given a plan and workspace.
- Page size is fixed at 64.
- Supported KV dtypes: BF16, FP16, FP8 E4M3.
- FP8 KV uses TMA loads with in-kernel BF16 dequant. Per-head descale tensors are optional.
- Split-KV is automatic outside CUDA graph capture; inside capture, `num_splits` must be explicit.
- GQA with arbitrary ratios is supported. When `tile_m % qhead_per_kvhead != 0`, Q loads fall back from TMA to per-thread async copies automatically.
- During CUDA graph capture, `output=` must be caller-owned and stable across replays. (See the sketch below for the plan/workspace/forward call sequence.)
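A minimal call-sequence sketch of this contract. The function names come from the contract above; the module split for the workspace helper, the plan's keyword arguments, and the forward call's tensor arguments are illustrative assumptions, not the library's actual signature.

```python
import torch
from b12x.integration.attention import (
    create_paged_attention_plan,
    allocate_paged_attention_workspace_for_plan,
)
from b12x.integration import b12x_paged_attention_forward

# Illustrative placeholders: real shapes and KV-cache layout come from the
# serving stack, and the keyword names below are assumed, not documented.
q = torch.randn(8, 32, 128, dtype=torch.bfloat16, device="cuda")  # [tokens, q_heads, head_dim]
out = torch.empty_like(q)  # caller-owned; must stay stable across CUDA-graph replays

# Build one exact-shape plan and allocate its scratch once, then reuse both
# for every forward call with this configuration.
plan = create_paged_attention_plan(
    num_q_heads=32,
    num_kv_heads=8,                # arbitrary GQA ratios are supported
    head_dim=128,
    kv_dtype=torch.float8_e4m3fn,  # BF16, FP16, or FP8 E4M3; page size is fixed at 64
)
workspace = allocate_paged_attention_workspace_for_plan(plan)

b12x_paged_attention_forward(
    q,
    # ...paged KV cache, page table, and sequence-length tensors go here...
    plan=plan,
    workspace=workspace,
    output=out,
)
```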
MoE runtime contract
- `b12x.integration.tp_moe.b12x_moe_fp4` requires a caller-owned workspace.
- b12x selects its fused MoE backend from shape alone:
  - compact routed workloads use the static or micro backend
  - all larger routed workloads use dynamic
- Use `allocate_tp_moe_workspace(...)` for one exact unchunked launch shape.
- Use `allocate_tp_moe_workspace_pool()` for variable-size or chunked workloads.
- Keep one workspace pool per process/device, and let the pool partition scratch by CUDA stream internally.
- During CUDA graph capture, `output=` must also be caller-owned and stable across replays. (See the sketch below.)
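A minimal usage sketch of the workspace contract above. `b12x_moe_fp4` and `allocate_tp_moe_workspace_pool` are taken from the contract; every keyword argument and tensor shape in the sketch is an illustrative assumption, and the packed NVFP4 expert weights are deliberately elided.

```python
import torch
from b12x.integration.tp_moe import b12x_moe_fp4, allocate_tp_moe_workspace_pool

# One caller-owned pool per process/device; it partitions scratch by CUDA
# stream internally and covers variable-size or chunked workloads.
workspace_pool = allocate_tp_moe_workspace_pool()

# Illustrative placeholders only: real packed NVFP4 expert weights and scales
# come from the checkpoint (see b12x.quant), and the keyword names below are
# assumptions, not the library's documented signature.
tokens, hidden_dim, top_k = 64, 4096, 8
hidden_states = torch.randn(tokens, hidden_dim, dtype=torch.bfloat16, device="cuda")
topk_ids = torch.zeros(tokens, top_k, dtype=torch.int32, device="cuda")
topk_weights = torch.full((tokens, top_k), 1.0 / top_k, dtype=torch.bfloat16, device="cuda")
out = torch.empty_like(hidden_states)  # caller-owned; stable across CUDA-graph replays

b12x_moe_fp4(
    hidden_states,
    topk_ids,
    topk_weights,
    # ...packed NVFP4 expert weights and scales go here...
    workspace=workspace_pool,
    output=out,  # backend (micro / static / dynamic) is chosen from shape alone
)
```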
Environment variables
| Variable | Default | Description |
|---|---|---|
| `B12X_ATTN=TURBO` | off | Enable MXFP8 PV accumulation for FP8 KV configs (higher throughput, slight accuracy trade-off). |
| `B12X_FAST_MATH` | 1 | Enable fast-math MoE paths. |
| `B12X_MODEL_PATH` | — | Path to Qwen3.5 NVFP4 checkpoint for end-to-end MoE benchmarks. |
| `B12X_STATIC_COMPACT_CUTOVER_PAIRS` | auto | Override static→dynamic cutover threshold (routed pairs). |
| `B12X_MICRO_CUTOVER_TOKENS` | auto | Override micro→static cutover threshold (tokens). |
| `B12X_DYNAMIC_ENABLE_MULTICTA` | 1 | Enable multi-CTA dynamic launches. |
| `B12X_DYNAMIC_CHUNK_MULTIPLIER` | 1 | Dynamic backend chunk size multiplier. |
| `B12X_{STATIC,MICRO,DYNAMIC}_MAX_ACTIVE_CLUSTERS` | auto | Override max active clusters per backend. |
| `B12X_{STATIC,MICRO,DYNAMIC}_REUSE_COMPILED` | 1 | Reuse compiled kernels across shapes within a backend. |
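A small sketch of pinning these knobs from Python. Whether b12x reads them at import time or at launch time is not stated above, so setting them as early as possible in the process (or on the launch command line, as in the benchmark commands below) is the conservative pattern; the values shown are just the documented defaults.

```python
import os

# Set knobs before anything b12x-related runs in this process.
os.environ.setdefault("B12X_FAST_MATH", "1")
os.environ.setdefault("B12X_DYNAMIC_ENABLE_MULTICTA", "1")
os.environ.setdefault("B12X_MODEL_PATH", "/path/to/Qwen3.5-397B-A17B-NVFP4")
```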
Benchmarks and tests
Benchmarks
- `benchmarks/benchmark_moe.py` - End-to-end Qwen3.5-397B TP=4 MoE benchmark
  - `microbatch` profile: `[1, 2, 4, 8]`
  - `sglang-single-request` batch profile: `[1, 23, 80]`
  - `chunked-prefill` batch profile: `[8192, 16384, 24576, 32768]`
- `benchmarks/benchmark_paged_attention.py` - Paged attention vs FlashInfer across decode and extend shapes
- `benchmarks/benchmark_dense_gemm.py` - Dense FP4 GEMM vs FlashInfer/cuDNN/CUTLASS
- `benchmarks/benchmark_mxfp8_pv.py` - MXFP8 PV microbenchmark (turbo mode throughput)
Tests
- `tests/test_paged_attention_workspace_api.py` - Paged attention workspace and plan correctness across shapes, dtypes, and split counts
- `tests/test_attention_cuda_graphs.py` - CUDA graph capture and replay for paged attention, including FP8 KV and small GQA ratios
- `tests/test_tp_moe_reference.py` - Independent oracle-backed MoE correctness test
- `tests/test_moe_equivalence.py` - Real-weight smoke and CUDA-graph replay routing-safety checks
- `tests/test_gemm_stack.py` - Dense GEMM exactness vs FlashInfer/cuDNN
Common commands
# Graph-first benchmark defaults with auto-dispatch
B12X_MODEL_PATH=/path/to/Qwen3.5-397B-A17B-NVFP4 python benchmarks/benchmark_moe.py
# Measure eager launches instead of CUDA graph replay
B12X_MODEL_PATH=/path/to/Qwen3.5-397B-A17B-NVFP4 python benchmarks/benchmark_moe.py --no-cuda-graph
# Include routing in the timed region
B12X_MODEL_PATH=/path/to/Qwen3.5-397B-A17B-NVFP4 python benchmarks/benchmark_moe.py --include-routing
# Use the recorded single-request sglang profile
B12X_MODEL_PATH=/path/to/Qwen3.5-397B-A17B-NVFP4 python benchmarks/benchmark_moe.py --batch-size-profile sglang-single-request
# Graph-first prefill-scale sweep aligned with chunked-prefill serving
B12X_MODEL_PATH=/path/to/Qwen3.5-397B-A17B-NVFP4 python benchmarks/benchmark_moe.py --batch-size-profile chunked-prefill
# Multi-layer CUDA-graph replay validation with real consecutive MoE layers
B12X_MODEL_PATH=/path/to/Qwen3.5-397B-A17B-NVFP4 python benchmarks/benchmark_moe.py --graph-mode multi-layer --reference none --validate none
# Paged attention benchmark vs FlashInfer
python benchmarks/benchmark_paged_attention.py
# Dense GEMM microbenchmark
python benchmarks/benchmark_dense_gemm.py
# Attention correctness
pytest tests/test_attention_cuda_graphs.py tests/test_paged_attention_workspace_api.py
# Oracle-backed MoE correctness
python tests/test_tp_moe_reference.py --impls b12x --scale-contract per-expert
# Real-weight CUDA-graph smoke
pytest tests/test_moe_equivalence.py
Project details
Download files
File details
Details for the file b12x-0.5.0.tar.gz.
File metadata
- Download URL: b12x-0.5.0.tar.gz
- Upload date:
- Size: 128.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `22fd39489dc32819b941a47f4c83b40759a89b91cb8cc7a6f731418ddfff655e` |
| MD5 | `4b23c4658e60b06a39f4fce0b2923a4a` |
| BLAKE2b-256 | `f2c500af03e7dc6eb28dcc0341cfdbdc90383d8addb723703b0eb2c0a8e8a345` |
File details
Details for the file b12x-0.5.0-py3-none-any.whl.
File metadata
- Download URL: b12x-0.5.0-py3-none-any.whl
- Upload date:
- Size: 135.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `d1d1ce7fde97fddaabc3168c0646d8bfc901bf9ffe3333a7b8ba4e19ad943eb0` |
| MD5 | `21fa918c1fd2007e0ab87ba0d24ef353` |
| BLAKE2b-256 | `d35b0833bda4c3df8a688ad199af3f9211a8b06282549c7196c620b02b50321c` |