sgl-kernel
Kernel Library for LLM inference engines
sgl-kernel provides optimized compute primitives for LLM inference engines, enabling efficient inference for large language models and vision-language models through custom kernel operations. It is used by SGLang, LightLLM, and other inference engines.
Installation
Requires torch == 2.9.1
# Latest version
pip3 install sgl-kernel --upgrade
Building from Source
Requires:
- CMake ≥ 3.31
- Python ≥ 3.10
- scikit-build-core
- ninja (optional)
Use the Makefile to build sgl-kernel:
make build
Limit build resource usage (CPU / parallelism)
By default, make build uses all available CPU cores. You can override build parallelism and NVCC compile threads:
# Limit parallel jobs (controls both make and cmake parallelism)
make build MAX_JOBS=2
# Additionally limit NVCC internal threads (reduces CPU and peak memory)
make build MAX_JOBS=2 CMAKE_ARGS="-DSGL_KERNEL_COMPILE_THREADS=1"
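A reasonable starting value for MAX_JOBS can be derived from the machine's resources. The sketch below is illustrative only (the `suggest_max_jobs` helper and the 4 GB per-job memory budget are assumptions, not part of the build system); it caps parallelism by both CPU count and physical RAM, since NVCC jobs for large CUDA kernels can each use several gigabytes:

```python
import os

def suggest_max_jobs(mem_per_job_gb=4):
    """Suggest a MAX_JOBS value bounded by CPU count and physical memory.

    mem_per_job_gb is a rough per-compile-job memory budget; tune it for
    your toolchain.
    """
    cpus = os.cpu_count() or 1
    try:
        # Total physical memory in GB (POSIX-only sysconf keys).
        total_gb = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE") / 2**30
    except (ValueError, OSError, AttributeError):
        # Fall back to CPU count if memory size is unavailable.
        return cpus
    return max(1, min(cpus, int(total_gb // mem_per_job_gb)))

print(f"make build MAX_JOBS={suggest_max_jobs()}")
```

The result can be passed directly as `make build MAX_JOBS=<n>`.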
Contribution
Steps to add a new kernel:
- Implement the kernel in csrc
- Expose the interface in include/sgl_kernel_ops.h
- Create torch extension in csrc/common_extension.cc
- Update CMakeLists.txt to include new CUDA source
- Expose Python interface in python
- Add test and benchmark
Development Tips
- When creating torch extensions, add the function definition with m.def, and the device binding with m.impl:
- How to write schema: Schema reference

// We need def with schema here for torch.compile
m.def(
    "bmm_fp8(Tensor A, Tensor B, Tensor! D, Tensor A_scale, Tensor B_scale, Tensor workspace_buffer, "
    "int cublas_handle) -> ()");
m.impl("bmm_fp8", torch::kCUDA, &bmm_fp8);
Adapting C++ Native Types for Torch Compatibility
Third-party C++ libraries often use int and float, but PyTorch bindings require int64_t and double due to Python's type mapping.
Use make_pytorch_shim from sgl_kernel_torch_shim.h to handle conversions automatically:
// Add type conversion for int -> int64_t
template <>
struct pytorch_library_compatible_type<int> {
using type = int64_t;
static int convert_from_type(int64_t arg) {
TORCH_CHECK(arg <= std::numeric_limits<int>::max(), "value too large");
TORCH_CHECK(arg >= std::numeric_limits<int>::min(), "value too small");
return arg;
}
};
// Wrap your function
m.impl("fwd", torch::kCUDA, make_pytorch_shim(&mha_fwd));
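The range check the C++ shim performs when narrowing a Python int (int64_t) to a C++ int can be illustrated in plain Python. This is a conceptual sketch of the same idea, not part of sgl-kernel's API:

```python
# Bounds of a signed 32-bit integer, mirroring std::numeric_limits<int>
# on typical platforms.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def to_int32(arg: int) -> int:
    """Reject values that would not survive an int64 -> int32 narrowing,
    analogous to the TORCH_CHECK calls in the C++ specialization above."""
    if not (INT32_MIN <= arg <= INT32_MAX):
        raise OverflowError(f"{arg} does not fit in a 32-bit int")
    return arg
```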
Testing & Benchmarking
- Add pytest tests in tests/. If you need to skip some tests, use @pytest.mark.skipif:

@pytest.mark.skipif(
    skip_condition, reason="Nvfp4 requires compute capability of 10 or above."
)
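One way to build such a skip_condition is to compare a device's (major, minor) compute capability tuple against a minimum. The helper below is a hypothetical sketch (the name `below_capability` is not part of the test suite):

```python
def below_capability(device_cc, minimum=(10, 0)):
    """Return True (i.e. skip the test) when device_cc is below minimum.

    Tuple comparison orders by major version first, then minor, which is
    exactly the ordering compute capabilities need.
    """
    return tuple(device_cc) < tuple(minimum)
```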
- Add benchmarks using the Triton benchmark utilities in benchmark/. We recommend triton.testing.do_bench_cudagraph for kernel benchmarking. Compared to triton.testing.do_bench, do_bench_cudagraph provides:
  - Reduced CPU overhead impact for more accurate kernel performance measurements
  - Incorporation of PDL (Programmatic Dependent Launch) effects into individual kernel results
  - More realistic performance data on PDL-supported architectures (SM >= 90)
- Run the test suite
Kernel Size Analysis
Analyze CUDA kernel sizes in compiled wheel files to identify oversized kernels and template-instantiation bloat:
This tool requires cubloaty:
# Install cubloaty
pip install cubloaty
# Analyze a wheel file
python analyze_whl_kernel_sizes.py path/to/sgl_kernel-*.whl
# Custom output file
python analyze_whl_kernel_sizes.py path/to/sgl_kernel-*.whl --output my_analysis.txt
The tool generates:
- A text report with:
- Kernel groups (by name prefix)
- Individual kernel sizes (sorted by size)
Use this to identify large kernels and potential template instantiation bloat.
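The prefix-grouping step can be sketched in a few lines of Python. This is an illustrative reimplementation under simplified assumptions (here a "group" is just the first two underscore-separated tokens of the kernel name; the real tool's grouping may differ):

```python
from collections import defaultdict

def group_kernel_sizes(kernels, prefix_len=2):
    """Group (kernel_name, size_bytes) pairs by a name prefix and
    return total size per group, largest first."""
    groups = defaultdict(int)
    for name, size in kernels:
        prefix = "_".join(name.split("_")[:prefix_len])
        groups[prefix] += size
    return sorted(groups.items(), key=lambda kv: kv[1], reverse=True)

# Example with made-up kernel names and sizes:
report = group_kernel_sizes([
    ("bmm_fp8_kernel_a", 100),
    ("bmm_fp8_kernel_b", 50),
    ("moe_gemm_x", 120),
])
# → [("bmm_fp8", 150), ("moe_gemm", 120)]
```

Groups whose total dwarfs their largest member are a hint of template-instantiation bloat: many near-identical instantiations of one kernel.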
FAQ
- Q: Segmentation fault with CUDA 12.6
- A: Update ptxas to 12.8 (reference: segment fault error)
File details
Details for the file sgl_kernel-0.3.21-cp310-abi3-manylinux2014_x86_64.whl.
File metadata
- Download URL: sgl_kernel-0.3.21-cp310-abi3-manylinux2014_x86_64.whl
- Upload date:
- Size: 535.6 MB
- Tags: CPython 3.10+
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.18
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 57dfb3a2a3cd759f499c32e2bad5f6489b7c58f7f9a84ee00c53ec92d303aaab |
| MD5 | 34c4c4bc34a343ec2cc304cc4a942710 |
| BLAKE2b-256 | 369ff836e126002c7cfcfe35418f6cff5a63fe3f529c609b334ca4775354b4d5 |
File details
Details for the file sgl_kernel-0.3.21-cp310-abi3-manylinux2014_aarch64.whl.
File metadata
- Download URL: sgl_kernel-0.3.21-cp310-abi3-manylinux2014_aarch64.whl
- Upload date:
- Size: 626.6 MB
- Tags: CPython 3.10+
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.18
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | bafdcc26e9ce1e9102b99e4186d652fefbafe5c22ea2cbb5ffe07b331e83be1f |
| MD5 | 887c10b43f1be78574b18a5d92c1f336 |
| BLAKE2b-256 | eb2bf1aeca98bc856c14d870f1dcf38bca35cf84ffe58874c67402b0f862ed18 |