Quantized MatMul in CUDA with a PyTorch interface
Project description
Original code from NVIDIA's FasterTransformer / TensorRT-LLM kernels: https://github.com/NVIDIA/TensorRT-LLM/tree/main/cpp/tensorrt_llm/kernels
Adapted here to support a different quantization scheme.
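The package's Python API is not documented above, so none of its functions are shown here. As an illustration of what a weight-only quantized matmul computes, here is a NumPy reference sketch assuming symmetric per-row int8 quantization of the weights; the actual scheme this package implements may differ, and `quantize_rows` / `dequant_matmul` are hypothetical names, not part of the library:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_rows(w):
    # Hypothetical helper: symmetric int8 quantization with one scale per
    # output row, so that w ≈ q * scale (broadcast over columns).
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequant_matmul(x, q, scale):
    # Reference semantics of a weight-only quantized matmul:
    # dequantize the int8 weights, then do an ordinary float matmul.
    # A CUDA kernel would fuse the dequantization into the GEMM instead.
    return x @ (q.astype(np.float32) * scale).T

w = rng.standard_normal((64, 32)).astype(np.float32)  # weights (out, in)
x = rng.standard_normal((4, 32)).astype(np.float32)   # activations (batch, in)

q, s = quantize_rows(w)
y_ref = x @ w.T                  # full-precision result
y_q = dequant_matmul(x, q, s)    # quantized result, close to y_ref
print(np.max(np.abs(y_ref - y_q)))
```

The point of a fused CUDA kernel is to keep the weights in int8 in memory (halving bandwidth versus fp16) and apply the scales inside the GEMM, rather than materializing the dequantized matrix as this sketch does.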
Source distribution: quant_matmul-1.2.0.tar.gz (11.6 kB)