Quantized MatMul in CUDA with a PyTorch interface
Project description
Quantized matmul in CUDA, with a PyTorch interface. The original code comes from FasterTransformer / TensorRT-LLM (https://github.com/NVIDIA/TensorRT-LLM/tree/main/cpp/tensorrt_llm/kernels) and has been adapted to support a different quantization scheme.
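The page does not document the package's Python API or the exact quantization scheme it targets. As a hedged illustration of the general technique such kernels implement, here is a minimal NumPy sketch of weight-only, symmetric per-channel int8 quantized matmul; the function names and the scheme are assumptions for illustration, not this package's API:

```python
import numpy as np

def quantize_per_channel(w, bits=8):
    """Symmetric per-output-channel quantization (assumed scheme, for illustration).

    w: (in_features, out_features) float32 weight matrix.
    Returns int8 codes and one float32 scale per output column.
    """
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    scale = np.abs(w).max(axis=0) / qmax            # per-column scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale.astype(np.float32)

def quant_matmul_ref(x, q, scale):
    """Reference quantized matmul: multiply against int8 codes, rescale output.

    A CUDA kernel would accumulate int8 products in int32 registers; here we
    emulate the arithmetic in float32 for clarity.
    """
    acc = x @ q.astype(np.float32)
    return acc * scale                              # dequantize per output channel

# Example: compare against the full-precision matmul.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)
w = rng.standard_normal((64, 16)).astype(np.float32)
q, scale = quantize_per_channel(w)
out = quant_matmul_ref(x, q, scale)
err = np.abs(out - x @ w).max()                     # quantization error, should be small
```

The real kernels fuse the dequantization into the GEMM epilogue rather than materializing float weights, which is what makes weight-only quantization fast on GPU.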
Download files
Source Distribution
quant_matmul-1.1.0.post1.tar.gz (11.5 kB)
File details
Details for the file quant_matmul-1.1.0.post1.tar.gz.
File metadata
- Download URL: quant_matmul-1.1.0.post1.tar.gz
- Upload date:
- Size: 11.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1161767b41e0c3928914fcc3d53eec69e9c7e8b2cb472779cccf1a31cd44c230
MD5 | cfa9c81e233f69dfc8e49c7f0b667b28
BLAKE2b-256 | a4d6147c3590c1a17669b3182dcc438f40c17637083ffb462b05e5fe400ff44b