A library for compressing large language models using the latest research in the field, covering both training-aware and post-training techniques. The library is built on top of PyTorch and Hugging Face Transformers and is designed to be flexible and easy to use, allowing for quick experimentation.
LLM Compressor
llmcompressor is an easy-to-use library for optimizing models for deployment with vllm, including:
- Comprehensive set of quantization algorithms for weight-only and activation quantization
- Seamless integration with Hugging Face models and repositories
- safetensors-based file format compatible with vllm
- Large model support via accelerate
✨ Read the announcement blog here! ✨
Supported Formats
- Activation Quantization: W8A8 (int8 and fp8)
- Mixed Precision: W4A16, W8A16
- 2:4 Semi-structured and Unstructured Sparsity
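The "8" in a scheme like W8A8 refers to symmetric int8 quantization of the values. A minimal plain-Python sketch of the idea, with hypothetical helper names (llmcompressor handles this internally):

```python
# Minimal sketch of symmetric int8 quantization, the "8" in W8A8.
# Helper names here are illustrative, not llmcompressor's API.

def quantize_int8(values):
    """Map floats to int8 codes in [-127, 127] with one symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.031, 0.8]
codes, scale = quantize_int8(weights)
restored = dequantize_int8(codes, scale)
# Each restored value is within half a quantization step of the original.
```

The real schemes differ in granularity (per-channel, per-token, per-group) and in whether scales are computed statically at calibration time or dynamically at inference time, but the core map-to-integers-and-rescale step is the same.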
Supported Algorithms
- Simple PTQ
- GPTQ
- SmoothQuant
- SparseGPT
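To give a sense of what one of these algorithms does: SmoothQuant migrates quantization difficulty from activations into weights by rescaling matched channel pairs, leaving the layer output unchanged. A toy sketch of that idea under simplified assumptions (nested lists instead of tensors; not llmcompressor's implementation):

```python
# Toy sketch of the SmoothQuant idea: divide each activation channel by a
# scale and multiply the matching weight row by the same scale, so the
# product x @ W is unchanged while activation outliers are flattened.

def matmul(a, b):
    """Multiply (tokens x channels) by (channels x outputs) nested lists."""
    return [
        [sum(x * b[k][j] for k, x in enumerate(row)) for j in range(len(b[0]))]
        for row in a
    ]

def smooth(activations, weights, alpha=0.5):
    """activations: tokens x channels; weights: channels x outputs."""
    n_ch = len(weights)
    act_max = [max(abs(row[c]) for row in activations) for c in range(n_ch)]
    w_max = [max(abs(w) for w in weights[c]) for c in range(n_ch)]
    # alpha balances how much difficulty moves from activations to weights.
    scales = [(a ** alpha) / (w ** (1 - alpha)) for a, w in zip(act_max, w_max)]
    new_acts = [[x / s for x, s in zip(row, scales)] for row in activations]
    new_w = [[w * scales[c] for w in weights[c]] for c in range(n_ch)]
    return new_acts, new_w

X = [[100.0, 0.5], [80.0, -0.25]]  # channel 0 carries large outliers
W = [[0.1, -0.2], [1.0, 0.5]]
Xs, Ws = smooth(X, W)
# matmul(Xs, Ws) matches matmul(X, W) up to floating-point error,
# but the smoothed channel-0 activations are far smaller than 100.
```

This mirrors the `smoothing_strength` knob on `SmoothQuantModifier` in the quick tour below: a higher value shifts more of the outlier range out of the activations.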
When to Use Which Optimization
Please refer to docs/schemes.md for detailed information about available optimization schemes and their use cases.
Installation
pip install llmcompressor
Get Started
End-to-End Examples
Applying quantization with llmcompressor:
- Activation quantization to int8
- Activation quantization to fp8
- Weight-only quantization to int4
- Quantizing MoE LLMs
- Quantizing Vision-Language Models
- Quantizing Audio-Language Models
User Guides
Deep dives into advanced usage of llmcompressor:
Quick Tour
Let's quantize TinyLlama with 8-bit weights and activations using the GPTQ and SmoothQuant algorithms.
Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe may be changed to target different quantization algorithms or formats.
Apply Quantization
Quantization is applied by selecting an algorithm and calling the oneshot API.
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot
# Select quantization algorithm. In this case, we:
# * apply SmoothQuant to make the activations easier to quantize
# * quantize the weights to int8 with GPTQ (static per channel)
# * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]
# Apply quantization using the built-in open_platypus dataset.
# * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
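The recipe comments distinguish two scale granularities: weights get one static scale per output channel, fixed at calibration time, while activations get one dynamic scale per token, recomputed at inference time. A plain-Python sketch of the difference (illustrative only, not llmcompressor internals):

```python
# Sketch of the two scale granularities named in the recipe comments:
# static per-channel scales for weights, dynamic per-token for activations.

def per_channel_scales(weight_rows):
    # Static: computed once from calibrated weights, one per output channel.
    return [max(abs(w) for w in row) / 127.0 for row in weight_rows]

def per_token_scales(activation_rows):
    # Dynamic: recomputed for every input batch, one per token.
    return [max(abs(x) for x in row) / 127.0 for row in activation_rows]

W = [[0.2, -1.27], [0.05, 0.1]]   # two output channels
X = [[3.0, -6.35], [0.4, 0.2]]    # two tokens
w_scales = per_channel_scales(W)  # fixed after calibration
a_scales = per_token_scales(X)    # varies with the input
```

Dynamic per-token scales adapt to each input's range at a small runtime cost, which is why they are a common choice for activations, whose magnitudes vary across tokens.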
Inference with vLLM
The checkpoints created by llmcompressor can be loaded and run in vllm:
Install:
pip install vllm
Run:
from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
Questions / Contribution
- If you have any questions or requests, open an issue and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.
File details
Details for the file llmcompressor_nightly-0.5.0.20250410.tar.gz.
File metadata
- Download URL: llmcompressor_nightly-0.5.0.20250410.tar.gz
- Upload date:
- Size: 183.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e9c5870357b8d074b5d48bc284335fade650d3a225fc1e295fd57e9ec8f96b56 |
| MD5 | 0a53eb2a9d3e5a191662d9d76db2a6e3 |
| BLAKE2b-256 | c059806cc7511b5bce885c6739dd061d2cb19ad805bca917a68545cee86388b9 |
File details
Details for the file llmcompressor_nightly-0.5.0.20250410-py3-none-any.whl.
File metadata
- Download URL: llmcompressor_nightly-0.5.0.20250410-py3-none-any.whl
- Upload date:
- Size: 247.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c2d8b213623ee53758af51d96a195f79947e8aa51ae65796cfd544123e248c45 |
| MD5 | aa4a7f8d673515cae444049d22a1f1ee |
| BLAKE2b-256 | 153b810e09a0555c6d3fba5aff72c00d9af9ccc45f57d17955e2559ce747bac0 |