A library for compressing large language models using the latest techniques and research in the field, covering both training-aware and post-training methods. The library is designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation.
Project description
LLM Compressor
`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:
- Comprehensive set of quantization algorithms for weight-only and activation quantization
- Seamless integration with Hugging Face models and repositories
- `safetensors`-based file format compatible with `vllm`
- Large model support via `accelerate` (see the sketch after this list)
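For models too large for a single GPU, a common pattern is to load the checkpoint with Hugging Face Transformers and let `accelerate` shard it across the available devices before compressing it. The sketch below assumes `oneshot` also accepts an already-instantiated model object (not just a model ID string); the model ID and output directory are examples only.

```python
from transformers import AutoModelForCausalLM

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Example model ID; swap in any HF-compatible checkpoint.
MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# device_map="auto" lets accelerate place layers across the available
# GPUs (and CPU, if needed) so very large models can still be loaded.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)

# Assumption: an instantiated model can be passed to oneshot in place
# of a model ID string.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=[GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"])],
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    num_calibration_samples=512,
)
```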
✨ Read the announcement blog here! ✨
Supported Formats
- Activation Quantization: W8A8 (int8 and fp8)
- Mixed Precision: W4A16, W8A16
- 2:4 Semi-structured and Unstructured Sparsity
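Each of the formats above is selected through the quantization scheme on a modifier recipe. A minimal sketch, assuming the scheme strings map directly onto the preset names listed above (see the Quick Tour below for the full workflow):

```python
from llmcompressor.modifiers.quantization import GPTQModifier

# Assumption: the scheme string selects the target format directly.
w4a16_recipe = [GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])]  # weight-only int4
w8a16_recipe = [GPTQModifier(scheme="W8A16", targets="Linear", ignore=["lm_head"])]  # weight-only int8
w8a8_recipe = [GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"])]    # int8 weights + activations
```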
Supported Algorithms
- Simple PTQ
- GPTQ
- SmoothQuant
- SparseGPT
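The sparsity path follows the same recipe pattern as quantization. Below is a minimal sketch of 2:4 semi-structured pruning with SparseGPT; the `llmcompressor.modifiers.obcq` import path and the exact modifier arguments are assumptions based on the recipe API shown in the Quick Tour, so check the SparseGPT example in the repository for the canonical version.

```python
# Assumption: SparseGPTModifier lives under llmcompressor.modifiers.obcq
# and accepts sparsity / mask_structure arguments.
from llmcompressor.modifiers.obcq import SparseGPTModifier
from llmcompressor.transformers import oneshot

recipe = [
    SparseGPTModifier(
        sparsity=0.5,          # prune half of the weights ...
        mask_structure="2:4",  # ... in a 2:4 semi-structured pattern
        targets="Linear",
        ignore=["lm_head"],
    ),
]

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-2of4",
    num_calibration_samples=512,
)
```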
Installation
```
pip install llmcompressor
```
Get Started
End-to-End Examples
Applying quantization with `llmcompressor`:
- Activation quantization to `int8`
- Activation quantization to `fp8` (see the fp8 sketch below)
- Weight only quantization to `int4`
- Quantizing MoE LLMs
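As a taste of the fp8 path listed above: a dynamic fp8 scheme typically needs no calibration data. A minimal sketch, assuming a `QuantizationModifier` with an `FP8_DYNAMIC` preset scheme is available alongside the modifiers shown in the Quick Tour; see the fp8 example in the repository for the exact recipe.

```python
# Assumption: QuantizationModifier and the FP8_DYNAMIC scheme name exist.
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)

# Dynamic fp8 activation scales are computed at runtime, so no
# calibration dataset is required here.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic",
)
```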
User Guides
Deep dives into advanced usage of `llmcompressor`:
Quick Tour
Let's quantize TinyLlama with 8-bit weights and activations using the GPTQ and SmoothQuant algorithms.
Note that the model can be swapped for a local or remote HF-compatible checkpoint, and the recipe may be changed to target different quantization algorithms or formats.
Apply Quantization
Quantization is applied by selecting an algorithm and calling the `oneshot` API.
```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
Inference with vLLM
The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:
Install:
```
pip install vllm
```
Run:
```python
from vllm import LLM

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
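For more control over decoding, vLLM's `SamplingParams` can be passed to `generate`; the temperature and token budget below are arbitrary example values:

```python
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")

# Example sampling settings; adjust to taste.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = model.generate(["My name is"], params)
print(outputs[0].outputs[0].text)
```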
Questions / Contribution
- If you have any questions or requests, open an issue and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file `llmcompressor-nightly-0.2.0.20240926.tar.gz`.
File metadata
- Download URL: llmcompressor-nightly-0.2.0.20240926.tar.gz
- Upload date:
- Size: 161.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8db9a4f263d4ef6b60aaca3bf295cfd7efb8d6fed6404551fd7a320db9fbf98e
MD5 | fde7fbef7d8cdec1bccd6f2cde102896
BLAKE2b-256 | 3d61a12680e87df02ae172d52d25d3e4158a79025dffd4a498be7ff389e8a3b8
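To check a downloaded file against the hashes above, the standard library is enough. The filename below is the source distribution listed in this section; adjust the path to wherever the file was saved.

```python
import hashlib

# Filename from the "File details" section above.
path = "llmcompressor-nightly-0.2.0.20240926.tar.gz"
expected_sha256 = "8db9a4f263d4ef6b60aaca3bf295cfd7efb8d6fed6404551fd7a320db9fbf98e"

# Hash the downloaded file and compare against the published digest.
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected_sha256, "hash mismatch -- do not install this file"
print("sha256 OK")
```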
File details
Details for the file `llmcompressor_nightly-0.2.0.20240926-py3-none-any.whl`.
File metadata
- Download URL: llmcompressor_nightly-0.2.0.20240926-py3-none-any.whl
- Upload date:
- Size: 211.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | d3bb1acdbfcca50ad24f3ba39ad4e3fc1fb0fb3ea2aabe45a6ebf722aa1c91a3
MD5 | 5c795ccf1b58d709eeb565a002cf1166
BLAKE2b-256 | 961afa76e428577250c2d1f31ddb63cfd284d160425ee7a0be61a8b0a3019668