A library for compressing large language models using the latest techniques and research in the field, covering both training-aware and post-training approaches. The library is designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation.

Project description

LLM Compressor

llmcompressor is an easy-to-use library for optimizing models for deployment with vllm, including:

  • Comprehensive set of quantization algorithms for weight-only and activation quantization
  • Seamless integration with Hugging Face models and repositories
  • safetensors-based file format compatible with vllm
  • Large model support via accelerate
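
On the last point: a minimal sketch of the accelerate-backed path for checkpoints too large for one GPU (the 70B model ID below is a placeholder; per the upstream examples, oneshot also accepts an already-loaded model in place of a model ID):

from transformers import AutoModelForCausalLM

# device_map="auto" lets accelerate shard the weights across all visible
# GPUs (offloading to CPU if needed), so checkpoints larger than a single
# device can still be loaded for calibration.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-70B-Instruct",  # placeholder large model
    device_map="auto",
    torch_dtype="auto",
)

# The loaded model can then be handed straight to the oneshot API shown
# in the Quick Tour below, i.e. oneshot(model=model, ...)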

✨ Read the announcement blog here! ✨

[Diagram: the LLM Compressor flow]

🚀 What's New!

Big updates have landed in LLM Compressor! To get a more in-depth look, check out the deep-dive.

Some of the exciting new features include:

Supported Formats

  • Activation Quantization: W8A8 (int8 and fp8)
  • Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
  • 2:4 Semi-structured and Unstructured Sparsity
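
Each of these formats is selected via the scheme string on a quantization modifier; a minimal sketch using the same GPTQModifier as the Quick Tour below (the scheme names come from the list above):

from llmcompressor.modifiers.quantization import GPTQModifier

# Weight-only int4 quantization with 16-bit activations; swapping the
# scheme string (e.g. "W8A16" or "W8A8") selects another supported format.
recipe = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])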

Supported Algorithms

  • Simple PTQ
  • GPTQ
  • AWQ
  • SmoothQuant
  • SparseGPT
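
For instance, Simple PTQ paired with dynamic activation quantization needs no calibration data at all; a minimal sketch modeled on the upstream fp8 examples (the model ID and output directory are placeholders):

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Round-to-nearest fp8 quantization of the weights; activation scales are
# computed dynamically per token at inference time, so no calibration
# dataset is required.
recipe = QuantizationModifier(scheme="FP8_DYNAMIC", targets="Linear", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic",
)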

When to Use Which Optimization

Please refer to docs/schemes.md for detailed information about available optimization schemes and their use cases.

Installation

pip install llmcompressor

Get Started

End-to-End Examples

Worked examples of applying quantization with llmcompressor are available in the repository's examples directory.

User Guides

Deep dives into advanced usage of llmcompressor are available in the project documentation.

Quick Tour

Let's quantize TinyLlama with 8-bit weights and activations using the GPTQ and SmoothQuant algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe may be changed to target different quantization algorithms or formats.

Apply Quantization

Quantization is applied by selecting an algorithm and calling the oneshot API.

from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
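
To pass a custom calibration set instead of the built-in one, a sketch following the pattern of the upstream examples (the dataset choice and preprocessing here are illustrative, not required):

from datasets import load_dataset
from transformers import AutoTokenizer

from llmcompressor import oneshot

MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Illustrative calibration source; any chat-style dataset works similarly.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(512))

# Render each conversation to plain text with the model's chat template.
def preprocess(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

ds = ds.map(preprocess)

# Tokenize up front so oneshot receives ready-to-use calibration samples.
def tokenize(sample):
    return tokenizer(sample["text"], max_length=2048, truncation=True, add_special_tokens=False)

ds = ds.map(tokenize, remove_columns=ds.column_names)

oneshot(
    model=MODEL_ID,
    dataset=ds,  # a datasets.Dataset in place of a built-in dataset name
    recipe=recipe,  # e.g. the SmoothQuant + GPTQ recipe defined above
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)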

Inference with vLLM

The checkpoints created by llmcompressor can be loaded and run in vllm:

Install:

pip install vllm

Run:

from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
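
Generation can be tuned with vLLM's SamplingParams; a small sketch (the parameter values are arbitrary):

from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = model.generate(["My name is"], params)
print(outputs[0].outputs[0].text)  # first completion for the first prompt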

Questions / Contribution

  • If you have any questions or requests, open an issue and we will add an example or documentation.
  • We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.

Citation

If you find LLM Compressor useful in your research or projects, please consider citing it:

@software{llmcompressor2024,
    title={{LLM Compressor}},
    author={Red Hat AI and vLLM Project},
    year={2024},
    month={8},
    url={https://github.com/vllm-project/llm-compressor},
}


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llmcompressor-0.6.1a20250711.tar.gz (388.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llmcompressor-0.6.1a20250711-py3-none-any.whl (249.9 kB)

Uploaded Python 3

File details

Details for the file llmcompressor-0.6.1a20250711.tar.gz.

File metadata

  • Download URL: llmcompressor-0.6.1a20250711.tar.gz
  • Upload date:
  • Size: 388.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.12

File hashes

Hashes for llmcompressor-0.6.1a20250711.tar.gz:

  • SHA256: 383e1be3a682f2edb5b2a5f329e78e35ff6ff9ee292d101814fbd10cec81df69
  • MD5: 2fea123aaf8570004651288fd822323f
  • BLAKE2b-256: 14e4b05b21f93b734adcd5b74f1641f7702e49c1c645c08f5fac027a73a86795

See more details on using hashes here.

File details

Details for the file llmcompressor-0.6.1a20250711-py3-none-any.whl.

File hashes

Hashes for llmcompressor-0.6.1a20250711-py3-none-any.whl:

  • SHA256: 11246e349cf027ab29c64978e26d8baa3d8914ba60009e5b971e36904bf0ea7a
  • MD5: 9a6dcc7009f33635e8012b27b08761cc
  • BLAKE2b-256: 91a7bf84607b4be79ecb4d60f1fdf0bca5e7864b824075eb49a8c7e0a2840852

See more details on using hashes here.
