
A library for compressing large language models using the latest techniques and research in the field, covering both training-aware and post-training approaches. The library is designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation.


LLM Compressor

llmcompressor is an easy-to-use library for optimizing models for deployment with vLLM, including:

  • Comprehensive set of quantization algorithms for weight-only and activation quantization
  • Seamless integration with Hugging Face models and repositories
  • safetensors-based file format compatible with vLLM
  • Large model support via accelerate

✨ Read the announcement blog here! ✨

LLM Compressor Flow

🚀 What's New!

Big updates have landed in LLM Compressor! Check out these exciting new features:

  • Axolotl Sparse Finetuning Integration: Easily finetune sparse LLMs through our seamless integration with Axolotl. Learn more here.
  • AutoAWQ Integration: Perform low-bit weight-only quantization efficiently using AutoAWQ, now part of LLM Compressor. Note: This integration should be considered experimental for now. Enhanced support, including for MoE models and improved handling of larger models via layer sequential pipelining, is planned for upcoming releases. See the details.
  • Day 0 Llama 4 Support: Meta utilized LLM Compressor to create the FP8-quantized Llama-4-Maverick-17B-128E, optimized for vLLM inference using compressed-tensors format.

Supported Formats

  • Activation Quantization: W8A8 (int8 and fp8)
  • Mixed Precision: W4A16, W8A16
  • 2:4 Semi-structured and Unstructured Sparsity
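As a rough illustration of what these formats mean for memory, weight storage scales linearly with bit width. The sketch below is back-of-the-envelope arithmetic only: it ignores quantization scales, zero points, activations, and the KV cache, and the parameter count is an approximation.

```python
def weight_gib(num_params: int, bits_per_weight: int) -> float:
    """Approximate weight storage in GiB, ignoring scales and zero points."""
    return num_params * bits_per_weight / 8 / 2**30

params = 1_100_000_000  # roughly TinyLlama-1.1B

for name, bits in [("FP16 baseline", 16), ("W8A8 / W8A16", 8), ("W4A16", 4)]:
    print(f"{name}: ~{weight_gib(params, bits):.2f} GiB")
```

So an int8 scheme roughly halves weight memory versus FP16, and a 4-bit scheme roughly quarters it, before accounting for quantization metadata.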

Supported Algorithms

  • Simple PTQ
  • GPTQ
  • AWQ
  • SmoothQuant
  • SparseGPT

When to Use Which Optimization

Please refer to docs/schemes.md for detailed information about available optimization schemes and their use cases.

Installation

pip install llmcompressor

Get Started

End-to-End Examples

Applying quantization with llmcompressor:

User Guides

Deep dives into advanced usage of llmcompressor:

Quick Tour

Let's quantize TinyLlama with 8-bit weights and activations using the GPTQ and SmoothQuant algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint, and the recipe may be changed to target different quantization algorithms or formats.

Apply Quantization

Quantization is applied by selecting an algorithm and calling the oneshot API.

from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)

Inference with vLLM

The checkpoints created by llmcompressor can be loaded and run in vLLM:

Install:

pip install vllm

Run:

from vllm import LLM
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")

Questions / Contribution

  • If you have any questions or requests, open an issue and we will add an example or documentation.
  • We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.
