
Repository of Intel® Neural Compressor

Project description

Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, and ONNX Runtime)


Architecture   |   Workflow   |   LLMs Recipes   |   Results   |   Documentation


Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, and ONNX Runtime, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides the key features, typical examples, and open collaboration opportunities described below.

What's New

  • [2025/12] NVFP4 quantization experimental support
  • [2025/10] MXFP8 / MXFP4 quantization experimental support
  • [2025/09] FP8 dynamic quantization, including Linear, FusedMoE on Intel Gaudi AI Accelerators
  • [2025/05] FP8 static quantization of DeepSeek V3/R1 model on Intel Gaudi AI Accelerators
  • [2025/03] VLM quantization in transformers-like API on Intel CPU/GPU

Installation

Choose the necessary framework dependencies to install based on your deployment environment.

Install Framework

Install Neural Compressor from PyPI

# Install 2.X API + Framework extension API + PyTorch dependency
pip install neural-compressor[pt]
# Install 2.X API + Framework extension API + TensorFlow dependency
pip install neural-compressor[tf]

Note: Further installation methods can be found in the Installation Guide. Check out our FAQ for more details.
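A quick way to confirm the installation is to import the package and print its version (a minimal check; the exact version string depends on which package variant you installed):

import neural_compressor

print(neural_compressor.__version__)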

Getting Started

After successfully installing these packages, try your first quantization program. The following example code demonstrates FP8 quantization, which is supported by the Intel Gaudi2 AI Accelerator.
To try it on Intel Gaudi2, a Docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the Gaudi Guide.

Run a container with an interactive shell (more info in the Gaudi Guide):

docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.23.0/ubuntu24.04/habanalabs/pytorch-installer-2.9.0:latest

Note: As of Habana software 1.21.0, PT_HPU_LAZY_MODE=0 is the default setting. However, most low-precision functions (such as convert_from_uint4) do not support this setting, so we recommend setting PT_HPU_LAZY_MODE=1 to maintain compatibility.
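The variable can be exported in the shell before launching Python, or, as a minimal sketch, set at the very top of the script before any HPU-related modules are imported (assuming the HPU runtime has not yet been initialized):

import os

# Keep lazy mode enabled for compatibility with low-precision functions.
# This must run before torch / Habana HPU modules are imported.
os.environ["PT_HPU_LAZY_MODE"] = "1"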

Run the example,

from neural_compressor.torch.quantization import (
    FP8Config,
    prepare,
    convert,
)

import torch
import torchvision.models as models

model = models.resnet18()
qconfig = FP8Config(fp8_config="E4M3")
model = prepare(model, qconfig)

# Customer defined calibration. Below is a dummy calibration
model(torch.randn(1, 3, 224, 224).to("hpu"))

model = convert(model)

output = model(torch.randn(1, 3, 224, 224).to("hpu")).to("cpu")
print(output.shape)

See the FP8 quantization documentation for more details.

The following example code demonstrates loading a weight-only quantized large language model on the Intel Gaudi2 AI Accelerator.

import torch

from neural_compressor.torch.quantization import load

model_name = "TheBloke/Llama-2-7B-GPTQ"
model = load(
    model_name_or_path=model_name,
    format="huggingface",
    device="hpu",
    torch_dtype=torch.bfloat16,
)

Note: Intel Neural Compressor converts the model format from auto-gptq to the hpu format on the first load and saves hpu_model.safetensors to the local cache directory for subsequent loads, so the first load may take a while.
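Once loaded, the model can be used like a regular Hugging Face causal language model. Below is a minimal generation sketch, assuming the standard transformers tokenizer for the same checkpoint:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("Once upon a time,", return_tensors="pt").to("hpu")

# Generate a short continuation on the Gaudi device
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))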

Documentation

Overview
  • Architecture, Workflow, APIs, LLMs Recipes, Examples

PyTorch Extension APIs
  • Overview
  • Dynamic Quantization, Static Quantization, Smooth Quantization
  • Weight-Only Quantization, FP8 Quantization, Mixed Precision
  • MX Quantization, NVFP4 Quantization

Tensorflow Extension APIs
  • Overview, Static Quantization, Smooth Quantization

Transformers-like APIs
  • Overview

Other Modules
  • Auto Tune (see the example below)

Note: Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently available only in the 2.X API.
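For the Auto Tune module listed above, accuracy-driven tuning with the 3.X PyTorch API can be sketched as follows (a minimal sketch: the eval_fn body and the RTN search space are illustrative assumptions, so check the Auto Tune documentation for the exact options):

from neural_compressor.torch.quantization import RTNConfig, TuningConfig, autotune

def eval_fn(model) -> float:
    # Hypothetical helper: return a validation metric for the candidate model
    return evaluate_on_validation_set(model)

# Try two weight-only RTN configurations and keep the best-scoring model
tune_config = TuningConfig(config_set=[RTNConfig(bits=4), RTNConfig(bits=8)])
best_model = autotune(model=model, tune_config=tune_config, eval_fn=eval_fn)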

Selected Publications/Events

Note: View Full Publication List.

Additional Content

Communication

  • GitHub Issues: mainly for bug reports, new feature requests, and questions.
  • Email: feel free to reach out by email with interesting research ideas on model compression techniques for collaboration.
  • Discord Channel: join the Discord channel for more flexible technical discussion.
  • WeChat group: scan the QR code to join the technical discussion.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_compressor_tf-3.7.1.tar.gz (244.4 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neural_compressor_tf-3.7.1-py3-none-any.whl (356.5 kB)

Uploaded Python 3

File details

Details for the file neural_compressor_tf-3.7.1.tar.gz.

File metadata

  • Download URL: neural_compressor_tf-3.7.1.tar.gz
  • Upload date:
  • Size: 244.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.18

File hashes

Hashes for neural_compressor_tf-3.7.1.tar.gz
Algorithm Hash digest
SHA256 2630d3c17b76b6f88aa7a3d86d5dc33b2ccd42db618a35c5e5d9115926548995
MD5 a25d6390acfad5baeb2994ea8a31cc44
BLAKE2b-256 49f5da2a5e035aeff77db62e290c40e3256da583613f37cb04bcd9cec83fd67e

See more details on using hashes here.

File details

Details for the file neural_compressor_tf-3.7.1-py3-none-any.whl.

File metadata

File hashes

Hashes for neural_compressor_tf-3.7.1-py3-none-any.whl
Algorithm Hash digest
SHA256 0642a7ac1b119c7d8f5949805f472563c2f4da4c1da3b2096b613fe0808abf62
MD5 874cde6d36489c7878dcf13b9c762c47
BLAKE2b-256 5b528bc4c7decb8457724112e38fdcfd08eef9f5e6ff4877b128b42c5e51a233

See more details on using hashes here.
