Repository of Intel® Neural Compressor

Project description

Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on mainstream deep learning frameworks (PyTorch, TensorFlow, and JAX)

Architecture   |   Workflow   |   Documentation


Intel® Neural Compressor aims to provide popular model compression techniques such as Static Quantization, Dynamic Quantization, SmoothQuant, Weight-Only Quantization, Quantization-Aware Training, Mixed Precision, etc.
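
For example, weight-only quantization follows the same prepare → convert flow shown in Getting Started below. A minimal sketch, assuming the RTNConfig (round-to-nearest) config from the 3.x PyTorch extension API:

import torch

from neural_compressor.torch.quantization import RTNConfig, prepare, convert

# Toy FP32 model with Linear layers, the usual weight-only quantization target
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8))

quant_config = RTNConfig()  # assumed default round-to-nearest weight-only settings
model = prepare(model, quant_config)
model = convert(model)  # RTN needs no calibration pass between prepare and convert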

What's New

  • [2026/03] FP8 quantization support for Keras/JAX (experimental)
  • [2026/03] FP8 KV cache/Attention static quantization with AutoRound (experimental)
  • [2025/12] NVFP4 quantization experimental support
  • [2025/10] MXFP8 / MXFP4 quantization experimental support
  • [2025/09] FP8 dynamic quantization, including Linear, FusedMoE on Intel Gaudi AI Accelerators
  • [2025/05] FP8 static quantization of DeepSeek V3/R1 model on Intel Gaudi AI Accelerators
  • [2025/03] VLM quantization in transformers-like API on Intel CPU/GPU

Installation

Choose the framework dependencies to install based on your deployment environment.

Install Framework for PyTorch Backend (on-demand)

Intel Neural Compressor supports PyTorch on CPU, GPU, and HPU. Please install the PyTorch build that matches your hardware environment.
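
For example, a CPU-only PyTorch wheel can be installed from the official wheel index (index URLs follow the upstream PyTorch installation matrix and may change; for Intel Gaudi, PyTorch ships with the Gaudi software stack shown in Getting Started below):

# CPU-only PyTorch build
pip install torch --index-url https://download.pytorch.org/whl/cpu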

Install Neural Compressor from PyPI

# Framework extension API + PyTorch dependency
pip install neural-compressor-pt
# Framework extension API + TensorFlow dependency
pip install neural-compressor-tf
# Framework extension API + JAX dependency, available since v3.8
pip install neural-compressor-jax

Note: Further installation methods can be found in the Installation Guide. Check out our FAQ for more details.

Getting Started

After successfully installing these packages, try your first quantization program. The following example demonstrates FP8 quantization, which is supported by the Intel Gaudi2 AI Accelerator.
To try it on Intel Gaudi2, a docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the Gaudi Guide.

Run a container with an interactive shell (more info):

docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.23.0/ubuntu24.04/habanalabs/pytorch-installer-2.9.0:latest

Note: As of Habana software 1.21.0, PT_HPU_LAZY_MODE=0 is the default setting. However, most low-precision functions (such as convert_from_uint4) do not support this setting, so we recommend setting PT_HPU_LAZY_MODE=1 to maintain compatibility.
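
For example, export the variable before launching your workload (the script name here is a placeholder):

export PT_HPU_LAZY_MODE=1
python run_quantization.py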

Run the example:

from neural_compressor.torch.quantization import (
    FP8Config,
    prepare,
    convert,
)

import torch
import torchvision.models as models

# Create an FP32 model and prepare it for FP8 (E4M3) calibration
model = models.resnet18()
qconfig = FP8Config(fp8_config="E4M3")
model = prepare(model, qconfig)

# User-defined calibration; below is a dummy calibration pass on HPU
model(torch.randn(1, 3, 224, 224).to("hpu"))

# Convert the calibrated model to FP8
model = convert(model)

output = model(torch.randn(1, 3, 224, 224).to("hpu")).to("cpu")
print(output.shape)

See the FP8 Quantization documentation for more details.

The following example demonstrates loading a weight-only quantized large language model on the Intel Gaudi2 AI Accelerator.

import torch

from neural_compressor.torch.quantization import load

model_name = "TheBloke/Llama-2-7B-GPTQ"
model = load(
    model_name_or_path=model_name,
    format="huggingface",
    device="hpu",
    torch_dtype=torch.bfloat16,
)

Note: Intel Neural Compressor converts the model from the auto-gptq format to an HPU-compatible format on the first load and saves hpu_model.safetensors to the local cache directory for subsequent loads, so the first load may take a while.
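
Once loaded, the model can be driven like a regular Hugging Face causal LM. A minimal generation sketch continuing from the load call above (the tokenizer and generate usage are standard transformers API, not specific to Neural Compressor; the prompt is arbitrary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("Intel Neural Compressor is", return_tensors="pt").to("hpu")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))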

Documentation

Overview
  • Architecture, Workflow, APIs, Examples
PyTorch Extension APIs
  • Overview, Dynamic Quantization, Static Quantization, Smooth Quantization, Weight-Only Quantization, FP8 Quantization, Mixed Precision, MX Quantization, NVFP4 Quantization
TensorFlow Extension APIs
  • Overview, Static Quantization, Smooth Quantization
Transformers-like APIs
  • Overview
JAX Extension APIs
  • Overview
Other Modules
  • Auto Tune

Selected Publications/Events

Note: View Full Publication List.

Additional Content

Communication

  • GitHub Issues: mainly for bug reports, new feature requests, question asking, etc.
  • Email: you are welcome to share research ideas on model compression techniques by email for collaboration.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_compressor_tf-3.8.tar.gz (242.1 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neural_compressor_tf-3.8-py3-none-any.whl (354.9 kB)

File details

Details for the file neural_compressor_tf-3.8.tar.gz.

File metadata

  • Download URL: neural_compressor_tf-3.8.tar.gz
  • Upload date:
  • Size: 242.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.18

File hashes

Hashes for neural_compressor_tf-3.8.tar.gz

  • SHA256: 4f8d1e2760fec701f2d8c306c109b8bb43db557e0a3e0ba4dd544de5ed4298b9
  • MD5: 3bb7c71a7147c52c46fdd7d3bc158794
  • BLAKE2b-256: f8f7d08075083e4a2ae95f1c64087117b73842a057dc23a60f0486d4c15a8702

See more details on using hashes here.

File details

Details for the file neural_compressor_tf-3.8-py3-none-any.whl.

File hashes

Hashes for neural_compressor_tf-3.8-py3-none-any.whl

  • SHA256: d14e3c3524541efffd89bcc07de17f7b67240520881b1b93622cce5e253b531d
  • MD5: cfc880358ea36d81f4ab1975ad25310e
  • BLAKE2b-256: af6d744d17e17d8fccc5401f8a9e19bb0c2571d2eb0426dbf9a1c706ba11bbab

See more details on using hashes here.
