
Repository of Intel® Neural Compressor

Project description

Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)


Architecture   |   Workflow   |   LLMs Recipes   |   Results   |   Documentation


Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as on Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides key features, typical examples, and open collaborations.

What's New

Installation

Install from PyPI

pip install neural-compressor

Note: More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.
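
To confirm the install succeeded, a quick sanity check (the package exposes a version string):

# The import should succeed and report the installed version.
import neural_compressor
print(neural_compressor.__version__)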

Getting Started

Setting up the environment:

pip install "neural-compressor>=2.3" "transformers>=4.34.0" torch torchvision

After successfully installing these packages, try your first quantization program.

Weight-Only Quantization (LLMs)

The following example code demonstrates weight-only quantization on LLMs. It supports Intel CPU, Intel Gaudi2 AI Accelerator, and Nvidia GPU; the best available device is selected automatically.

To try it on Intel Gaudi2, a docker image with the Gaudi Software Stack is recommended; use the following commands to set up the environment. More details can be found in the Gaudi Guide.

docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.14.0/ubuntu22.04/habanalabs/pytorch-installer-2.1.1:latest

# Check the container ID
docker ps

# Login into container
docker exec -it <container_id> bash

# Install optimum-habana
pip install --upgrade-strategy eager optimum[habana]

# Install INC/auto_round
pip install neural-compressor auto_round
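
Once inside the container, a short sanity check, assuming the habana_frameworks PyTorch bridge shipped in the image above, confirms the Gaudi (HPU) devices are visible:

import habana_frameworks.torch.hpu as hthpu  # Gaudi PyTorch bridge from the docker image

# Both calls come from the Gaudi software stack.
print(hthpu.is_available())   # True when an HPU device is usable
print(hthpu.device_count())   # number of visible Gaudi devices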

Run the example:

from transformers import AutoModel, AutoTokenizer

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit
from neural_compressor.adaptor.torch_utils.auto_round import get_dataloader

model_name = "EleutherAI/gpt-neo-125m"
float_model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Calibration dataloader built from the tokenizer; AutoRound uses it to tune rounding.
dataloader = get_dataloader(tokenizer, seqlen=2048)

woq_conf = PostTrainingQuantConfig(
    approach="weight_only",
    op_type_dict={
        ".*": {  # match all ops
            "weight": {
                "dtype": "int",
                "bits": 4,  # INT4 weight-only quantization
                "algorithm": "AUTOROUND",
            },
        }
    },
)
quantized_model = fit(model=float_model, conf=woq_conf, calib_dataloader=dataloader)
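
The returned object wraps the compressed model. To keep it for later use, a minimal sketch (the save helper comes from INC's model wrapper; the path is only an example):

# Persist the quantized model: weights plus the quantization configuration.
quantized_model.save("./saved_results")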

Note:

To try INT4 model inference, please use Intel Extension for Transformers directly; it leverages Intel Neural Compressor for model quantization.
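
For reference, a minimal INT4 inference sketch with Intel Extension for Transformers; the load_in_4bit flag follows its Transformers-style API, so treat the exact arguments as assumptions to check against your installed version:

from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# load_in_4bit triggers weight-only INT4 quantization at load time.
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("Once upon a time", return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))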

Static Quantization (Non-LLMs)

from torchvision import models

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

float_model = models.resnet18()
# Dummy dataset with the model's expected input shape, used only for calibration.
dataset = Datasets("pytorch")["dummy"](shape=(1, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)
static_quant_conf = PostTrainingQuantConfig()  # defaults to post-training static quantization
quantized_model = fit(model=float_model, conf=static_quant_conf, calib_dataloader=calib_dataloader)
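
A quick smoke test of the result, as a sketch that assumes the returned wrapper exposes the underlying torch module via its model attribute:

import torch

# Run a dummy batch with the same shape used for calibration.
example_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = quantized_model.model(example_input)
print(logits.shape)  # e.g. torch.Size([1, 1000]) for resnet18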

Documentation

  • Overview: Architecture, Workflow, APIs, LLMs Recipes, Examples
  • Python-based APIs: Quantization, Advanced Mixed Precision, Pruning (Sparsity), Distillation, Orchestration, Benchmarking, Distributed Compression, Model Export
  • Neural Coder (Zero-code Optimization): Launcher, JupyterLab Extension, Visual Studio Code Extension, Supported Matrix
  • Advanced Topics: Adaptor, Strategy, Distillation for Quantization, SmoothQuant, Weight-Only Quantization (INT8/INT4/FP4/NF4), FP8 Quantization, Layer-Wise Quantization
  • Innovations for Productivity: Neural Insights, Neural Solution

Note: More documentation can be found in the User Guide.
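
As a taste of the Python-based APIs listed above, here is a minimal mixed-precision conversion sketch; the API names follow the INC 2.x documentation, so verify them against your installed version:

from torchvision import models

from neural_compressor import mix_precision
from neural_compressor.config import MixedPrecisionConfig

float_model = models.resnet18()
# Convert eligible ops to BF16 where the backend and hardware support it.
conf = MixedPrecisionConfig()
converted_model = mix_precision.fit(float_model, conf=conf)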

Selected Publications/Events

Note: View Full Publication List.

Additional Content

Communication

  • GitHub Issues: primarily for bug reports, new feature requests, and questions.
  • Email: feel free to raise interesting research ideas on model compression techniques by email for collaboration.
  • Discord Channel: join the Discord channel for more flexible technical discussion.
  • WeChat group: scan the QR code to join the technical discussion.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_compressor_3x_ort-2.5.1.tar.gz (95.4 kB)

Uploaded Source

Built Distribution

neural_compressor_3x_ort-2.5.1-py3-none-any.whl (112.9 kB)

Uploaded Python 3

File details

Details for the file neural_compressor_3x_ort-2.5.1.tar.gz.

File metadata

  • Download URL: neural_compressor_3x_ort-2.5.1.tar.gz
  • Upload date:
  • Size: 95.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3

File hashes

Hashes for neural_compressor_3x_ort-2.5.1.tar.gz

  • SHA256: fd920a76e0958ccf09ad113f260884e4aaa64a34fdb0227d9870cd035cbf9c63
  • MD5: c374549e7e4edc3a68613b9c0ba5aa5f
  • BLAKE2b-256: a069e1fe87ef314421efa71c66817ef6006c81fef98e4cd147363b772886bbb4

See more details on using hashes here.
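
To check a downloaded file against the published digests, a minimal verification using Python's standard hashlib (assuming the sdist above sits in the current directory):

import hashlib

# SHA256 digest published in the list above.
expected = "fd920a76e0958ccf09ad113f260884e4aaa64a34fdb0227d9870cd035cbf9c63"
with open("neural_compressor_3x_ort-2.5.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "hash mismatch")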

File details

Details for the file neural_compressor_3x_ort-2.5.1-py3-none-any.whl.

File metadata

  • Download URL: neural_compressor_3x_ort-2.5.1-py3-none-any.whl
  • Upload date:
  • Size: 112.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3

File hashes

Hashes for neural_compressor_3x_ort-2.5.1-py3-none-any.whl

  • SHA256: 31c5d2fb89df12f0d19b9df8ba4343f49210b827d005ff568e367bf3d4c76102
  • MD5: 54a947297146cf3b72fd9da3542eec34
  • BLAKE2b-256: 4ceab614866cc8ad8b837775eea1977843c5b9e6d86092ad9db868d600c3f558

See more details on using hashes here.
