
Repository of Intel® Neural Compressor

Project description

Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)


Architecture   |   Workflow   |   Results   |   Examples   |   Documentation


Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides key features, typical examples, and open collaboration opportunities, as described below.
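The core idea behind the most common of these techniques, post-training quantization, fits in a few lines: map float values onto 8-bit integers with a scale and zero point, then map them back. The sketch below is a generic illustration of affine INT8 quantization, not the library's implementation.

```python
# Generic affine INT8 quantization sketch (not the Intel Neural
# Compressor internals): floats are mapped to [0, 255] via a scale
# and zero point derived from the observed value range.

def quantize_int8(values):
    """Affine-quantize a list of floats to unsigned 8-bit integers."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0  # guard against a constant tensor
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Map the integers back to approximate float values."""
    return [(x - zero_point) * scale for x in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
# The round trip is lossy but close: the error stays on the order of
# scale / 2 for values inside the observed range.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Storing 8-bit integers instead of 32-bit floats cuts model size roughly 4x, which is why calibration (choosing a good scale and zero point) matters so much for accuracy.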

Installation

Install from pypi

pip install neural-compressor

More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.

Getting Started

Quantization with Python API

# Install Intel Neural Compressor and TensorFlow
pip install neural-compressor
pip install tensorflow
# Prepare an FP32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Build a dummy calibration dataloader matching the model's input shape
dataset = Datasets("tensorflow")["dummy"](shape=(1, 224, 224, 3))
dataloader = DataLoader(framework="tensorflow", dataset=dataset)

# Run post-training quantization on the FP32 frozen graph
q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",
    conf=PostTrainingQuantConfig(),
    calib_dataloader=dataloader,
)
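The `calib_dataloader` above feeds representative inputs so the tool can observe the ranges that tensors actually take at runtime. Conceptually, calibration reduces to tracking a running min/max over the calibration batches and deriving a scale from it. The helper names below are hypothetical, framework-agnostic illustrations, not the library's internals.

```python
# Conceptual sketch of calibration: run representative batches,
# record the observed min/max, and derive a symmetric INT8 scale.
# Hypothetical helper names; not the Intel Neural Compressor API.

def collect_range(batches):
    """Track the running min/max over all calibration batches."""
    lo, hi = float("inf"), float("-inf")
    for batch in batches:
        lo = min(lo, min(batch))
        hi = max(hi, max(batch))
    return lo, hi

def scale_for_int8(lo, hi):
    """Symmetric INT8 scale covering the observed range."""
    return max(abs(lo), abs(hi)) / 127 or 1.0

calib_batches = [[0.1, -0.4, 2.0], [1.5, -0.9, 0.3]]
lo, hi = collect_range(calib_batches)
scale = scale_for_int8(lo, hi)  # covers [-2.0, 2.0]
```

This is why the dummy dataset above is enough to make the example run, but real calibration data is needed for good accuracy: the observed ranges drive the quantization parameters.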

Documentation

Overview
  • Overview: Architecture, Workflow, Examples, APIs
  • Python-based APIs: Quantization, Advanced Mixed Precision, Pruning (Sparsity), Distillation, Orchestration, Benchmarking, Distributed Compression, Model Export
  • Neural Coder (Zero-code Optimization): Launcher, JupyterLab Extension, Visual Studio Code Extension, Supported Matrix
  • Advanced Topics: Adaptor, Strategy, Distillation for Quantization, SmoothQuant, Weight-Only Quantization (INT8/INT4/FP4/NF4), FP8 Quantization
  • Innovations for Productivity: Neural Insights, Neural Solution

More documentation can be found in the User Guide.

Selected Publications/Events

View Full Publication List.

Additional Content

Communication

  • GitHub Issues: mainly for bug reports, new feature requests, questions, etc.
  • Email: you are welcome to raise interesting research ideas on model compression techniques by email for collaboration.
  • WeChat group: scan the QR code to join the technical discussion.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_compressor-2.3.1.tar.gz (998.1 kB view details)

Uploaded Source

Built Distribution

neural_compressor-2.3.1-py3-none-any.whl (1.4 MB view details)

Uploaded Python 3

File details

Details for the file neural_compressor-2.3.1.tar.gz.

File metadata

  • Download URL: neural_compressor-2.3.1.tar.gz
  • Upload date:
  • Size: 998.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3

File hashes

Hashes for neural_compressor-2.3.1.tar.gz
Algorithm Hash digest
SHA256 bbb4447cba8d82fa83724712e0b53da3f0171b47861976203c366783ee2c8f80
MD5 f3da63e7796cdd92a5934f2767d92f33
BLAKE2b-256 d963d5beaefb623747226edf4dc9a7f35989f6ab245a1a0757f002ae5efe1e4a

See more details on using hashes here.
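Before installing from a manually downloaded archive, the SHA256 digest listed above can be checked locally with nothing but the standard library. A short sketch (the filename and digest match the source distribution in the table above):

```python
# Verify a downloaded file against a published SHA256 digest using
# only the standard library. Streaming the file in chunks keeps
# memory use flat even for large archives.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "bbb4447cba8d82fa83724712e0b53da3f0171b47861976203c366783ee2c8f80"
# Uncomment after downloading the archive next to this script:
# if sha256_of("neural_compressor-2.3.1.tar.gz") != expected:
#     raise SystemExit("hash mismatch: refuse to install")
```

pip can also enforce this automatically in a requirements file via its hash-checking mode, which fails the install if any downloaded file's digest differs from the pinned one.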

File details

Details for the file neural_compressor-2.3.1-py3-none-any.whl.

File metadata

  • Download URL: neural_compressor-2.3.1-py3-none-any.whl
  • Upload date:
  • Size: 1.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3

File hashes

Hashes for neural_compressor-2.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 d94ba25aad77289c24d475a1967889832a9712ba685afa03093a06d93b84d1d0
MD5 32a133687c8cda7ce64598061114b83c
BLAKE2b-256 971cc6b3917ae047402f4749a5e2ec8c398687120fb7100def968f403f74c873

See more details on using hashes here.
