
Repository of Intel® Neural Compressor

Project description

Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)


Architecture   |   Workflow   |   Results   |   Examples   |   Documentation


Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides key features, typical examples, and opportunities for open collaboration.
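
Quantization, the first of these techniques, shrinks models and speeds up inference by representing FP32 tensors with 8-bit integers. The following is a minimal, framework-free sketch of the affine (asymmetric) int8 scheme that underlies post-training quantization in general; it illustrates the idea only and is not Intel Neural Compressor's implementation:

```python
import numpy as np

def quantize_int8(x):
    # Affine (asymmetric) int8 quantization:
    #   q = clip(round(x / scale) + zero_point, -128, 127)
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    # Map int8 values back to approximate float32 values
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zp)
# Per-element round-trip error is at most about scale / 2
```

Real toolchains add calibration (choosing scale and zero point from representative data), per-channel scales, and accuracy-aware tuning on top of this basic mapping.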

Installation

Install from PyPI

pip install neural-compressor

More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.

Getting Started

Quantization with Python API

# Install Intel Neural Compressor and TensorFlow (shell)
pip install neural-compressor
pip install tensorflow

# Download an FP32 model to quantize (shell)
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb

# Quantize the model with the Python API
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Build a dummy dataset/dataloader for calibration and evaluation
dataset = Datasets('tensorflow')['dummy'](shape=(1, 224, 224, 3))
dataloader = DataLoader(framework='tensorflow', dataset=dataset)

q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",
    conf=PostTrainingQuantConfig(),
    calib_dataloader=dataloader,
    eval_dataloader=dataloader,
)

More quick-start samples can be found on the Getting Started page.

Documentation

Overview
  • Architecture, Workflow, APIs, GUI, Notebook, Examples, Intel oneAPI AI Analytics Toolkit
Python-based APIs
  • Quantization, Advanced Mixed Precision, Pruning (Sparsity), Distillation, Orchestration, Benchmarking, Distributed Compression, Model Export
Neural Coder (Zero-code Optimization)
  • Launcher, JupyterLab Extension, Visual Studio Code Extension, Supported Matrix
Advanced Topics
  • Adaptor, Strategy, Distillation for Quantization, SmoothQuant

Selected Publications/Events

View our Full Publication List.

Additional Content

Research Collaborations

You are welcome to raise interesting research ideas on model compression techniques and to reach out to us (inc.maintainers@intel.com). We look forward to collaborating with you on Intel Neural Compressor!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_compressor_full-2.1.1.tar.gz (6.8 MB)


Built Distribution

neural_compressor_full-2.1.1-py3-none-any.whl (7.4 MB)


File details

Details for the file neural_compressor_full-2.1.1.tar.gz.

File metadata

  • Download URL: neural_compressor_full-2.1.1.tar.gz
  • Upload date:
  • Size: 6.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3

File hashes

Hashes for neural_compressor_full-2.1.1.tar.gz
Algorithm Hash digest
SHA256 26198af02945577a67b4a7864df6ac050325857676185139088c61c57abe19cb
MD5 85795d536eaa79e0cad8b348f56d4f77
BLAKE2b-256 d3fe0373342d96d2dab64baa1deee3d62d2e7c38b0989e47e3acc14e52d73b9d

See more details on using hashes here.
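
The published digests can be used to verify a download before installing. Below is a minimal sketch using Python's standard hashlib; the temporary file is a stand-in for the actual downloaded archive, so substitute the real path and compare the result against the SHA256 value listed above:

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=8192):
    # Stream the file in chunks so large archives need not fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a real download such as neural_compressor_full-2.1.1.tar.gz
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name
digest = sha256_of_file(path)
os.remove(path)
```

In practice, pip can also enforce digests automatically when a requirements file pins them with `--hash=sha256:...` entries (hash-checking mode).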

File details

Details for the file neural_compressor_full-2.1.1-py3-none-any.whl.

File metadata

  • Download URL: neural_compressor_full-2.1.1-py3-none-any.whl
  • Upload date:
  • Size: 7.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3

File hashes

Hashes for neural_compressor_full-2.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 c42065400236b83824062b6f45aae2bd75ff0346e1a889f671c05478dd8a0538
MD5 fa6b45516d73b6a51ce18f668833e8ea
BLAKE2b-256 fbee13af9258d4c46cb166fbcaf7042b7e360de69f3f1a097874c95068af356b

See more details on using hashes here.
