Repository of Intel® Neural Compressor
Project description
Intel® Neural Compressor
An open-source Python library supporting popular network compression technologies on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)
Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library running on Intel CPUs and GPUs that delivers unified interfaces across multiple deep learning frameworks for popular network compression technologies such as quantization, pruning, and knowledge distillation. The tool supports automatic accuracy-driven tuning strategies to help users quickly find the best quantized model. It also implements different weight pruning algorithms to generate pruned models with a predefined sparsity goal, and it supports knowledge distillation to distill knowledge from a teacher model to a student model. Intel® Neural Compressor has been one of the critical AI software components in the Intel® oneAPI AI Analytics Toolkit.
Note: GPU support is under development.
Visit the Intel® Neural Compressor online document website at: https://intel.github.io/neural-compressor.
Installation
Prerequisites
- Python version: 3.7, 3.8, 3.9, or 3.10
Install on Linux
# install stable version from pip
pip install neural-compressor
# install nightly version from pip
pip install -i https://test.pypi.org/simple/ neural-compressor
# install stable version from conda
conda install neural-compressor -c conda-forge -c intel
More installation methods can be found at Installation Guide.
Note: If you run into installation issues, please check the FAQ.
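After installation, a quick sanity check is to import the package and print its version. This is a minimal sketch; it assumes the package exposes a __version__ attribute, which is common but not guaranteed:
# Verify the installation (assumes neural_compressor exposes __version__)
import neural_compressor
print(neural_compressor.__version__)  # expected to print 1.12 for this release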
Getting Started
- Quantization with Python API
# A TensorFlow Example
pip install tensorflow
# Prepare fp32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
import tensorflow as tf
from neural_compressor.experimental import Quantization, common
tf.compat.v1.disable_eager_execution()
quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
# Calibrate with a dummy dataset matching the model's NHWC input shape
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()
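The example above calibrates with dummy data only. To use the accuracy-driven tuning mentioned in the introduction, an evaluation function can be attached so the tuner can compare candidate models against an accuracy target. A minimal sketch, assuming the experimental Quantization object accepts an eval_func attribute and that evaluate_accuracy is a hypothetical user-defined helper returning a scalar metric:
# Hedged sketch: provide accuracy feedback for the tuning loop.
def eval_func(model):
    # evaluate_accuracy is a hypothetical user-defined helper that runs the
    # model on a validation set and returns a scalar accuracy.
    return evaluate_accuracy(model)

quantizer.eval_func = eval_func
q_model = quantizer.fit()          # returns the best quantized model found
q_model.save('./quantized_model')  # assumption: the returned model supports save()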
- Quantization with GUI
# An ONNX Example
pip install onnx==1.9.0 onnxruntime==1.10.0 onnxruntime-extensions
# Prepare fp32 model
wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx
# Start GUI
inc_bench
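The downloaded ONNX model can also be quantized through the Python API instead of the GUI. Below is a minimal sketch mirroring the TensorFlow example above; it assumes the same experimental API handles ONNX models and that a dummy input of shape (1, 3, 224, 224) matches resnet50-v1-12:
from neural_compressor.experimental import Quantization, common

# Hedged sketch: quantize the ONNX model with a dummy calibration dataset.
quantizer = Quantization()
quantizer.model = './resnet50-v1-12.onnx'
dataset = quantizer.dataset('dummy', shape=(1, 3, 224, 224))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()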
System Requirements
Intel® Neural Compressor supports systems based on Intel 64 architecture or compatible processors, specially optimized for the following CPUs:
- Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, and Ice Lake)
- Future Intel Xeon Scalable processor (code name Sapphire Rapids)
Validated Software Environment
- OS version: CentOS 8.4, Ubuntu 20.04
- Python version: 3.7, 3.8, 3.9, 3.10
Framework | TensorFlow | Intel TensorFlow | PyTorch | IPEX | ONNX Runtime | MXNet
---|---|---|---|---|---|---
Version | 2.9.1, 2.8.2, 2.7.3 | 2.8.0, 2.7.0, 1.15.0UP3 | 1.11.0+cpu, 1.10.0+cpu, 1.9.0+cpu | 1.11.0, 1.10.0, 1.9.0 | 1.10.0, 1.9.0, 1.8.0 | 1.8.0, 1.7.0, 1.6.0
Note:
1. Starting from official TensorFlow 2.6.0, oneDNN is included in the binary by default. Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable the oneDNN optimizations.
2. Starting from official TensorFlow 2.9.0, oneDNN optimizations are enabled by default on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, and AMX; the environment variable no longer needs to be set.
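For official TensorFlow builds between 2.6.0 and 2.9.0, the variable from note 1 can be set before TensorFlow is imported, for example from Python (a minimal sketch):
import os
# Enable oneDNN optimizations for official TensorFlow 2.6.0-2.8.x builds;
# this must be set before TensorFlow is imported.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"
import tensorflow as tf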
Validated Models
Intel® Neural Compressor has validated 420+ examples with a geometric-mean performance speedup of 2.2x, and up to 4.2x, on VNNI-enabled hardware while minimizing accuracy loss. More details on validated models are available here.
Documentation
Selected Publications
- Intel® Deep Learning Boost - Boost Network Security AI Inference Performance in Google Cloud Platform (GCP) (Apr 2022)
- Intel® Neural Compressor joined PyTorch ecosystem tool (Apr 2022)
- New instructions in the Intel® Xeon® Scalable processors combined with optimized software frameworks enable real-time AI within network workloads (Feb 2022)
- Quantizing ONNX Models using Intel® Neural Compressor (Feb 2022)
- Quantize AI Model by Intel® oneAPI AI Analytics Toolkit on Alibaba Cloud (Feb 2022)
View the full publication list.
Additional Content
- Release Information
- Contribution Guidelines
- Legal Information
- Security Policy
- Intel® Neural Compressor Website
Hiring
We are hiring. Please send your resume to inc.maintainers@intel.com if you are interested in model compression techniques.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distributions
File details
Details for the file neural_compressor-1.12.tar.gz.
File metadata
- Download URL: neural_compressor-1.12.tar.gz
- Upload date:
- Size: 1.9 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6c91661330ba75e4fc05b97735ab653f80a357810ae578ded6fe34f5081a815a
MD5 | 7512862ec708c4c6379defd12e2a794a
BLAKE2b-256 | 34676b8a00ec3908de97d684f7e8ccd6a4648b84728e4e78ae9ae063fe0fa099
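The digests in the table above can be used to check a downloaded file's integrity. A minimal sketch using Python's standard hashlib module, with the source distribution filename listed on this page:
import hashlib

# Compute the SHA256 digest of the downloaded sdist and compare it with the
# value in the table above.
with open("neural_compressor-1.12.tar.gz", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())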
File details
Details for the file neural_compressor-1.12-cp310-cp310-win_amd64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp310-cp310-win_amd64.whl
- Upload date:
- Size: 2.1 MB
- Tags: CPython 3.10, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2739c6f6138e3c11f14f99c380427e6245437e8a6f0291ec950f00b3ed6ad244
MD5 | 8a4dbc18404de84b783f952c00660c6a
BLAKE2b-256 | dd5b19a6b51f67ab88188e9a993230f2044e7e58081a56a92b877b8f368d1d8c
File details
Details for the file neural_compressor-1.12-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 33.2 MB
- Tags: CPython 3.10, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | adf10c8b3cd148f6f871c8bea1921b87c275b22ff0409a28db694bafc638e5ca
MD5 | 41f6dfab97320c0eac1469e22763ca85
BLAKE2b-256 | 764524648f359ce8ec84dd215f21bb38b59d19bc5d44fbaa48efa4a21e8b3c5b
File details
Details for the file neural_compressor-1.12-cp39-cp39-win_amd64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp39-cp39-win_amd64.whl
- Upload date:
- Size: 2.1 MB
- Tags: CPython 3.9, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 61124feadb0cf2d5781528a63d0a63e062f4bc61fa49c8f69d676adfddcb797b
MD5 | e5434ffa54a1a1836627d3448ad26d93
BLAKE2b-256 | 7423d89b63d298488d6715beb1c8ee964ec1bbdf76df137dfc89e2dbdc2de1e8
File details
Details for the file neural_compressor-1.12-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 33.5 MB
- Tags: CPython 3.9, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 49e26f9e87913915748141181800d8a75dd1a14c427608bd6ec54d0afb0791a1
MD5 | a54563fc01e2b4fa6f5fd26b4c12991a
BLAKE2b-256 | 6a98ac8e54a0750b972477bb5010eb86649f2db20b325ac89faaff494dda919c
File details
Details for the file neural_compressor-1.12-cp38-cp38-win_amd64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp38-cp38-win_amd64.whl
- Upload date:
- Size: 2.1 MB
- Tags: CPython 3.8, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | daa7ee82ca32aa2f3c783d9aaaba5519cabbfbcdcc538b3b920ffeaf3077f558
MD5 | a9dbf34342b875a987920b402c0e781b
BLAKE2b-256 | 89d5ad4c3d8dff5b0c6c815155129c216bd509b89024b143de9671192db60c82
File details
Details for the file neural_compressor-1.12-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 33.2 MB
- Tags: CPython 3.8, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 50067a0785e760479dba4259a3865666f1bd5031db2f16e77bc6bfe2c2212864
MD5 | 16d2564db2e48757691c8c0015216f9d
BLAKE2b-256 | d2155fc1aac0fd08f57596a907d727eb140540c6a6234c1b8eb5357f7d3e871a
File details
Details for the file neural_compressor-1.12-cp37-cp37m-win_amd64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp37-cp37m-win_amd64.whl
- Upload date:
- Size: 2.1 MB
- Tags: CPython 3.7m, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4237eb64c7a1e343d9f3cfef9948dcf54406d69b6220d18f9b59f01be87c2e31
MD5 | d2a3b183accbd4466f1d87554e330014
BLAKE2b-256 | 162c8e0d78449a564c8d47e4f01bf5f67935242723cf6300de34ed6a4832ae51
File details
Details for the file neural_compressor-1.12-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: neural_compressor-1.12-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 33.5 MB
- Tags: CPython 3.7m, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 75f1296e519e6840630fdb823130861ed18d3965041ef84488b4c5d087fde772
MD5 | 8bd1fb9269570c462a9c6a5f9d59feab
BLAKE2b-256 | be0de9257b932cee7977f77b28d3d92ecef1808f602c0e5dba47db4d32cbae04