Repository of Intel® Neural Compressor
Project description
Intel® Neural Compressor
An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)
Architecture | Workflow | Results | Examples | Documentation
Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as on Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides the following key features, typical examples, and open collaborations:
- Support a broad range of Intel hardware, such as Intel Xeon Scalable Processors, Intel Xeon CPU Max Series, Intel Data Center GPU Flex Series, and Intel Data Center GPU Max Series, with extensive testing; support AMD CPUs, ARM CPUs, and NVIDIA GPUs through ONNX Runtime with limited testing
- Validate more than 10,000 models, such as Bloom-176B, OPT-6.7B, Stable Diffusion, GPT-J, BERT-Large, and ResNet50, from popular model hubs such as Hugging Face, Torch Vision, and ONNX Model Zoo, by leveraging the zero-code optimization solution Neural Coder and automatic accuracy-driven quantization strategies (see the sketch after this list)
- Collaborate with cloud marketplaces such as Google Cloud Platform, Amazon Web Services, and Azure, software platforms such as Alibaba Cloud and Tencent TACO, and open AI ecosystems such as Hugging Face, PyTorch, ONNX, and Lightning AI
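The accuracy-driven quantization mentioned above is steered through configuration objects that bound the tuning search. A minimal sketch, assuming the 2.x Python API; the model path, dummy calibration data, and placeholder eval_func are illustrative:

```python
from neural_compressor.config import (
    AccuracyCriterion,
    PostTrainingQuantConfig,
    TuningCriterion,
)
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Dummy calibration data; in practice, use a slice of your real dataset.
dataset = Datasets("tensorflow")["dummy"](shape=(1, 224, 224, 3))
calib_dataloader = DataLoader(framework="tensorflow", dataset=dataset)

def eval_func(model):
    # Placeholder: return the model's accuracy on a validation set.
    return 0.75

conf = PostTrainingQuantConfig(
    tuning_criterion=TuningCriterion(max_trials=100, timeout=0),  # cap the search
    accuracy_criterion=AccuracyCriterion(tolerable_loss=0.01),    # allow <=1% relative drop
)

# fit tries quantization configurations until eval_func stays within the criterion.
q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",  # illustrative model path
    conf=conf,
    calib_dataloader=calib_dataloader,
    eval_func=eval_func,
)
```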
Installation
Install from PyPI
```shell
pip install neural-compressor
```
More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.
Getting Started
Quantization with Python API
```shell
# Install Intel Neural Compressor and TensorFlow
pip install neural-compressor
pip install tensorflow
# Prepare the FP32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
```

```python
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Build a dummy calibration dataset matching MobileNet's input shape
dataset = Datasets("tensorflow")["dummy"](shape=(1, 224, 224, 3))
dataloader = DataLoader(framework="tensorflow", dataset=dataset)

# Run post-training quantization on the FP32 frozen graph
q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",
    conf=PostTrainingQuantConfig(),
    calib_dataloader=dataloader,
    eval_dataloader=dataloader,
)
```
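If tuning succeeds, the returned object wraps the quantized graph and can be persisted for deployment. A minimal follow-up sketch; the output path is illustrative:

```python
# fit returns None when no configuration meets the accuracy criteria.
if q_model is not None:
    q_model.save("./quantized_mobilenet")  # illustrative output directory
```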
More quick samples can be found on the Get Started page.
Documentation
| **Overview** | | | |
|---|---|---|---|
| Architecture | Workflow | APIs | GUI |
| Notebook | Examples | Intel oneAPI AI Analytics Toolkit | |
| **Python-based APIs** | | | |
| Quantization | Advanced Mixed Precision | Pruning (Sparsity) | Distillation |
| Orchestration | Benchmarking | Distributed Compression | Model Export |
| **Neural Coder (Zero-code Optimization)** | | | |
| Launcher | JupyterLab Extension | Visual Studio Code Extension | Supported Matrix |
| **Advanced Topics** | | | |
| Adaptor | Strategy | Distillation for Quantization | SmoothQuant |
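As one example from the table above, the Advanced Mixed Precision API converts a model to lower-precision data types such as BF16. A minimal sketch, assuming the 2.x mix_precision API; the model path and output directory are illustrative:

```python
from neural_compressor import mix_precision
from neural_compressor.config import MixedPrecisionConfig

# The default config targets BF16 conversion where hardware/framework support exists.
conf = MixedPrecisionConfig()

converted_model = mix_precision.fit(model="./model.pb", conf=conf)  # illustrative path
converted_model.save("./mixed_precision_model")                     # illustrative path
```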
Selected Publications/Events
- Blog on Medium: Effective Post-training Quantization for Large Language Models with Enhanced SmoothQuant Approach (Apr 2023)
- Blog by Intel: Intel® Xeon® Processors Are Still the Only CPU With MLPerf Results, Raising the Bar By 5x (Apr 2023)
- Post on Social Media: Adopt with Tencent TACO: Heterogeneous optimization is also key to improving AI computing power (Mar 2023)
- Post on Social Media: Training and Inference for Stable Diffusion | Intel Business (Jan 2023)
- NeurIPS'2022: Fast DistilBERT on CPUs (Oct 2022)
- NeurIPS'2022: QuaLA-MiniLM: a Quantized Length Adaptive MiniLM (Oct 2022)
View our Full Publication List.
Additional Content
Research Collaborations
We welcome any interesting research ideas on model compression techniques; feel free to reach us at inc.maintainers@intel.com. We look forward to collaborating with you on Intel Neural Compressor!
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
neural_compressor_full-2.1.1.tar.gz (6.8 MB, see file details below)
Built Distribution
neural_compressor_full-2.1.1-py3-none-any.whl (7.4 MB, see file details below)
File details
Details for the file neural_compressor_full-2.1.1.tar.gz.
File metadata
- Download URL: neural_compressor_full-2.1.1.tar.gz
- Upload date:
- Size: 6.8 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 26198af02945577a67b4a7864df6ac050325857676185139088c61c57abe19cb |
| MD5 | 85795d536eaa79e0cad8b348f56d4f77 |
| BLAKE2b-256 | d3fe0373342d96d2dab64baa1deee3d62d2e7c38b0989e47e3acc14e52d73b9d |
File details
Details for the file neural_compressor_full-2.1.1-py3-none-any.whl.
File metadata
- Download URL: neural_compressor_full-2.1.1-py3-none-any.whl
- Upload date:
- Size: 7.4 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c42065400236b83824062b6f45aae2bd75ff0346e1a889f671c05478dd8a0538 |
| MD5 | fa6b45516d73b6a51ce18f668833e8ea |
| BLAKE2b-256 | fbee13af9258d4c46cb166fbcaf7042b7e360de69f3f1a097874c95068af356b |
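To check a downloaded artifact against the digests listed above, recompute the hash locally. A minimal sketch in Python; the file name and expected digest are copied from the wheel details above:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA256 so large archives aren't loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected digest copied from the wheel's hash table above.
expected = "c42065400236b83824062b6f45aae2bd75ff0346e1a889f671c05478dd8a0538"
print(sha256_of("neural_compressor_full-2.1.1-py3-none-any.whl") == expected)
```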