Repository of Intel® Neural Compressor
Project description
Intel® Neural Compressor
An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)
Architecture | Workflow | LLMs Recipes | Results | Documentation
Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides key features, typical examples, and open collaborations, as listed below:
- Support a wide range of Intel hardware such as Intel Xeon Scalable Processors, Intel Xeon CPU Max Series, Intel Data Center GPU Flex Series, and Intel Data Center GPU Max Series with extensive testing; support AMD CPU, ARM CPU, and NVIDIA GPU through ONNX Runtime with limited testing
- Validate popular LLMs such as Llama 2, Falcon, GPT-J, Bloom, and OPT, as well as more than 10,000 broad models such as Stable Diffusion, BERT-Large, and ResNet50 from popular model hubs such as Hugging Face, Torch Vision, and ONNX Model Zoo, by leveraging the zero-code optimization solution Neural Coder and automatic accuracy-driven quantization strategies
- Collaborate with cloud marketplaces such as Google Cloud Platform, Amazon Web Services, and Azure, software platforms such as Alibaba Cloud, Tencent TACO, and Microsoft Olive, and open AI ecosystems such as Hugging Face, PyTorch, ONNX, ONNX Runtime, and Lightning AI
What's New
- [2024/03] A new SOTA approach, AutoRound Weight-Only Quantization, is now available for LLMs on the Intel Gaudi2 AI accelerator.
Installation
Install from pypi
pip install neural-compressor
Note: Further installation methods can be found in the Installation Guide. Check out our FAQ for more details.
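To confirm the installation succeeded, a quick sanity check such as the following can be run (a minimal sketch):

import neural_compressor

# Prints the installed Neural Compressor version
print(neural_compressor.__version__)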
Getting Started
Setting up the environment:
pip install "neural-compressor>=2.3" "transformers>=4.34.0" torch torchvision
After successfully installing these packages, try your first quantization program.
Weight-Only Quantization (LLMs)
The following example code demonstrates Weight-Only Quantization on LLMs. It supports Intel CPU, Intel Gaudi2 AI Accelerator, and NVIDIA GPU; the best available device is selected automatically.
To try it on Intel Gaudi2, a Docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the Gaudi Guide.
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.14.0/ubuntu22.04/habanalabs/pytorch-installer-2.1.1:latest
# Check the container ID
docker ps
# Login into container
docker exec -it <container_id> bash
# Install the optimum-habana
pip install --upgrade-strategy eager optimum[habana]
# Install INC/auto_round
pip install neural-compressor auto_round
Run the example:
from transformers import AutoModel, AutoTokenizer

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit
from neural_compressor.adaptor.torch_utils.auto_round import get_dataloader

# Load the FP32 model and its tokenizer from the Hugging Face Hub
model_name = "EleutherAI/gpt-neo-125m"
float_model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Calibration dataloader consumed by the AutoRound algorithm
dataloader = get_dataloader(tokenizer, seqlen=2048)

# 4-bit weight-only quantization using the AutoRound algorithm
woq_conf = PostTrainingQuantConfig(
    approach="weight_only",
    op_type_dict={
        ".*": {  # match all ops
            "weight": {
                "dtype": "int",
                "bits": 4,
                "algorithm": "AUTOROUND",
            },
        }
    },
)
quantized_model = fit(model=float_model, conf=woq_conf, calib_dataloader=dataloader)
Note:
To try INT4 model inference, please directly use Intel Extension for Transformers, which leverages Intel Neural Compressor for model quantization.
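As a pointer, the snippet below is a minimal sketch of such INT4 inference through Intel Extension for Transformers, assuming that package is installed (pip install intel-extension-for-transformers) and reusing the model name from the example above; consult that project's documentation for the authoritative API.

from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "EleutherAI/gpt-neo-125m"  # reused from the example above
tokenizer = AutoTokenizer.from_pretrained(model_name)
# load_in_4bit applies weight-only INT4 quantization while loading the model
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))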
Static Quantization (Non-LLMs)
from torchvision import models

from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# FP32 model to be quantized
float_model = models.resnet18()
# Dummy dataset and dataloader used only for calibration
dataset = Datasets("pytorch")["dummy"](shape=(1, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)
# Default configuration runs post-training static quantization
static_quant_conf = PostTrainingQuantConfig()
quantized_model = fit(model=float_model, conf=static_quant_conf, calib_dataloader=calib_dataloader)
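When a real validation metric is available, the same fit API can also drive accuracy-aware tuning. The sketch below is a minimal illustration under assumed settings (a placeholder eval_func, at most 10 tuning trials, and a 1% tolerable accuracy loss); adjust these to your own model and dataset.

from torchvision import models
from neural_compressor.config import AccuracyCriterion, PostTrainingQuantConfig, TuningCriterion
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

float_model = models.resnet18()
dataset = Datasets("pytorch")["dummy"](shape=(1, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)

def eval_func(model):
    # Placeholder: return the validation accuracy of `model` on your dataset.
    return 1.0

tuning_conf = PostTrainingQuantConfig(
    tuning_criterion=TuningCriterion(max_trials=10),            # stop after at most 10 trials
    accuracy_criterion=AccuracyCriterion(tolerable_loss=0.01),  # allow up to 1% relative accuracy drop
)
quantized_model = fit(
    model=float_model,
    conf=tuning_conf,
    calib_dataloader=calib_dataloader,
    eval_func=eval_func,
)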
Documentation
Section | Topics
---|---
Overview | Architecture, Workflow, APIs, LLMs Recipes, Examples
Python-based APIs | Quantization, Advanced Mixed Precision, Pruning (Sparsity), Distillation, Orchestration, Benchmarking, Distributed Compression, Model Export
Neural Coder (Zero-code Optimization) | Launcher, JupyterLab Extension, Visual Studio Code Extension, Supported Matrix
Advanced Topics | Adaptor, Strategy, Distillation for Quantization, SmoothQuant, Weight-Only Quantization (INT8/INT4/FP4/NF4), FP8 Quantization, Layer-Wise Quantization
Innovations for Productivity | Neural Insights, Neural Solution
Note: Further documentation can be found in the User Guide.
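As one concrete illustration of the Advanced Topics listed above, the snippet below is a minimal sketch of enabling the SmoothQuant recipe through PostTrainingQuantConfig; the alpha value of 0.5 is an assumed starting point, and the resulting config is passed to fit together with a calibration dataloader as in the earlier examples.

from neural_compressor.config import PostTrainingQuantConfig

# Enable the SmoothQuant recipe; alpha controls how much quantization
# difficulty is migrated from activations to weights.
sq_conf = PostTrainingQuantConfig(
    recipes={
        "smooth_quant": True,
        "smooth_quant_args": {"alpha": 0.5},  # assumed starting value; "auto" lets the tuner search
    }
)
# Pass sq_conf to neural_compressor.quantization.fit(...) with a calibration
# dataloader, as in the static-quantization example above.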
Selected Publications/Events
- Blog by Intel: Neural Compressor: Boosting AI Model Efficiency (June 2024)
- Blog by Intel: Optimization of Intel AI Solutions for Alibaba Cloud’s Qwen2 Large Language Models (June 2024)
- Blog by Intel: Accelerate Meta* Llama 3 with Intel AI Solutions (Apr 2024)
- EMNLP'2023 (Under Review): TEQ: Trainable Equivalent Transformation for Quantization of LLMs (Sep 2023)
- arXiv: Efficient Post-training Quantization with FP8 Formats (Sep 2023)
- arXiv: Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs (Sep 2023)
Note: View Full Publication List.
Additional Content
Communication
- GitHub Issues: mainly for bug reports, new feature requests, questions, etc.
- Email: you are welcome to share research ideas on model compression techniques and propose collaborations by email.
- Discord Channel: join the Discord channel for more flexible technical discussions.
- WeChat group: scan the QR code to join the technical discussion.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file neural_solution-2.6.1.tar.gz.
File metadata
- Download URL: neural_solution-2.6.1.tar.gz
- Upload date:
- Size: 72.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 477926d7c8e4854e47daa6fe778623588fe84b1c1866a04f724f4aaf4a1d2f00
MD5 | 41a105ecc5605bc63ef5e26dc75c1051
BLAKE2b-256 | 2f36f2d5f586eb12d0a405f41f61beae97ce3b902ebf223307b3cea3b559a5b4
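If you want to check the integrity of a downloaded archive against the digests above, a minimal sketch follows; the local file path is an assumption and should point to wherever the archive was saved.

import hashlib

path = "neural_solution-2.6.1.tar.gz"  # assumed local path to the downloaded sdist
expected_sha256 = "477926d7c8e4854e47daa6fe778623588fe84b1c1866a04f724f4aaf4a1d2f00"

# Hash the file contents and compare against the published digest
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected_sha256 else "SHA256 mismatch")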
File details
Details for the file neural_solution-2.6.1-py3-none-any.whl.
File metadata
- Download URL: neural_solution-2.6.1-py3-none-any.whl
- Upload date:
- Size: 84.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0.post20200518 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 582d7ba963bcbc53598a37d4c80efe6c5774a2542780c6127c88ea3abf507f6d
MD5 | 39935af431d69541bb1557a8c0300e16
BLAKE2b-256 | be24e04efbec13b0f8732c7860cc57242858d72b15809dbd5db7c54ac08ebd45