Neural Network Compression Framework (NNCF)

NNCF provides a suite of advanced algorithms for neural network inference optimization in OpenVINO™ with minimal accuracy drop.

NNCF is designed to work with models from PyTorch and TensorFlow.

NNCF provides samples that demonstrate the usage of compression algorithms for three different use cases on public PyTorch and TensorFlow models and datasets: Image Classification, Object Detection and Semantic Segmentation. Compression results achievable with the NNCF-powered samples can be found in a table at the end of this document.

The framework is organized as a Python* package that can be built and used in standalone mode. The framework architecture is unified to make it easy to add different compression algorithms for both the PyTorch and TensorFlow deep learning frameworks.

Key Features

  • Support for various compression algorithms, applied during a model fine-tuning process to achieve a better performance-accuracy trade-off:

    Compression algorithm        | PyTorch   | TensorFlow
    Quantization                 | Supported | Supported
    Mixed-Precision Quantization | Supported | Not supported
    Binarization                 | Supported | Not supported
    Sparsity                     | Supported | Supported
    Filter pruning               | Supported | Supported
  • Automatic, configurable model graph transformation to obtain the compressed model.

    NOTE: Limited support for TensorFlow models. Only models created using the Keras Sequential or Functional API are supported.

  • Common interface for compression methods.

  • GPU-accelerated layers for faster compressed model fine-tuning.

  • Distributed training support.

  • Configuration file examples for each supported compression algorithm.

  • Git patches for prominent third-party repositories (huggingface-transformers) demonstrating the process of integrating NNCF into custom training pipelines.

  • Exporting PyTorch compressed models to ONNX* checkpoints and TensorFlow compressed models to SavedModel or Frozen Graph format, ready to use with the OpenVINO™ toolkit.

  • Support for Accuracy-Aware model training pipelines via the Adaptive Compression Level Training and Early Exit Training methods (see the configuration sketch after this list).
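
The accuracy-aware pipelines are driven by an accuracy_aware_training section of the NNCF JSON configuration. The snippet below is a hedged sketch of what this section can look like for the Adaptive Compression Level Training mode; the parameter names are illustrative and should be verified against the NNCF documentation for your version:

"accuracy_aware_training": {
    "mode": "adaptive_compression_level",
    "params": {
        "maximal_relative_accuracy_degradation": 1.0,
        "initial_training_phase_epochs": 5,
        "patience_epochs": 3
    }
}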

Usage

NNCF is organized as a regular Python package that can be imported into your target training pipeline script. The basic workflow is to load a JSON configuration file containing NNCF-specific parameters that determine the compression to be applied to the model, and then to pass the model along with the configuration file to the create_compressed_model function. This function returns a model with the additional modifications necessary to enable algorithm-specific compression during fine-tuning, together with a handle to an object that lets you control the compression during the training process. A sketch of such a configuration file and per-backend usage examples follow:
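
For reference, a minimal JSON configuration for INT8 quantization, such as the resnet50_int8.json used in the examples below, can look like the following sketch (input_info describes a sample input shape used for tracing the model graph; see the NNCF configuration schema for the full set of options):

{
    "input_info": {
        "sample_size": [1, 3, 224, 224]
    },
    "compression": {
        "algorithm": "quantization"
    }
}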

Usage example with PyTorch

import torch
import nncf  # Important - should be imported directly after torch

from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

# Instantiate your uncompressed model
from torchvision.models.resnet import resnet50
model = resnet50()

# Load a configuration file to specify compression
nncf_config = NNCFConfig.from_json("resnet50_int8.json")

# Provide data loaders for compression algorithm initialization, if necessary
import torchvision.datasets as datasets
representative_dataset = datasets.ImageFolder("/path")
init_loader = torch.utils.data.DataLoader(representative_dataset)
nncf_config = register_default_init_args(nncf_config, init_loader)

# Apply the specified compression algorithms to the model
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)

# Now use compressed_model as a usual torch.nn.Module 
# to fine-tune compression parameters along with the model weights

# ... the rest of the usual PyTorch-powered training pipeline

# Export to ONNX or .pth when done fine-tuning
compression_ctrl.export_model("compressed_model.onnx")
torch.save(compressed_model.state_dict(), "compressed_model.pth")
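
The fine-tuning itself is the standard PyTorch loop with two compression-specific additions: stepping the compression scheduler and, for some algorithms, adding the compression loss term. The sketch below assumes criterion, optimizer, train_loader and num_epochs are defined as in any regular PyTorch pipeline:

# Hedged sketch of a compression-aware fine-tuning loop
for epoch in range(num_epochs):
    compression_ctrl.scheduler.epoch_step()   # advance the compression schedule once per epoch
    for images, targets in train_loader:
        compression_ctrl.scheduler.step()     # advance the compression schedule once per batch
        optimizer.zero_grad()
        task_loss = criterion(compressed_model(images), targets)
        # Some algorithms contribute an extra penalty term through the controller
        loss = task_loss + compression_ctrl.loss()
        loss.backward()
        optimizer.step()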

Usage example with TensorFlow

import tensorflow as tf

from nncf import NNCFConfig
from nncf.tensorflow import create_compressed_model, register_default_init_args

# Instantiate your uncompressed model
from tensorflow.keras.applications import ResNet50
model = ResNet50()

# Load a configuration file to specify compression
nncf_config = NNCFConfig.from_json("resnet50_int8.json")

# Provide dataset for compression algorithm initialization
representative_dataset = tf.data.Dataset.list_files("/path/*.jpeg")
nncf_config = register_default_init_args(nncf_config, representative_dataset, batch_size=1)

# Apply the specified compression algorithms to the model
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)

# Now use compressed_model as a usual Keras model
# to fine-tune compression parameters along with the model weights

# ... the rest of the usual TensorFlow-powered training pipeline

# Export to Frozen Graph, TensorFlow SavedModel or .h5 when done fine-tuning
compression_ctrl.export_model("compressed_model.pb", save_format='frozen_graph')
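
For the elided fine-tuning step, the compressed model can be trained with the usual Keras workflow; the sketch below uses only standard Keras APIs, with train_dataset, the optimizer settings, and the number of epochs as placeholders to adapt to your own pipeline:

# Hedged sketch: fine-tune the compressed model like any other Keras model
compressed_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
compressed_model.fit(train_dataset, epochs=3)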

For a more detailed description of NNCF usage in your training code, see this tutorial. For in-depth examples of NNCF integration, browse the sample scripts code, or the example patches to third-party repositories.

Model Compression Samples

For a quicker start with NNCF-powered compression, you can also try the sample scripts, each of which provides a basic training pipeline for classification, object detection, or semantic segmentation, respectively.

To run the samples, please refer to the corresponding tutorials.

Model Compression Notebooks

A collection of ready-to-run Jupyter* notebooks is also available to demonstrate how to use NNCF compression algorithms to optimize models for inference with the OpenVINO Toolkit.

Third-party repository integration

NNCF can be integrated straightforwardly into the training/evaluation pipelines of third-party repositories.

Used by

  • OpenVINO Training Extensions

    NNCF is integrated into OpenVINO Training Extensions as a model optimization backend, so you can train, optimize, and export new models based on the available model templates, as well as run the exported models with OpenVINO.

Git patches for third-party repository

See third_party_integration for examples of code modifications (Git patches and base commit IDs are provided) that are necessary to integrate NNCF into third-party repositories such as huggingface-transformers.

System requirements

  • Ubuntu* 18.04 or later (64-bit)
  • Python* 3.6.2 or later
  • Supported frameworks:
    • PyTorch* >=1.5.0, <=1.9.1 (1.8.0 not supported)
    • TensorFlow* >=2.4.0, <=2.5.3

This repository is tested on Python* 3.6.2+, PyTorch* 1.9.1 (NVidia CUDA* Toolkit 10.2) and TensorFlow* 2.5.3 (NVidia CUDA* Toolkit 11.2).

Installation

We suggest installing and using the package in a Python virtual environment.
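
For example, on Linux (a sketch; adjust the environment name and the activation command to your shell):

python3 -m venv nncf_env
source nncf_env/bin/activate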

If you want to optimize a model from PyTorch, install PyTorch by following PyTorch installation guide. If you want to optimize a model from TensorFlow, install TensorFlow by following TensorFlow installation guide.

As a package built from a checked-out repository:

Install the package and its dependencies by running the following in the repository root directory:

python setup.py install

Alternatively, if you haven't installed a backend yet, you can install NNCF together with PyTorch in one line:

python setup.py install --torch

Install NNCF and TensorFlow in one line:

python setup.py install --tf

(Experimental) Install NNCF for ONNXRuntime-OpenVINO:

python setup.py install --onnx

NB: For launching the example scripts in this repository, we recommend replacing the install option above with develop and setting the PYTHONPATH variable to the root of the checked-out repository.
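
For example, from the repository root (a sketch assuming a POSIX shell):

python setup.py develop
export PYTHONPATH=$(pwd)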

As a PyPI package:

NNCF can be installed as a regular PyPI package via pip:

pip install nncf

Alternatively, if you haven't installed a backend yet, you can install NNCF together with PyTorch in one line:

pip install nncf[torch]

Install NNCF and TensorFlow in one line:

pip install nncf[tf]

(Experimental) Install NNCF for ONNXRuntime-OpenVINO:

pip install nncf[onnx]

NNCF is also available via conda:

conda install -c conda-forge nncf

From a specific commit hash using pip:

pip install git+https://github.com/openvinotoolkit/nncf@bd189e2#egg=nncf

Note that for this to work with pip versions >= 21.3, your Git version must be at least 2.22.

As a Docker image

Use one of the Dockerfiles in the docker directory to build an image with an environment already set up and ready for running NNCF sample scripts.

Contributing

Refer to the CONTRIBUTING.md file for guidelines on contributions to the NNCF repository.

NNCF Compressed Model Zoo

Results achieved using the sample scripts, example patches to third-party repositories, and NNCF configuration files provided with this repository. See the README.md files for the sample scripts and example patches to find instructions and links to the exact configuration files and final checkpoints.

PyTorch models

Classification

PyTorch Model | Compression algorithm | Dataset | Accuracy (Drop) %
ResNet-50 | INT8 | ImageNet | 76.42 (-0.26)
ResNet-50 | INT8 (per-tensor for weights) | ImageNet | 76.37 (-0.21)
ResNet-50 | Mixed, 44.8% INT8 / 55.2% INT4 | ImageNet | 76.2 (-0.04)
ResNet-50 | INT8 + Sparsity 61% (RB) | ImageNet | 75.43 (0.73)
ResNet-50 | INT8 + Sparsity 50% (RB) | ImageNet | 75.55 (0.61)
ResNet-50 | Filter pruning, 40%, geometric median criterion | ImageNet | 75.62 (0.54)
Inception V3 | INT8 | ImageNet | 78.25 (-0.91)
Inception V3 | INT8 + Sparsity 61% (RB) | ImageNet | 77.58 (-0.24)
MobileNet V2 | INT8 | ImageNet | 71.35 (0.58)
MobileNet V2 | INT8 (per-tensor for weights) | ImageNet | 71.3 (0.63)
MobileNet V2 | Mixed, 46.6% INT8 / 53.4% INT4 | ImageNet | 70.92 (1.01)
MobileNet V2 | INT8 + Sparsity 52% (RB) | ImageNet | 71.11 (0.82)
MobileNet V3 small | INT8 | ImageNet | 66.94 (0.73)
SqueezeNet V1.1 | INT8 | ImageNet | 58.28 (-0.04)
SqueezeNet V1.1 | INT8 (per-tensor for weights) | ImageNet | 58.26 (-0.02)
SqueezeNet V1.1 | Mixed, 54.7% INT8 / 45.3% INT4 | ImageNet | 58.9 (-0.66)
ResNet-18 | XNOR (weights), scale/threshold (activations) | ImageNet | 61.63 (8.17)
ResNet-18 | DoReFa (weights), scale/threshold (activations) | ImageNet | 61.61 (8.19)
ResNet-18 | Filter pruning, 40%, magnitude criterion | ImageNet | 69.26 (0.54)
ResNet-18 | Filter pruning, 40%, geometric median criterion | ImageNet | 69.32 (0.48)
ResNet-34 | Filter pruning, 50%, geometric median criterion + KD | ImageNet | 73.11 (0.19)
GoogLeNet | Filter pruning, 40%, geometric median criterion | ImageNet | 68.82 (0.93)

Object detection

PyTorch Model | Compression algorithm | Dataset | mAP (drop) %
SSD300-MobileNet | INT8 + Sparsity 70% (Magnitude) | VOC12+07 train, VOC07 eval | 62.94 (-0.71)
SSD300-VGG-BN | INT8 | VOC12+07 train, VOC07 eval | 77.96 (0.32)
SSD300-VGG-BN | INT8 + Sparsity 70% (Magnitude) | VOC12+07 train, VOC07 eval | 77.59 (0.69)
SSD300-VGG-BN | Filter pruning, 40%, geometric median criterion | VOC12+07 train, VOC07 eval | 77.72 (0.56)
SSD512-VGG-BN | INT8 | VOC12+07 train, VOC07 eval | 80.12 (0.14)
SSD512-VGG-BN | INT8 + Sparsity 70% (Magnitude) | VOC12+07 train, VOC07 eval | 79.67 (0.59)

Semantic segmentation

PyTorch Model | Compression algorithm | Dataset | Accuracy (Drop) %
UNet | INT8 | CamVid | 71.8 (0.15)
UNet | INT8 + Sparsity 60% (Magnitude) | CamVid | 72.03 (-0.08)
ICNet | INT8 | CamVid | 67.86 (0.03)
ICNet | INT8 + Sparsity 60% (Magnitude) | CamVid | 67.18 (0.71)
UNet | INT8 | Mapillary | 55.87 (0.36)
UNet | INT8 + Sparsity 60% (Magnitude) | Mapillary | 55.65 (0.58)
UNet | Filter pruning, 25%, geometric median criterion | Mapillary | 55.62 (0.61)

NLP (HuggingFace Transformers-powered models)

PyTorch Model | Compression algorithm | Dataset | Accuracy (Drop) %
BERT-base-chinese | INT8 | XNLI | 77.22 (0.46)
BERT-base-cased | INT8 | CoNLL2003 | 99.18 (-0.01)
BERT-base-cased | INT8 | MRPC | 84.8 (-0.24)
BERT-large (Whole Word Masking) | INT8 | SQuAD v1.1 | F1: 92.68 (0.53)
RoBERTa-large | INT8 | MNLI | matched: 89.25 (1.35)
DistilBERT-base | INT8 | SST-2 | 90.3 (0.8)
MobileBERT | INT8 | SQuAD v1.1 | F1: 89.4 (0.58)
GPT-2 | INT8 | WikiText-2 (raw) | perplexity: 20.9 (-1.17)

TensorFlow models

Classification

TensorFlow Model | Compression algorithm | Dataset | Accuracy (Drop) %
Inception V3 | INT8 (per-tensor for weights) | ImageNet | 78.36 (-0.44)
Inception V3 | Sparsity 54% (Magnitude) | ImageNet | 77.87 (0.03)
Inception V3 | INT8 (per-tensor for weights) + Sparsity 61% (RB) | ImageNet | 77.58 (0.32)
MobileNet V2 | INT8 (per-tensor for weights) | ImageNet | 71.66 (0.19)
MobileNet V2 | Sparsity 50% (RB) | ImageNet | 71.34 (0.51)
MobileNet V2 | INT8 (per-tensor for weights) + Sparsity 52% (RB) | ImageNet | 71.0 (0.85)
MobileNet V3 small | INT8 (per-channel, symmetric for weights; per-tensor, asymmetric for activations) | ImageNet | 67.75 (0.63)
MobileNet V3 small | INT8 (per-channel, symmetric for weights; per-tensor, asymmetric for activations) + Sparsity 42% (RB) | ImageNet | 67.55 (0.83)
MobileNet V3 large | INT8 (per-channel, symmetric for weights; per-tensor, asymmetric for activations) | ImageNet | 75.02 (0.79)
MobileNet V3 large | INT8 (per-channel, symmetric for weights; per-tensor, asymmetric for activations) + Sparsity 42% (RB) | ImageNet | 75.28 (0.53)
ResNet50 | INT8 (per-tensor for weights) | ImageNet | 75.0 (0.04)
ResNet50 | Sparsity 80% (RB) | ImageNet | 74.36 (0.68)
ResNet50 | INT8 (per-tensor for weights) + Sparsity 65% (RB) | ImageNet | 74.3 (0.74)
ResNet50 | Filter Pruning 40%, geometric_median criterion | ImageNet | 74.98 (0.06)
ResNet50 | Filter Pruning 40%, geometric_median criterion + INT8 (per-tensor for weights) | ImageNet | 75.08 (-0.04)
TensorFlow Hub MobileNet V2 | Sparsity 35% (Magnitude) | ImageNet | 71.90 (-0.06)

Object detection

TensorFlow Model | Compression algorithm | Dataset | mAP (drop) %
RetinaNet | INT8 (per-tensor for weights) | COCO2017 | 33.18 (0.26)
RetinaNet | Sparsity 50% (Magnitude) | COCO2017 | 33.13 (0.31)
RetinaNet | Filter Pruning 40%, geometric_median criterion | COCO2017 | 32.7 (0.74)
RetinaNet | Filter Pruning 40%, geometric_median criterion + INT8 (per-tensor for weights) | COCO2017 | 32.68 (0.76)
YOLOv4 | INT8 (per-channel, symmetric for weights; per-tensor, asymmetric for activations) | COCO2017 | 46.30 (0.74)
YOLOv4 | Sparsity 50% (Magnitude) | COCO2017 | 46.54 (0.50)

Instance segmentation

TensorFlow Model | Compression algorithm | Dataset | mAP (drop) %
MaskRCNN | INT8 (per-tensor for weights) | COCO2017 | bbox: 37.27 (0.06), segm: 33.54 (0.02)
MaskRCNN | Sparsity 50% (Magnitude) | COCO2017 | bbox: 36.93 (0.40), segm: 33.23 (0.33)

Citing

@article{kozlov2020neural,
    title =   {Neural network compression framework for fast model inference},
    author =  {Kozlov, Alexander and Lazarevich, Ivan and Shamporov, Vasily and Lyalyushkin, Nikolay and Gorbachev, Yury},
    journal = {arXiv preprint arXiv:2002.08679},
    year =    {2020}
}

Legal Information

[*] Other names and brands may be claimed as the property of others.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

nncf-2.3.0.tar.gz (519.8 kB)

Uploaded Source

Built Distribution

nncf-2.3.0-py3-none-any.whl (770.1 kB)

Uploaded Python 3

File details

Details for the file nncf-2.3.0.tar.gz.

File metadata

  • Download URL: nncf-2.3.0.tar.gz
  • Upload date:
  • Size: 519.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.5

File hashes

Hashes for nncf-2.3.0.tar.gz
Algorithm | Hash digest
SHA256 | 922f33420205c98ed0dce08e910771dd081e1b716e9ed9d6e9fe1fd1caa850b0
MD5 | b9e183e4da63d7fb8086b51a5b9cff60
BLAKE2b-256 | 7639a7797371399a60fcc8ca9362c94776bcfe36f96c9a8d29b564ae0adb0add

File details

Details for the file nncf-2.3.0-py3-none-any.whl.

File metadata

  • Download URL: nncf-2.3.0-py3-none-any.whl
  • Upload date:
  • Size: 770.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.5

File hashes

Hashes for nncf-2.3.0-py3-none-any.whl
Algorithm | Hash digest
SHA256 | 13e56cf78d6a68ea4318f1c60c1091c97c0c31dc7d572e579ee476cbeadf7098
MD5 | d0bbccabb245017ec39f10d2685f3bdb
BLAKE2b-256 | 5ea3b451149caf45564e3f2984ddf5d5fb50f7447101eef7a617bb4b7b986bd4
