
Neural Network Compression Framework (NNCF)

This repository contains a PyTorch*-based framework and samples for neural networks compression.

The framework is organized as a Python* package that can be built and used in a standalone mode. The framework architecture is unified to make it easy to add different compression methods.

The samples demonstrate the usage of compression algorithms for three different use cases on public models and datasets: Image Classification, Object Detection and Semantic Segmentation. Compression results achievable with the NNCF-powered samples can be found in a table at the end of this document.

Key Features

  • Support of various compression algorithms (quantization, binarization, sparsity, and filter pruning), applied during a model fine-tuning process to achieve the best compression parameters and accuracy
  • Automatic, configurable model graph transformation to obtain the compressed model. The source model is wrapped by a custom class, and additional compression-specific layers are inserted in the graph.
  • Common interface for compression methods
  • GPU-accelerated layers for faster compressed model fine-tuning
  • Distributed training support
  • Configuration file examples for each supported compression algorithm.
  • Git patches for prominent third-party repositories (mmdetection, huggingface-transformers) demonstrating the process of integrating NNCF into custom training pipelines
  • Exporting compressed models to ONNX* checkpoints ready for usage with OpenVINO™ toolkit.

Usage

NNCF is organized as a regular Python package that can be imported into your target training pipeline script. The basic workflow is to load a JSON configuration file containing the NNCF-specific parameters that determine the compression to be applied to your model, and then to pass your model along with this configuration to the nncf.create_compressed_model function. The function returns a handle to an object that lets you control the compression during the training process, together with a wrapped model ready for compression fine-tuning:

import torch
import nncf
from nncf import create_compressed_model, register_default_init_args, Config as NNCFConfig

# Instantiate your uncompressed model
from torchvision.models.resnet import resnet50
model = resnet50()

# Load a configuration file to specify compression
nncf_config = NNCFConfig.from_json("resnet50_int8.json")

# Provide data loaders for compression algorithm initialization, if necessary
nncf_config = register_default_init_args(nncf_config, loss_criterion, train_loader)

# Apply the specified compression algorithms to the model
comp_ctrl, compressed_model = create_compressed_model(model, nncf_config)

# Now use compressed_model as a usual torch.nn.Module to fine-tune compression parameters along with the model weights

# ... the rest of the usual PyTorch-powered training pipeline

# Export to ONNX or .pth when done fine-tuning
comp_ctrl.export_model("compressed_model.onnx")
torch.save(compressed_model.state_dict(), "compressed_model.pth")
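
The JSON file referenced above ("resnet50_int8.json") carries the NNCF-specific parameters. As a rough illustration only, a minimal INT8 quantization configuration could be produced as below; the "input_info" and "compression" keys are an assumed sketch, and the exact schema and available options are defined by the configuration file examples shipped with the repository:

import json

# Illustrative, minimal NNCF-style configuration for INT8 quantization
# (assumed keys -- check the repository's configuration file examples
# for the exact schema supported by your NNCF version).
nncf_config_dict = {
    "input_info": {"sample_size": [1, 3, 224, 224]},  # NCHW shape of one model input
    "compression": {"algorithm": "quantization"},     # select INT8 quantization
}

with open("resnet50_int8.json", "w") as f:
    json.dump(nncf_config_dict, f, indent=4)

The resulting file can then be loaded with NNCFConfig.from_json as shown in the snippet above.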

For a more detailed description of NNCF usage in your training code, see Usage.md. For in-depth examples of NNCF integration, browse the sample scripts or the example patches to third-party repositories.

For more details about the framework architecture, refer to NNCFArchitecture.md.

Model Compression Samples

For a quicker start with NNCF-powered compression, you can also try the sample scripts, which provide basic training pipelines for image classification, object detection, and semantic segmentation, respectively.

To run the samples, please refer to the corresponding tutorials.

Third-party repository integration

NNCF may be straightforwardly integrated into training/evaluation pipelines of third-party repositories. See third_party_integration for examples of code modifications (Git patches and base commit IDs are provided) that are necessary to integrate NNCF into select repositories.

System requirements

  • Ubuntu* 16.04 or later (64-bit)
  • Python* 3.6 or later
  • NVIDIA CUDA* Toolkit 10.2 or later
  • PyTorch* 1.5 or later

Installation

We suggest installing and using the package in a Python virtual environment.

As a PyPI package:

NNCF can be installed as a regular PyPI package via pip:

sudo apt install python3-dev
pip install nncf

As a package built from checked-out repository:

  1. Install the following system dependencies:

sudo apt-get install python3-dev

  2. Install the package and its dependencies by running the following in the repository root directory:
  • For CPU & GPU-powered execution: python setup.py install
  • For CPU-only installation: python setup.py install --cpu-only

As a Docker image

Use one of the Dockerfiles in the docker directory to build an image with an environment already set up and ready for running NNCF sample scripts.

Contributing

Refer to the CONTRIBUTING.md file for guidelines on contributions to the NNCF repository.

NNCF compression results

The results below were achieved using the sample scripts and NNCF configuration files provided with this repository. See the sample scripts' README.md files for links to the exact configuration files and final PyTorch checkpoints.

Quick jump to sample type:

  • Classification
  • Object detection
  • Semantic segmentation
  • Natural language processing (3rd-party training pipelines)
  • Object detection (3rd-party training pipelines)

Classification

Model | Compression algorithm | Dataset | PyTorch FP32 baseline | PyTorch compressed accuracy
--- | --- | --- | --- | ---
ResNet-50 | INT8 | ImageNet | 76.13 | 76.05
ResNet-50 | Mixed, 44.8% INT8 / 55.2% INT4 | ImageNet | 76.13 | 76.3
ResNet-50 | INT8 + Sparsity 61% (RB) | ImageNet | 76.13 | 75.22
ResNet-50 | INT8 + Sparsity 50% (RB) | ImageNet | 76.13 | 75.60
ResNet-50 | Filter pruning, 30%, magnitude criterion | ImageNet | 76.13 | 75.7
ResNet-50 | Filter pruning, 30%, geometric median criterion | ImageNet | 76.13 | 75.7
Inception V3 | INT8 | ImageNet | 77.32 | 76.96
Inception V3 | INT8 + Sparsity 61% (RB) | ImageNet | 77.32 | 77.02
MobileNet V2 | INT8 | ImageNet | 71.81 | 71.34
MobileNet V2 | Mixed, 46.6% INT8 / 53.4% INT4 | ImageNet | 71.81 | 70.89
MobileNet V2 | INT8 + Sparsity 52% (RB) | ImageNet | 71.81 | 70.99
SqueezeNet V1.1 | INT8 | ImageNet | 58.18 | 58.02
SqueezeNet V1.1 | Mixed, 54.7% INT8 / 45.3% INT4 | ImageNet | 58.18 | 58.85
ResNet-18 | XNOR (weights), scale/threshold (activations) | ImageNet | 69.76 | 61.59
ResNet-18 | DoReFa (weights), scale/threshold (activations) | ImageNet | 69.76 | 61.56
ResNet-18 | Filter pruning, 30%, magnitude criterion | ImageNet | 69.76 | 68.73
ResNet-18 | Filter pruning, 30%, geometric median criterion | ImageNet | 69.76 | 68.97
ResNet-34 | Filter pruning, 30%, magnitude criterion | ImageNet | 73.31 | 72.54
ResNet-34 | Filter pruning, 30%, geometric median criterion | ImageNet | 73.31 | 72.62

Object detection

Model | Compression algorithm | Dataset | PyTorch FP32 baseline | PyTorch compressed accuracy
--- | --- | --- | --- | ---
SSD300-BN | INT8 | VOC12+07 | 78.28 | 78.12
SSD300-BN | INT8 + Sparsity 70% (Magnitude) | VOC12+07 | 78.28 | 77.94
SSD512-BN | INT8 | VOC12+07 | 80.26 | 80.09
SSD512-BN | INT8 + Sparsity 70% (Magnitude) | VOC12+07 | 80.26 | 79.88

Semantic segmentation

Model | Compression algorithm | Dataset | PyTorch FP32 baseline | PyTorch compressed accuracy
--- | --- | --- | --- | ---
UNet | INT8 | CamVid | 71.95 | 71.66
UNet | INT8 + Sparsity 60% (Magnitude) | CamVid | 71.95 | 71.72
ICNet | INT8 | CamVid | 67.89 | 67.87
ICNet | INT8 + Sparsity 60% (Magnitude) | CamVid | 67.89 | 67.24
UNet | INT8 | Mapillary | 56.23 | 56.12
UNet | INT8 + Sparsity 60% (Magnitude) | Mapillary | 56.23 | 56.0

Natural language processing (3rd-party training pipelines)

Model | Compression algorithm | Dataset | PyTorch FP32 baseline | PyTorch compressed accuracy
--- | --- | --- | --- | ---
BERT-base-chinese | INT8 | XNLI | 77.68 | 77.22
BERT-large (Whole Word Masking) | INT8 | SQuAD v1.1 | 93.21 (F1) | 92.68 (F1)
RoBERTa-large | INT8 | MNLI | 90.6 (matched) | 89.25 (matched)
DistilBERT-base | INT8 | SST-2 | 91.1 | 90.3
MobileBERT | INT8 | SQuAD v1.1 | 89.98 (F1) | 89.3 (F1)

Object detection (3rd-party training pipelines)

Model | Compression algorithm | Dataset | PyTorch FP32 baseline | PyTorch compressed accuracy
--- | --- | --- | --- | ---
RetinaNet-ResNet50-FPN | INT8 | COCO2017 | 35.6 (avg box mAP) | 35.3 (avg box mAP)
RetinaNet-ResNet50-FPN | INT8 + Sparsity 50% | COCO2017 | 35.6 (avg box mAP) | 34.7 (avg box mAP)
RetinaNet-ResNeXt101-64x4d-FPN | INT8 | COCO2017 | 39.6 (avg box mAP) | 39.1 (avg box mAP)

Legal Information

[*] Other names and brands may be claimed as the property of others.

