Neural Network Compression Framework (NNCF)

Neural Network Compression Framework (NNCF) provides a suite of post-training and training-time algorithms for optimizing inference of neural networks in OpenVINO™ with a minimal accuracy drop.

NNCF is designed to work with models from PyTorch, TorchFX, TensorFlow, ONNX and OpenVINO™.

NNCF provides samples that demonstrate the usage of compression algorithms for different use cases and models. See compression results achievable with the NNCF-powered samples on the NNCF Model Zoo page.

The framework is organized as a Python package that can be built and used standalone. Its architecture is unified to make it easy to add new compression algorithms for both the PyTorch and TensorFlow deep learning frameworks.

For more information about NNCF, see the sections below.

Key Features

Post-Training Compression Algorithms

| Compression algorithm | OpenVINO | PyTorch | TorchFX | TensorFlow | ONNX |
|---|---|---|---|---|---|
| Post-Training Quantization | Supported | Supported | Experimental | Supported | Supported |
| Weights Compression | Supported | Supported | Experimental | Not supported | Supported |
| Activation Sparsity | Not supported | Experimental | Not supported | Not supported | Not supported |

Training-Time Compression Algorithms

| Compression algorithm | PyTorch | TensorFlow |
|---|---|---|
| Quantization Aware Training | Supported | Supported |
| Weight-Only Quantization Aware Training with LoRA and NLS | Supported | Not supported |
| Mixed-Precision Quantization | Supported | Not supported |
| Sparsity | Supported | Supported |
| Filter pruning | Supported | Supported |
| Movement pruning | Experimental | Not supported |
  • Automatic, configurable model graph transformation to obtain the compressed model.

    NOTE: Limited support for TensorFlow models. Only models created using Sequential or Keras Functional API are supported.

  • Common interface for compression methods.
  • GPU-accelerated layers for faster compressed model fine-tuning.
  • Distributed training support.
  • Git patch for a prominent third-party repository (huggingface-transformers) demonstrating the process of integrating NNCF into custom training pipelines.
  • Exporting PyTorch compressed models to ONNX* checkpoints and TensorFlow compressed models to SavedModel or Frozen Graph format, ready to use with OpenVINO™ toolkit.
  • Support for Accuracy-Aware model training pipelines via the Adaptive Compression Level Training and Early Exit Training.

Installation Guide

For detailed installation instructions, refer to the Installation guide.

NNCF can be installed as a regular PyPI package via pip:

pip install nncf

NNCF is also available via conda:

conda install -c conda-forge nncf

NNCF's system requirements depend on the backend used. System requirements for each backend, and the matrix of corresponding versions, can be found in installation.md.

Third-party Repository Integration

NNCF can be easily integrated into training/evaluation pipelines of third-party repositories.

Used by

  • HuggingFace Optimum Intel

    NNCF is used as a compression backend within the renowned transformers repository in HuggingFace Optimum Intel. For instance, the command below exports the Llama-3.2-3B-Instruct model to OpenVINO format with INT4-quantized weights:

    optimum-cli export openvino -m meta-llama/Llama-3.2-3B-Instruct --weight-format int4 ./Llama-3.2-3B-Instruct-int4
    
  • Ultralytics

    NNCF is integrated into the Intel OpenVINO export pipeline, enabling quantization for the exported models.

  • ExecuTorch

    NNCF is used as the primary quantization framework for the ExecuTorch OpenVINO integration.

  • torch.compile

    NNCF is used as the primary quantization framework for the torch.compile OpenVINO integration.

  • OpenVINO Training Extensions

    NNCF is integrated into OpenVINO Training Extensions as a model optimization backend. You can train, optimize, and export new models based on available model templates as well as run the exported models with OpenVINO.

NNCF Compressed Model Zoo

A list of models and their compression results can be found on our NNCF Model Zoo page.

Citing

@article{kozlov2020neural,
    title =   {Neural network compression framework for fast model inference},
    author =  {Kozlov, Alexander and Lazarevich, Ivan and Shamporov, Vasily and Lyalyushkin, Nikolay and Gorbachev, Yury},
    journal = {arXiv preprint arXiv:2002.08679},
    year =    {2020}
}

Telemetry

NNCF as part of the OpenVINO™ toolkit collects anonymous usage data for the purpose of improving OpenVINO™ tools. You can opt-out at any time by running the following command in the Python environment where you have NNCF installed:

opt_in_out --opt_out

More information is available on the OpenVINO telemetry page.

