Quantization algorithms to compress aidge networks.

Aidge Quantization Module

This module implements the quantization algorithms for Aidge networks. For the moment, only Post-Training Quantization (PTQ) is available. The implementation supports multi-branch architectures.

[TOC]

Installation

Dependencies

  • GCC
  • Make/Ninja
  • CMake
  • Python (optional; only needed if you intend to use this library from Python via the pybind bindings)

Aidge dependencies

  • aidge_core

The requirements for installing the library are the following:

    • GCC, Make and CMake for the compilation pipeline
    • The Aidge modules aidge_core, aidge_onnx and aidge_backend_cpu
    • Python (> 3.7) if you intend to use the pybind wrapper

Configuration of environment variables

| Variable           | Default value   | Description |
|--------------------|-----------------|-------------|
| AIDGE_INSTALL      | /lib/libAidge   | Path to the installation folder of Aidge; must be the same as the one used for all Aidge dependencies |
| AIDGE_C_COMPILER   | gcc             | C compiler to use |
| AIDGE_CXX_COMPILER | g++             | C++ compiler to use |
| AIDGE_BUILD_TYPE   | Release         | Can either be Release or Debug |
| AIDGE_ASAN         | OFF             | Compile with ASan for debugging |
| AIDGE_WITH_CUDA    | ON              | Compile the CUDA kernels for quantization; requires nvcc and aidge_backend_cuda |
| AIDGE_CMAKE_ARCH   | ""              | Append architecture-specific arguments if provided |
| AIDGE_BUILD_GEN    | ""              | Specify a CMake generator (for example Ninja) |

Pip installation

pip install . -v

Tip: use environment variables to change the compilation options:

  • AIDGE_INSTALL : sets the installation folder. Defaults to /usr/local/lib. :warning: This path must be identical to the aidge_core install path.
  • AIDGE_BUILD_TYPE : sets the compilation mode to Debug or Release.
  • AIDGE_WITH_CUDA : if your machine has no graphics card, don't forget to set it to OFF.
  • AIDGE_BUILD_GEN : sets the CMake generator (for example Ninja).
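As a sketch, the options above can be combined in a single install session (the values here are illustrative, not defaults):

```shell
# Illustrative build configuration: Release mode, no CUDA, Ninja generator
# (variable names taken from the table above).
export AIDGE_BUILD_TYPE=Release
export AIDGE_WITH_CUDA=OFF   # no GPU on this machine
export AIDGE_BUILD_GEN=Ninja
# then, from the repository root:
# pip install . -v
```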

User guide

In order to perform a quantization, you need an Aidge model (which can be loaded from an ONNX file), a calibration dataset consisting of Aidge tensors (which can be created from NumPy arrays), and the desired number of quantization bits.

Performing PTQ on your model is then a one-liner:

aidge_quantization.quantize_network(aidge_model, nb_of_bits, calibration_set)
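For instance, a minimal sketch of preparing the inputs. The input shape, dataset size and `nb_bits` value are illustrative assumptions; the `quantize_network` call itself is shown commented, since it requires the Aidge modules and a loaded model:

```python
import numpy as np

# Illustrative calibration set: ten random input tensors.
# The shape 1x3x32x32 is an assumption -- use your model's input shape.
calibration_set = [np.random.randn(1, 3, 32, 32).astype(np.float32)
                   for _ in range(10)]
nb_bits = 8  # quantize to 8 bits

# With the Aidge modules installed, the PTQ call is then the one-liner
# from above (Aidge tensors can be built from these NumPy arrays):
# aidge_quantization.quantize_network(aidge_model, nb_bits, calibration_set)
```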

Technical insights

The PTQ algorithm consists of 3 main steps:

- Normalization of the parameters, so that each node's set of weights fits in the [-1, 1] range.
- Normalization of the activations, so that each node's output values fit in the [-1, 1] range.
- Quantization of the scaling nodes previously inserted.

To achieve those steps, the scaling factors must be propagated through the network, and the different branches must be balanced where they merge. Particular care is needed for the bias rescaling at each step.
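To make the three steps concrete, here is a deliberately simplified, framework-free sketch for a single layer. This is pure Python illustrating the idea, not the module's implementation:

```python
def ptq_single_layer(weights, activations, nb_bits):
    """Simplified PTQ sketch for one layer: normalize the weights into
    [-1, 1], record the activation scale observed on calibration data,
    then quantize the normalized weights to signed integers."""
    # Step 1: parameter normalization -- rescale weights into [-1, 1].
    w_scale = max(abs(w) for w in weights)
    norm_weights = [w / w_scale for w in weights]

    # Step 2: activation normalization -- the scaling factor comes from
    # the output range observed over the calibration dataset.
    a_scale = max(abs(a) for a in activations)

    # Step 3: quantization -- map normalized values onto the signed
    # integer grid [-(2^(n-1) - 1), 2^(n-1) - 1].
    step = (1 << (nb_bits - 1)) - 1
    q_weights = [round(w * step) for w in norm_weights]
    return q_weights, w_scale, a_scale

q_w, w_scale, a_scale = ptq_single_layer([0.5, -2.0, 1.0], [3.2, -1.1], nb_bits=8)
```

In the real algorithm, the recorded scales are not discarded: they are propagated through the graph (and across merging branches) as scaling nodes.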

Doing quantization step by step

It is possible to perform the PTQ step by step, thanks to the exposed functions of the API. In that case, here is the standard pipeline:

- Prepare the network for the PTQ (remove the flatten nodes, fuse the BatchNorms ...)
- Insert the scaling nodes that will allow the model calibration
- Perform the Cross Layer Equalization if possible
- Perform the parameter normalization
- Compute the node output ranges over an input calibration dataset
- Adjust the output ranges using a specified error metric (MSE, KL, ...)
- Perform the activation normalization
- Quantize the normalized network
- Convert the scaling factors to bit-shifting operations if needed
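The last step can be illustrated in isolation. Here is a sketch (again, not the module's code) of approximating a floating-point scaling factor by an integer multiply followed by a right shift, so that rescaling needs no floating-point arithmetic:

```python
def scale_to_multiplier(scale, shift_bits=16):
    """Approximate a float scaling factor as multiplier / 2**shift_bits,
    so that applying it reduces to a multiply and a bit shift."""
    return round(scale * (1 << shift_bits))

def apply_scale(value, multiplier, shift_bits=16):
    # Integer-only rescaling: multiply, then shift right.
    return (value * multiplier) >> shift_bits

m = scale_to_multiplier(0.75)  # 0.75 * 65536 = 49152
assert apply_scale(1000, m) == 750
```

The choice of `shift_bits` trades approximation accuracy against the integer width needed for the intermediate product.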

Further work

  • Add Quantization-Aware Training (QAT)

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

| File | Size | Python | Platform |
|------|------|--------|----------|
| aidge_quantization-0.4.2-cp313-cp313-manylinux_2_28_x86_64.whl | 1.7 MB | CPython 3.13 | manylinux: glibc 2.28+ x86-64 |
| aidge_quantization-0.4.2-cp312-cp312-manylinux_2_28_x86_64.whl | 1.6 MB | CPython 3.12 | manylinux: glibc 2.28+ x86-64 |
| aidge_quantization-0.4.2-cp311-cp311-manylinux_2_28_x86_64.whl | 1.6 MB | CPython 3.11 | manylinux: glibc 2.28+ x86-64 |
| aidge_quantization-0.4.2-cp310-cp310-manylinux_2_28_x86_64.whl | 1.6 MB | CPython 3.10 | manylinux: glibc 2.28+ x86-64 |

File hashes

| File | Algorithm | Hash digest |
|------|-----------|-------------|
| cp313 wheel | SHA256 | 7d68f7629593a8a784c6a716fc0d6c365801f986a56a7d1f0d3586ed0444bf6c |
| | MD5 | 0ebc940c904bed5f95adaad57429bcf5 |
| | BLAKE2b-256 | af42d003c3b4adc1a8b55e8b52e9c6137d981de4b46ebb9dedc05785a584aefd |
| cp312 wheel | SHA256 | c4cf18bf903c62aeae7639ee3254a450997aece98035465666a6a6318bafc28a |
| | MD5 | e310a4f4d0b9ada82ebc6ea9d6f2b573 |
| | BLAKE2b-256 | db5e7d627fd9a467d51b2c2404736ba79680cfbbdd455389d5fe827630623ba2 |
| cp311 wheel | SHA256 | 5f61c7d2407eda6a7ccff50bb4780fbcd33bb1a76e9ec26c8b5e2783824ba5e7 |
| | MD5 | 4cee5bf998755cbf832249d2cf24ca9d |
| | BLAKE2b-256 | f17d50e2acc8274a0c287a590d9907c6bcd28ce079c21d3347ad22ce6a09ba8a |
| cp310 wheel | SHA256 | a40e1357d5e32762939f374befff95dbf2121843b7c4779c07ff22b0beb136b8 |
| | MD5 | 2f2dfc1b7c33f351d02beef32b0b8b66 |
| | BLAKE2b-256 | 8d14c25d00822415cd693d22cc32a6e85196d20fad9cf7b3c16d276d1d3938eb |
