
Quantization algorithms to compress aidge networks.

Project description

Aidge Quantization Module

This repository contains the library that implements the Aidge quantization algorithms. For the moment, only Post-Training Quantization (PTQ) is available. Its implementation supports multi-branch architectures.


Installation

Dependencies

  • GCC
  • Make/Ninja
  • CMake
  • Python (optional; only needed if you intend to use this library from Python via pybind)

Aidge dependencies

  • aidge_core
  • aidge_onnx
  • aidge_backend_cpu

The requirements for installing the library are the following:

    • GCC, Make and CMake for the compilation pipeline
    • The AIDGE modules listed above
    • Python (> 3.7) if you intend to use the pybind wrapper

Pip installation

pip install . -v

Tip: use environment variables to change the compilation options:

  • AIDGE_INSTALL : sets the installation folder. Defaults to /usr/local/lib. :warning: This path must be identical to the aidge_core install path.
  • AIDGE_PYTHON_BUILD_TYPE : sets the compilation mode to Debug or Release
  • AIDGE_BUILD_GEN : sets the build generator used by CMake (e.g. Ninja)
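As a hedged example (the variable names come from the list above; the values are illustrative and must be adapted to your setup), a Debug build installed alongside aidge_core might be configured like this:

```shell
# Illustrative values: adjust AIDGE_INSTALL to match your aidge_core install path
export AIDGE_INSTALL=/usr/local/lib
export AIDGE_PYTHON_BUILD_TYPE=Debug
export AIDGE_BUILD_GEN=Ninja
pip install . -v
```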

User guide

In order to perform a quantization, you will need an AIDGE model (which can be loaded from an ONNX file). You will then have to provide a calibration dataset consisting of AIDGE tensors (which can be created from numpy arrays), and finally specify the number of quantization bits.

Performing the PTQ on your model is then a one-liner:

aidge_quantization.quantize_network(aidge_model, nb_of_bits, calibration_set)
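For intuition about what the number of bits controls, here is a self-contained numpy sketch of symmetric per-tensor quantization. It is a conceptual illustration only, not the module's internals:

```python
import numpy as np

def quantize_symmetric(values: np.ndarray, nb_bits: int):
    """Map float values to signed integers in [-(2**(n-1)-1), 2**(n-1)-1]."""
    q_max = 2 ** (nb_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(values)) / q_max  # one scaling factor per tensor
    q = np.round(values / scale).astype(np.int32)
    return q, scale

weights = np.array([0.5, -1.2, 0.03, 0.8])
q, scale = quantize_symmetric(weights, nb_bits=8)
print(q)          # integer codes
print(q * scale)  # dequantized approximation of the original weights
```

Fewer bits means a coarser grid: small weights (like 0.03 above) lose proportionally more precision, which is why calibration matters.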

Technical insights

The PTQ algorithm consists of 3 main steps:

- Normalization of the parameters, so that the weight set of each node fits in the [-1:1] range.
- Normalization of the activations, so that the output values of each node fit in the [-1:1] range.
- Quantization of the previously inserted scaling nodes.

To achieve those steps, the scaling factors must be propagated through the network, and the different branches must be balanced where they merge. Particular care is needed to rescale the biases at each step.
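As a plain-numpy illustration of the idea (not the Aidge implementation): dividing a node's weights by a factor s scales its output by 1/s, so s must be propagated downstream and the bias rescaled by the same factor:

```python
import numpy as np

# Toy linear node: y = W @ x + b
W = np.array([[2.0, -4.0], [1.0, 3.0]])
b = np.array([0.5, -1.0])
x = np.array([1.0, 2.0])

y_ref = W @ x + b

# Normalize the weights into [-1, 1] and keep the scaling factor
s = np.max(np.abs(W))   # 4.0 for this toy node
W_n = W / s             # normalized weights
b_n = b / s             # the bias must be rescaled by the same factor

# The node output is now scaled by 1/s; propagating s downstream
# (here, multiplying back) recovers the original output
y = s * (W_n @ x + b_n)
assert np.allclose(y, y_ref)
```

When two branches merge, their accumulated factors generally differ, which is why the algorithm has to balance them before the merge node.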

Doing quantization step by step

It is possible to perform the PTQ step by step, using the functions exposed by the API. In that case, the standard pipeline is:

- Prepare the network for the PTQ (remove the flatten nodes, fuse the BatchNorms ...)
- Insert the scaling nodes that will allow the model calibration
- Perform the Cross Layer Equalization if possible
- Perform the parameter normalization
- Compute the node output ranges over an input calibration dataset
- Adjust the output ranges using a specified error metric (MSE, KL, ...)
- Perform the activation normalization
- Quantize the normalized network
- Convert the scaling factors to bit-shifting operations if needed
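The last step of the pipeline can be pictured with this hedged sketch (an illustration of the technique, not the module's actual code): a float scaling factor is approximated by the nearest power of two, so rescaling becomes an integer bit shift:

```python
import math

def to_shift(scale: float) -> int:
    """Exponent n such that 2**(-n) is the power of two closest to scale."""
    return round(-math.log2(scale))

scale = 0.0312           # e.g. an accumulated activation scaling factor
n = to_shift(scale)      # 2**-5 = 0.03125 is the nearest power of two
acc = 1000               # an integer accumulator value
print(acc >> n)          # approximate acc * scale with a right shift
```

This trades a little accuracy for multiplier-free integer inference, which is useful on hardware without fast multipliers.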

Further work

  • Add Quantization-Aware Training (QAT)

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


  • aidge_quantization-0.4.0-cp313-cp313-manylinux_2_28_x86_64.whl (1.6 MB) : CPython 3.13, manylinux glibc 2.28+, x86-64
  • aidge_quantization-0.4.0-cp312-cp312-manylinux_2_28_x86_64.whl (1.6 MB) : CPython 3.12, manylinux glibc 2.28+, x86-64
  • aidge_quantization-0.4.0-cp311-cp311-manylinux_2_28_x86_64.whl (1.6 MB) : CPython 3.11, manylinux glibc 2.28+, x86-64
  • aidge_quantization-0.4.0-cp310-cp310-manylinux_2_28_x86_64.whl (1.6 MB) : CPython 3.10, manylinux glibc 2.28+, x86-64

File hashes

Hashes for aidge_quantization-0.4.0-cp313-cp313-manylinux_2_28_x86_64.whl:

  • SHA256 : c236794ec5c14b6224d9650001382c33be73bfa8e5b36133dc72022e2b422f2e
  • MD5 : 434e3b70e907d635f3e6dcf0f475aae4
  • BLAKE2b-256 : d8f4572439a0bbb8c0cf14755816434d560819727bae168afa5881c979570a11

Hashes for aidge_quantization-0.4.0-cp312-cp312-manylinux_2_28_x86_64.whl:

  • SHA256 : 768f334f3592f268bbc708a9da9ee4e947e644bc9a2b3aa740198bb7d93157ca
  • MD5 : 0ac1faef6987fd87d21826b1ff61dc98
  • BLAKE2b-256 : b76e98f2f3e2ddc3b21d8062899022e15f9b741d05b049d587e9be5b9f9563d7

Hashes for aidge_quantization-0.4.0-cp311-cp311-manylinux_2_28_x86_64.whl:

  • SHA256 : e209f2d6ad3961ccbb1b9279528ca4a95d33ce73f8cf1f5b6e94754466dc9fe9
  • MD5 : accae79cd31e2f6f3f80d1fcc5f25233
  • BLAKE2b-256 : 43d7eb9a2a66364855d77f90b225fe74298043e2df48143c63bba5d95546c5d7

Hashes for aidge_quantization-0.4.0-cp310-cp310-manylinux_2_28_x86_64.whl:

  • SHA256 : 031a260485d6924c1d1e548780a4e1a6a2671366bc67bf721b525070268c239a
  • MD5 : 8aea0bd07bba4828f580a4aa899afeb4
  • BLAKE2b-256 : 84b484449d1f9207946ae1c3b570fe33e0f670c699c8964cc7e8112609dd1a71
