
Quantization algorithms to compress aidge networks.


Aidge Quantization Module

This module implements the quantization algorithms of the Aidge framework. For the moment, only Post-Training Quantization (PTQ) is available. The implementation supports multi-branch architectures.

Prerequisites:

  • aidge_core
  • aidge_backend_cpu
  • aidge_backend_cuda
  • aidge_onnx
pip install aidge-quantization

🛠 Build from Source

Prerequisites (in addition to the previous ones):

1. Python or C++ installation using setup scripts

| Environment | C++ Development | Python Development |
|-------------|-----------------|--------------------|
| Windows | `.\setup.ps1 -Modules quantization -Tests -CppOnly` | `.\setup.ps1 -Modules quantization -Tests` |
| Unix | `./setup.sh -m quantization --tests --cpp-only` | `./setup.sh -m quantization --tests` |

[!TIP] Use Get-Help setup.ps1 (Windows) or ./setup.sh -h (Unix) for full documentation.

2. Python Installation using pip

Run these commands from the aidge_quantization/ directory:

# Standard install
pip install . -v

# Install with testing dependencies
pip install .[test] -v && pytest

Editable Install (Experimental)

Use this for real-time development without re-installing.

pip install --no-build-isolation -ve . --config-settings=editable.rebuild=true -Cbuild-dir=build

3. C++ Installation (CMake)

A CMakePresets.json is provided for standard configurations.

# Configure, Build, and Install
cmake --preset clang-debug
cmake --build --preset clang-debug
cmake --install build

# Run C++ Tests
ctest --test-dir build/

[!TIP] Create a CMakeUserPresets.json to define your own local build configurations.

User guide

To perform a quantization, you need an Aidge model (which can be loaded from an ONNX file), a calibration dataset made of Aidge tensors (which can be created from NumPy arrays), and the desired number of quantization bits.

Performing the PTQ on your model is then a one-liner:

aidge_quantization.quantize_network(aidge_model, nb_of_bits, calibration_set)
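To make the role of the number of bits concrete, here is a minimal, library-free sketch (plain NumPy, not the Aidge API) of what quantizing a single tensor to `nb_bits` signed levels amounts to:

```python
import numpy as np

def fake_quantize(tensor, nb_bits):
    """Map a float tensor onto a signed nb_bits integer grid and back.

    This mimics the effect of quantization on one tensor: values are
    rescaled to [-1, 1], rounded to one of 2**(nb_bits - 1) - 1 steps,
    then mapped back to the original scale.
    """
    scale = np.abs(tensor).max()          # per-tensor scaling factor
    if scale == 0:
        return tensor.copy()
    steps = 2 ** (nb_bits - 1) - 1        # e.g. 127 for 8 bits
    q = np.round(tensor / scale * steps)  # integer codes in [-steps, steps]
    return q / steps * scale              # dequantized approximation

x = np.array([0.1, -0.5, 0.9, -1.2])
x8 = fake_quantize(x, 8)   # close to x (max error ~scale / 254)
x2 = fake_quantize(x, 2)   # only the values -1.2, 0.0 and 1.2 survive
```

The fewer the bits, the coarser the grid, which is why the choice of `nb_of_bits` trades model size against accuracy.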

Technical insights

The PTQ algorithm consists of 3 main steps:

- Normalization of the parameters, so that each node's set of weights fits in the [-1, 1] range.
- Normalization of the activations, so that each node's output values fit in the [-1, 1] range.
- Quantization of the scaling nodes inserted in the previous steps.

To achieve those steps, the scaling factors must be propagated through the network, and the different branches must be balanced where they merge. Particular care is needed when rescaling the biases at each step.
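The normalization and propagation idea can be illustrated with a small, library-free sketch (plain NumPy, not the Aidge implementation) on a two-layer linear chain: each node's weights are divided by their largest magnitude, and the accumulated scaling factor is carried along so the normalized chain stays equivalent to the original one:

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 3))   # first layer weights
w2 = rng.normal(size=(2, 4))   # second layer weights
x = rng.normal(size=3)         # an input sample

# Reference output of the unmodified chain (biases omitted for brevity).
y_ref = w2 @ (w1 @ x)

# Step 1: normalize each node's weights into [-1, 1] and keep the factor.
s1 = np.abs(w1).max()
s2 = np.abs(w2).max()
w1n, w2n = w1 / s1, w2 / s2

# Propagating the scaling factors: the normalized network's output
# differs from the reference only by the product of the per-node
# factors, which a final scaling node can restore.
y_norm = w2n @ (w1n @ x)
y_restored = y_norm * (s1 * s2)

assert np.allclose(y_restored, y_ref)
```

With biases present, each bias must additionally be divided by the scaling factor accumulated on its node's input path, which is the "particular care" mentioned above.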

Doing quantization step by step

It is possible to perform the PTQ step by step, using the functions exposed by the API. In that case, the standard pipeline is:

- Prepare the network for the PTQ (remove the flatten nodes, fuse the BatchNorms ...)
- Insert the scaling nodes that will allow the model calibration
- Perform the Cross Layer Equalization if possible
- Perform the parameter normalization
- Compute the node output ranges over an input calibration dataset
- Adjust the output ranges using a specified error metric (MSE, KL, ...)
- Perform the activation normalization
- Quantize the normalized network
- Convert the scaling factors to bit-shifting operations if needed
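The last step of this pipeline can be sketched without the Aidge API: a positive float scaling factor is approximated by the nearest power of two 2**(-n), so that the multiplication can be replaced by an integer right shift on quantized values. The helper names below (`scale_to_shift`, `apply_shift`) are hypothetical, not part of the library:

```python
import math

def scale_to_shift(scale):
    """Return n such that 2**(-n) best approximates `scale` in log space."""
    assert scale > 0
    return round(-math.log2(scale))

def apply_shift(value, n):
    # Integer right shift, equivalent to multiplying by 2**(-n).
    return value >> n

n = scale_to_shift(0.26)       # 0.26 is closest to 2**-2 = 0.25, so n == 2
approx = apply_shift(1024, n)  # 1024 >> 2 == 256 (the exact product is ~266)
```

This substitution trades a small accuracy loss for multiplier-free inference, which matters on integer-only or embedded targets.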

Further work

  • add Quantization Aware Training (QAT)

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

- aidge_quantization-0.9.1.post2-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.6 MB): CPython 3.12, manylinux glibc 2.27+ / 2.28+, x86-64
- aidge_quantization-0.9.1.post2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.6 MB): CPython 3.11, manylinux glibc 2.27+ / 2.28+, x86-64
- aidge_quantization-0.9.1.post2-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.6 MB): CPython 3.10, manylinux glibc 2.27+ / 2.28+, x86-64

File hashes

aidge_quantization-0.9.1.post2-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl

- SHA256: 45ff8c19bf671cc6fde0fe7ab21e77219149cab18692dd0f91b844315a912b50
- MD5: 97cdecf070d659d5c7587df7abb34847
- BLAKE2b-256: 6475f269b92ac389eaab1dd962d8e76279225cf19a7d7eed99813ef6addc5d47

aidge_quantization-0.9.1.post2-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl

- SHA256: 061ec7e8367b25c9d217764fa6a9fb36a98faa20301773890b9416086bc9b007
- MD5: 3b024c7b63cf579de6c23a659e3ff8db
- BLAKE2b-256: f236116fa02845c179cb0d8942810d18aaf7e94ee5c480356fec89f6ca7756fc

aidge_quantization-0.9.1.post2-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl

- SHA256: b245d603787f6071fdf8a221691add9a4333170d30d3a28e46ad73af8e2c3a25
- MD5: 134c72ef42d88f667443c14c4b2f838a
- BLAKE2b-256: 00d070b37918f64ed7c959b3ac5e7c2fc1923e24cfd73135e46d8b7a3a55ba78
