Quantization utility modules that bridge torch.fx and PT2E quantized models, as well as ONNX and other backends. Inspired by methods in mmdeploy, but without the outdated dependencies, and with some features not found there.
Project description
quantizeutils
Quantization utility modules I used on my About Quantization guide.
Installation
# @ shell
pip install quantizeutils
# or
poetry add quantizeutils
Usage
Pre and Post Process FX traced models before QAT
- quantizeutils.fx.utils.pre_procecss.propagate_split_share_qparams_pre_process() - FX symbolic tracing produces incorrectly shared quantization parameters when torch.split() is present in the graph. This function fixes that.
- quantizeutils.fx.utils.pre_procecss.relu_clamp_backend_config_unshare_observers() - ReLU and torch.clamp use shared observers in the native torch backend config (the default). This needlessly expands the quantization range, keeping, for example, min values below 0 on ReLU nodes and wasting quantization levels that are never used. This function fixes that when applied before FX tracing.
- quantizeutils.fx.utils.post_process.fuse_qat_bn_post_process() - Prepares unfused QAT nodes (for example, batch normalization) before exporting to ONNX.
- quantizeutils.fx.utils.post_process.merge_relu_clamp_to_qparams_post_process - Some modules, such as Conv+ReLU, fuse automatically in the native backend but remain unfused when exported to ONNX or other backends. This function merges ReLU and torch.clamp node activations into the previous node as part of its q_min and q_max, instead of relying on a separate activation node.
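The two observer-related helpers above both come down to standard affine quantization arithmetic. Here is a minimal, self-contained sketch (plain Python using the textbook affine qparams formula, not code from quantizeutils) showing (1) the quantization levels a ReLU output wastes when it shares an observer with its pre-activation input, and (2) how a ReLU can be folded into the previous node's qparams by raising q_min to the zero point:

```python
def affine_qparams(rmin, rmax, qmin=0, qmax=255):
    """Standard affine quantization parameters from an observed float range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include zero
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# (1) Shared vs. unshared observer on a ReLU output. With a shared
#     observer the ReLU output inherits the pre-activation range [-3, 5],
#     spending 3/8 of the 256-level grid on values ReLU never produces.
shared_scale, _ = affine_qparams(-3.0, 5.0)
own_scale, _ = affine_qparams(0.0, 5.0)
assert shared_scale > own_scale  # coarser quantization steps when shared

# (2) Folding ReLU into the previous node's qparams: instead of a separate
#     activation node, clamp in the integer domain by raising q_min to the
#     zero point, so dequantized values can never be negative.
scale, zp = affine_qparams(-3.0, 5.0)
q = 10                                             # some quantized output
y_separate = max(dequantize(q, scale, zp), 0.0)    # explicit ReLU node
y_merged = dequantize(max(q, zp), scale, zp)       # ReLU folded into q_min
assert y_separate == y_merged
```

The same q_min/q_max reasoning extends to torch.clamp with arbitrary bounds, which is what the merge post-process relies on.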
FX Backend for AIEdgeTorch export
AIEdgeTorch is a powerful (but still volatile) tool for converting torch models to TensorFlow through PT2E. Since some models are currently only quantized with FX graphs, I wrote an FX backend configuration to convert FX models into ai_edge_torch-exportable models. More in my About Quantization guide.
quantizeutils.fx.backend_config.ai_edge_backend
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file quantizeutils-0.1.0.tar.gz.
File metadata
- Download URL: quantizeutils-0.1.0.tar.gz
- Upload date:
- Size: 20.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.4 CPython/3.10.12 Linux/5.15.153.1-microsoft-standard-WSL2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 710bd7a1fd4f7c4c22819ab6ed3cebbfa1d4c68f6ae66ce750190278094c2ae2 |
| MD5 | 8c508819579c3de2b7c6ea6dad55d96c |
| BLAKE2b-256 | 1952cc5b52422bcc8b17daf2ab2b5e6ec6fe5f284adf41437fe95b2e16c090b5 |
File details
Details for the file quantizeutils-0.1.0-py3-none-any.whl.
File metadata
- Download URL: quantizeutils-0.1.0-py3-none-any.whl
- Upload date:
- Size: 22.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.4 CPython/3.10.12 Linux/5.15.153.1-microsoft-standard-WSL2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 35020e25f0b07b9e931f0361c064522a3d77a0eddcd4fdabdad6f771fe45d850 |
| MD5 | a624d6bc5941ec12585d3e4f5034396a |
| BLAKE2b-256 | 0db412a696eb9877d2e3848dd7bbc8a6ce195c78c0e44da329455b5430283fb6 |