
Quantization utility modules that bridge torch FX and PT2E quantized models, as well as ONNX and other backends. Inspired by methods in mmdeploy, but without its outdated dependencies, and with some features not found there.

Project description

quantizeutils

Quantization utility modules I use in my About Quantization guide.

Installation

# @ shell

pip install quantizeutils

# or

poetry add quantizeutils

Usage

Pre- and post-process FX-traced models before QAT

  • quantizeutils.fx.utils.pre_procecss.propagate_split_share_qparams_pre_process()

    • torch.fx.trace() produces incorrectly shared quantization parameters when torch.split() is present in the graph. This function fixes that.
  • quantizeutils.fx.utils.pre_procecss.relu_clamp_backend_config_unshare_observers()

    • ReLU and torch.clamp use shared observers in the native torch backend config (the default). This unnecessarily expands the quantization min and max, for example keeping min values below 0 on ReLU nodes and wasting quantization range. This function fixes that when applied before FX tracing.
  • quantizeutils.fx.utils.post_process.fuse_qat_bn_post_process()

    • Prepares unfused QAT nodes (for example, batch normalization) before exporting to ONNX.
  • quantizeutils.fx.utils.post_process.merge_relu_clamp_to_qparams_post_process()

    • Some fusions, such as Conv+ReLU, happen automatically in the native backend but remain unfused when exporting to ONNX or other backends. This function merges ReLU and torch.clamp activations into the preceding node's q_min and q_max instead of relying on a separate activation node.
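Putting the hooks above together, a minimal QAT sketch. The quantizeutils calls are left commented out because their exact signatures are assumptions; the torch FX flow around them is standard:

```python
import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx, convert_fx

# The quantizeutils imports below mirror the module paths listed above;
# their exact call signatures are assumptions, so they stay commented out.
# from quantizeutils.fx.utils.pre_procecss import (
#     propagate_split_share_qparams_pre_process,
#     relu_clamp_backend_config_unshare_observers,
# )
# from quantizeutils.fx.utils.post_process import (
#     fuse_qat_bn_post_process,
#     merge_relu_clamp_to_qparams_post_process,
# )

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3, padding=1)
        self.bn = torch.nn.BatchNorm2d(8)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

model = TinyNet().train()  # QAT preparation requires train mode
example_inputs = (torch.randn(1, 3, 16, 16),)

# Pre-process steps (observer unsharing, split qparam propagation)
# would run before/around prepare_qat_fx.
prepared = prepare_qat_fx(model, get_default_qat_qconfig_mapping(), example_inputs)

# ... QAT fine-tuning loop on `prepared` goes here ...

quantized = convert_fx(prepared.eval())
# Post-process steps (BN fusion, ReLU/clamp merging into qparams)
# would run here, before torch.onnx.export(quantized, ...).
```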

FX Backend for AIEdgeTorch export

AIEdgeTorch is a powerful (but still volatile) tool for converting torch models to TensorFlow through PT2E. Since some models can currently only be quantized with FX graphs, I wrote an FX backend configuration to convert FX models into ai_edge_torch-exportable models. More in my About Quantization guide.

quantizeutils.fx.backend_config.ai_edge_backend
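A hedged sketch of how such a backend config would slot into the standard FX PTQ flow. The ai_edge_backend import and its config-getter name are assumptions, so they are commented out; without them this runs with torch's default backend config:

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical: the exact object exposed by
# quantizeutils.fx.backend_config.ai_edge_backend is an assumption.
# from quantizeutils.fx.backend_config import ai_edge_backend

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 4, 3),
    torch.nn.ReLU(),
).eval()  # PTQ preparation requires eval mode
example_inputs = (torch.randn(1, 3, 8, 8),)

prepared = prepare_fx(
    model,
    get_default_qconfig_mapping(),
    example_inputs,
    # backend_config=ai_edge_backend.get_backend_config(),  # hypothetical name
)
prepared(*example_inputs)         # calibration pass
quantized = convert_fx(prepared)  # result would then go to ai_edge_torch export
```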

Download files

Download the file for your platform.

Source Distribution

quantizeutils-0.1.0.tar.gz (20.6 kB)


Built Distribution


quantizeutils-0.1.0-py3-none-any.whl (22.9 kB)


File details

Details for the file quantizeutils-0.1.0.tar.gz.

File metadata

  • Download URL: quantizeutils-0.1.0.tar.gz
  • Upload date:
  • Size: 20.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.10.12 Linux/5.15.153.1-microsoft-standard-WSL2

File hashes

Hashes for quantizeutils-0.1.0.tar.gz
  • SHA256: 710bd7a1fd4f7c4c22819ab6ed3cebbfa1d4c68f6ae66ce750190278094c2ae2
  • MD5: 8c508819579c3de2b7c6ea6dad55d96c
  • BLAKE2b-256: 1952cc5b52422bcc8b17daf2ab2b5e6ec6fe5f284adf41437fe95b2e16c090b5
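To check a downloaded archive against the digest above, a minimal sketch (the local file path is an assumption about where your tool saved the file):

```python
import hashlib

# SHA256 digest for quantizeutils-0.1.0.tar.gz, copied from the table above.
EXPECTED = "710bd7a1fd4f7c4c22819ab6ed3cebbfa1d4c68f6ae66ce750190278094c2ae2"

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Hash the file in chunks so large archives never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Path is an assumption -- point it at the downloaded file:
# assert sha256_of("quantizeutils-0.1.0.tar.gz") == EXPECTED
```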


File details

Details for the file quantizeutils-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: quantizeutils-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 22.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.10.12 Linux/5.15.153.1-microsoft-standard-WSL2

File hashes

Hashes for quantizeutils-0.1.0-py3-none-any.whl
  • SHA256: 35020e25f0b07b9e931f0361c064522a3d77a0eddcd4fdabdad6f771fe45d850
  • MD5: a624d6bc5941ec12585d3e4f5034396a
  • BLAKE2b-256: 0db412a696eb9877d2e3848dd7bbc8a6ce195c78c0e44da329455b5430283fb6

