
High Granularity Quantization

Project description


High Granularity Quantization

License: Apache 2.0 · Documentation · PyPI version · ArXiv

HGQ is a gradient-based, automatic bitwidth optimization and quantization-aware training algorithm for neural networks to be deployed on FPGAs. By leveraging gradients, it allows bitwidth optimization at arbitrary granularity, down to the per-weight and per-activation level.

[Figure: HGQ overview]

Compared to other heterogeneous quantization approaches, such as its QKeras counterpart, HGQ provides the following advantages:

  • High Granularity: HGQ supports per-weight and per-activation bitwidth optimization, as well as any coarser granularity.
  • Automatic Quantization: By setting a resource regularization term, HGQ automatically optimizes the bitwidths of all parameters during training (a training sketch follows below). Pruning happens naturally when a bitwidth is reduced to 0.
  • Bit-accurate conversion to hls4ml: the hls4ml model reproduces the outputs of the trained Keras model bit for bit. HGQ provides a bit-accurate conversion interface, proxy models, for converting to hls4ml models.
    • still subject to machine float precision limitations.
  • Accurate Resource Estimation: the BOPs estimated by HGQ approximate the actual (post place & route) FPGA resource consumption as roughly #LUTs + 55·#DSPs. This metric is available during training, so the resource consumption of the final model can be estimated at a very early stage.

Depending on the specific application, HGQ can achieve up to 20x resource reduction compared to the AutoQKeras approach while maintaining the same accuracy. For more challenging tasks, where the model is already under-fitted, HGQ can still improve performance at the same on-board resource consumption. For more details, please refer to our paper here.
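The following sketch illustrates the training-time side of this workflow: each HGQ layer takes a beta factor that weights the BOPs (resource) regularization, and callbacks expose the running BOPs estimate during training. The layer and callback names (HQuantize, HDense, ResetMinMax, FreeBOPs) follow the HGQ documentation, while the architecture, beta value, and data below are placeholders; consult the documentation for the exact API of your HGQ version.

import numpy as np
from tensorflow import keras
from HGQ.layers import HDense, HQuantize
from HGQ import ResetMinMax, FreeBOPs

beta = 1e-5  # example BOPs regularization strength; larger values push bitwidths lower

model = keras.models.Sequential([
    keras.Input(shape=(16,)),
    HQuantize(beta=beta),                      # quantize the input
    HDense(32, activation='relu', beta=beta),  # per-weight/per-activation bitwidths are learned
    HDense(32, activation='relu', beta=beta),
    HDense(10, beta=beta),
])

model.compile(
    optimizer='adam',
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

# ResetMinMax resets the tracked activation ranges each epoch;
# FreeBOPs logs the estimated BOPs as a metric during training.
callbacks = [ResetMinMax(), FreeBOPs()]

# Placeholder data standing in for a real dataset.
x_train = np.random.rand(1024, 16).astype('float32')
y_train = np.random.randint(0, 10, 1024)

model.fit(x_train, y_train, epochs=5, batch_size=64, callbacks=callbacks)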

Installation

You will need python>=3.10 and tensorflow>=2.13 to run this framework. You can install it via pip:

pip install hgq
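As a quick, hedged sanity check after installation (the import name HGQ and the version requirements are taken from this page and the project documentation):

import sys
import tensorflow as tf
import HGQ  # the distribution installs the HGQ package

assert sys.version_info >= (3, 10), "HGQ requires python>=3.10"
print("TensorFlow version:", tf.__version__)  # should satisfy tensorflow>=2.13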

Usage

Please refer to the documentation for more details.
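As a minimal sketch of the proxy-model conversion path mentioned above, the steps below follow the HGQ and hls4ml documentation: trace_minmax calibrates integer ranges on representative data, to_proxy_model builds the bit-accurate proxy, and hls4ml converts it. The calibration data, cover_factor, backend, part, and output directory are placeholders; check the documentation for the exact signatures in your versions of HGQ and hls4ml.

from HGQ import trace_minmax, to_proxy_model
from hls4ml.converters import convert_from_keras_model

# Calibrate integer ranges on representative data (e.g. the training data from the sketch above).
trace_minmax(model, x_train, cover_factor=1.0)

# Build the bit-accurate proxy model consumed by hls4ml.
proxy = to_proxy_model(model, aggressive=True)

# Convert with hls4ml; backend, part, and output_dir are placeholders.
hls_model = convert_from_keras_model(
    proxy,
    backend='vivado',
    output_dir='hls4ml_prj',
    part='xcvu9p-flga2104-2L-e',
)
hls_model.compile()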

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hgq-0.2.2.tar.gz (95.5 kB, Source)

Built Distribution

HGQ-0.2.2-py3-none-any.whl (46.4 kB, Python 3)

File details

Details for the file hgq-0.2.2.tar.gz.

File metadata

  • Download URL: hgq-0.2.2.tar.gz
  • Upload date:
  • Size: 95.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for hgq-0.2.2.tar.gz

  • SHA256: a952c8188bc67ae4da8fab8f22bad4dbebb9d8ec7c4c0e6a293a5a431003e203
  • MD5: 91839f22a36d1205b3bf208680eb7b6c
  • BLAKE2b-256: f5869dd773efe24a456e5c7bb0fb5b322dbe010ce343c6e4c461faca5efa9c3f

See more details on using hashes here.
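For example, a downloaded source distribution can be checked against the SHA256 digest listed above using only the Python standard library (the filename and expected digest below are the ones shown on this page):

import hashlib

# SHA256 digest of hgq-0.2.2.tar.gz as listed on this page
expected = "a952c8188bc67ae4da8fab8f22bad4dbebb9d8ec7c4c0e6a293a5a431003e203"

with open("hgq-0.2.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "hash mismatch: the file may be corrupted or tampered with"
print("SHA256 verified")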

File details

Details for the file HGQ-0.2.2-py3-none-any.whl.

File metadata

  • Download URL: HGQ-0.2.2-py3-none-any.whl
  • Upload date:
  • Size: 46.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for HGQ-0.2.2-py3-none-any.whl

  • SHA256: 5b90afdd5e4536066a277b2f3d5b76a33b5d7904a4496ecc2a2c3240911d09fd
  • MD5: cbe62fbda9aa9d0c6719b1692c135163
  • BLAKE2b-256: 56f529136f66d6525a39d06cf5cdd6efd2fa96c6eeb486f3d93d80b3b111ae3d

See more details on using hashes here.
