
HGQ2: High Granularity Quantization 2


HGQ2 Overview

HGQ2 (High Granularity Quantization 2) is a quantization-aware training framework built on Keras v3, targeting real-time deep learning applications on edge devices like FPGAs. It provides a comprehensive set of tools for creating and training quantized neural networks with minimal effort.

HGQ2 implements a gradient-based algorithm for automatic bitwidth optimization and quantization-aware training. By leveraging gradients, it allows bitwidth optimization at arbitrary granularity, down to the per-weight and per-activation level.

  • High Granularity: HGQ2 supports per-weight and per-activation bitwidth optimization, as well as any coarser granularity.
  • Automatic Quantization: Bitwidths are optimized via gradients; in general there is no need to tune them manually.
  • What you see is what you get: the Keras model and the RTL model produce exactly the same outputs,
    • subject only to machine floating-point precision limitations.
  • Accurate Resource Estimation: the EBOPs metric estimated by HGQ2 gives a good indication of actual resource usage on FPGA, approximating either an upper bound on LUTs (da4ml) or LUT + 55 * DSP (hls4ml).

In addition, this framework improves upon the old HGQ implementation in the following aspects:

  • Scalability: HGQ2 supports the TensorFlow, JAX, and PyTorch backends. As XLA compilation in JAX and TensorFlow can significantly speed up training, HGQ2 can train 1.2-5 times faster than the previous implementation.
  • Quantizers:
    • Fixed-point: While the previous implementation only optimized the number of fractional bits, with a single way of parameterizing fixed-point numbers, HGQ2 supports multiple parameterizations and allows optimizing any part of them via gradients.
    • Minifloat: Training with minifloat quantization is supported, also with surrogate gradient support (alpha quality).
  • More Layers: More layers are supported now, including the powerful EinsumDense(BatchNorm) layer and the MultiHeadAttention layer with bit-accurate softmax and scaled dot-product attention.
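Since HGQ2 is built on Keras v3, the training backend is selected the standard Keras way: set the `KERAS_BACKEND` environment variable before `keras` is first imported. A minimal sketch (the choice of `"tensorflow"` here is illustrative; `"jax"` and `"torch"` work the same way):

```python
import os

# Select the Keras 3 backend *before* the first `import keras`.
# HGQ2 runs on any of the three; JAX and TensorFlow additionally
# benefit from XLA compilation during training.
os.environ["KERAS_BACKEND"] = "tensorflow"  # or "jax" / "torch"

import keras  # must come after the environment variable is set

print(keras.backend.backend())  # reports the active backend
```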

Installation

pip install HGQ2

If you are using da4ml, please make sure it is at least version 0.6:

pip install 'da4ml>=0.6'

If you are using hls4ml, please make sure it is at least version 1.2:

pip install 'hls4ml>=1.2.0'

Usage

Please refer to the documentation for more details on how to use the library.

A minimal example is shown below:

   import keras
   from hgq.layers import QDense, QConv2D
   from hgq.config import LayerConfigScope, QuantizerConfigScope

   # Setup quantization configuration
   # These values are the defaults, just for demonstration purposes here
   with (
      # Configuration scope setting the default quantization type and overflow mode
      QuantizerConfigScope(place='all', default_q_type='kbi', overflow_mode='SAT_SYM'),
      # The second configuration scope overrides the first one for the 'datalane' place
      QuantizerConfigScope(place='datalane', default_q_type='kif', overflow_mode='WRAP'),
      # Configuration scope enabling EBOPs and setting the beta0 value
      LayerConfigScope(enable_ebops=True, beta0=1e-5),
   ):
      model = keras.Sequential([
         QConv2D(32, (3, 3), activation='relu'),
         keras.layers.MaxPooling2D((2, 2)),
         keras.layers.Flatten(),
         QDense(10)
      ])

Citation

If you use HGQ2 in your research, please consider citing the following paper:

@inproceedings{hgq,
author = {Sun, Chang and Que, Zhiqiang and Aarrestad, Thea and Loncar, Vladimir and Ngadiuba, Jennifer and Luk, Wayne and Spiropulu, Maria},
title = {HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs},
year = {2026},
isbn = {9798400720796},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3748173.3779200},
doi = {10.1145/3748173.3779200},
booktitle = {Proceedings of the 2026 ACM/SIGDA International Symposium on Field Programmable Gate Arrays},
pages = {79–91},
numpages = {13},
keywords = {quantization-aware training, fpga, real-time inference, neural networks, hardware-software codesign, low-latency, quantization},
location = {USA},
series = {FPGA '26}
}
