
High Granularity Quantization 2

Project description

HGQ2: High Granularity Quantization 2


HGQ2 Overview

HGQ2 (High Granularity Quantization 2) is a quantization-aware training framework built on Keras v3, targeting real-time deep learning applications on edge devices like FPGAs. It provides a comprehensive set of tools for creating and training quantized neural networks with minimal effort.

HGQ2 implements a gradient-based automatic bitwidth optimization and quantization-aware training algorithm. By leveraging gradients, it allows bitwidth optimization at arbitrary granularity, down to the per-weight and per-activation level.
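The core trick behind gradient-based bitwidth optimization is the straight-through estimator (STE): quantization is applied in the forward pass, while the backward pass treats the rounding as (approximately) the identity so gradients can reach both the data and the bitwidth parameters. A minimal sketch of the idea, illustrative only and not HGQ2's actual implementation:

```python
def quantize_fixed(x: float, f: int) -> float:
    # Forward pass: round x onto a fixed-point grid with f fractional bits.
    scale = 2.0 ** f
    return round(x * scale) / scale

def quantize_fixed_grad(upstream: float) -> float:
    # Backward pass (straight-through estimator): pretend round() is the
    # identity, so the upstream gradient passes through unchanged.
    return upstream
```

Because the gradient passes through the quantizer, a bitwidth parameter such as `f` can be made trainable and regularized toward lower precision.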

  • High Granularity: HGQ2 supports per-weight and per-activation bitwidth optimization, as well as any coarser granularity.
  • Automatic Quantization: Bitwidths are optimized via gradients; in general there is no need to tune them manually.
  • What you see is what you get: The RTL model produces exactly the same outputs as the Keras model.
    • Still subject to machine float precision limitations.
  • Accurate Resource Estimation: The EBOPs estimated by HGQ2 give a good indication of actual FPGA resource usage: an upper bound on LUTs (da4ml), or LUT + 55 * DSP (hls4ml).
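As a rough sketch of the idea behind EBOPs (the exact definition lives in HGQ2 itself), the cost of a dense layer can be approximated by summing bitwidth products over its multiplications, since an a-bit-by-b-bit multiply costs on the order of a * b bit operations. Everything below is illustrative, not the library's API:

```python
def dense_bops(input_bits, weight_bits):
    # Approximate bit operations of a dense layer: each multiply of an
    # a-bit activation with a b-bit weight contributes roughly a * b.
    total = 0
    for a, row in zip(input_bits, weight_bits):
        for b in row:
            total += a * b
    return total

# Two inputs (4 and 6 bits) fanning into two outputs with 3-bit weights each:
bops = dense_bops([4, 6], [[3, 3], [3, 3]])  # 4*3 + 4*3 + 6*3 + 6*3 = 60
```

Zero-bit weights or activations contribute nothing, which is how aggressive quantization translates directly into resource savings.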

In addition, this framework improves upon the old HGQ implementation in the following aspects:

  • Scalability: Built on Keras v3, HGQ2 supports the TensorFlow, JAX, and PyTorch backends. XLA compilation in JAX and TensorFlow can significantly speed up the training process: HGQ2 can train 1.2-5 times faster than the previous implementation.
  • Quantizers:
    • Fixed-point: While the previous implementation optimized only the number of fractional bits, with a single way of parameterizing fixed-point numbers, HGQ2 supports multiple parameterizations and allows optimizing any part of them via gradients.
    • Minifloat: Training with minifloat quantization is supported, also with surrogate gradient support (alpha quality).
  • More Layers: More layers are supported, including the powerful EinsumDense(BatchNorm) layer and the MultiHeadAttention layer with bit-accurate softmax and scaled dot-product attention.
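The quantizer type names 'kbi' and 'kif' appear in the configuration example further down this page; the reading assumed in this sketch (k: sign bit, b: total bits, i: integer bits, f: fractional bits) is an interpretation, not taken from the HGQ2 source. Under that assumption, the two parameterizations describe the same saturating fixed-point grid:

```python
def quantize_kif(x, k, i, f):
    # Hypothetical 'kif'-style quantizer: sign bit k, i integer bits,
    # f fractional bits, saturating at the representable range.
    step = 2.0 ** -f
    lo = -(2.0 ** i) if k else 0.0
    hi = 2.0 ** i - step
    return min(max(round(x / step) * step, lo), hi)

def quantize_kbi(x, k, b, i):
    # Hypothetical 'kbi'-style quantizer: total width b and integer bits i;
    # assuming f = b - i, it maps onto the same grid as quantize_kif.
    return quantize_kif(x, k, i, b - i)
```

Which parts of (k, i, f) or (k, b, i) are exposed to the optimizer is what distinguishes the parameterizations during training.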

Installation

pip install HGQ2

If you are using da4ml, please make sure it is at least version 0.3:

pip install "da4ml>=0.3"

If you are using hls4ml, please make sure it is at least version 1.2:

pip install "hls4ml>=1.2.0"

Usage

Please refer to the documentation for more details on how to use the library.

A minimal example is shown below:

   import keras
   from hgq.layers import QDense, QConv2D
   from hgq.config import LayerConfigScope, QuantizerConfigScope

   # Set up the quantization configuration.
   # These values are the defaults, shown here for demonstration purposes.
   with (
      # Default quantization type and overflow mode for all quantizers;
      # the 'datalane' scope below overrides these for datalane quantizers.
      QuantizerConfigScope(place='all', default_q_type='kbi', overflow_mode='SAT_SYM'),
      QuantizerConfigScope(place='datalane', default_q_type='kif', overflow_mode='WRAP'),
      # Enable EBOPs resource estimation and set the beta0 regularization strength.
      LayerConfigScope(enable_ebops=True, beta0=1e-5),
   ):
      model = keras.Sequential([
         QConv2D(32, (3, 3), activation='relu'),
         keras.layers.MaxPooling2D((2, 2)),
         keras.layers.Flatten(),
         QDense(10),
      ])


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hgq2-0.1.7.tar.gz (189.5 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

hgq2-0.1.7-py3-none-any.whl (97.7 kB view details)

Uploaded Python 3

File details

Details for the file hgq2-0.1.7.tar.gz.

File metadata

  • Download URL: hgq2-0.1.7.tar.gz
  • Upload date:
  • Size: 189.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hgq2-0.1.7.tar.gz
Algorithm Hash digest
SHA256 33fcc835517a6ba6632d25c88fbf14cb634dcd9606e4838024a90633dc85baf0
MD5 dded4fc1e98043dd2f4c922b1dcb9282
BLAKE2b-256 5c521f05633144151ef09554bcf78a10451d45e18e11c1f1a96c7ce5ca20cde0

See more details on using hashes here.

Provenance

The following attestation bundles were made for hgq2-0.1.7.tar.gz:

Publisher: python-publish.yml on calad0i/HGQ2

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file hgq2-0.1.7-py3-none-any.whl.

File metadata

  • Download URL: hgq2-0.1.7-py3-none-any.whl
  • Upload date:
  • Size: 97.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hgq2-0.1.7-py3-none-any.whl
Algorithm Hash digest
SHA256 332b3a7e9b945583d3b5e9c40094ca83b5e9dcc0b6a07e5dec48bf0b86105785
MD5 f13657da9f54629dbde6e7603db5491b
BLAKE2b-256 de8c6ad7745a4d3fc666fde7c004570ce1f724bcea1e333f37d0e2b7a1fa4c58

See more details on using hashes here.

Provenance

The following attestation bundles were made for hgq2-0.1.7-py3-none-any.whl:

Publisher: python-publish.yml on calad0i/HGQ2

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

Supported by
