
High Granularity Quantization 2

Project description

HGQ2: High Granularity Quantization 2


HGQ2 Overview

HGQ2 (High Granularity Quantization 2) is a quantization-aware training framework built on Keras v3, targeting real-time deep learning applications on edge devices like FPGAs. It provides a comprehensive set of tools for creating and training quantized neural networks with minimal effort.

HGQ2 implements a gradient-based, automatic bitwidth optimization and quantization-aware training algorithm. By leveraging gradients, it allows bitwidth optimization at arbitrary granularity, down to the per-weight and per-activation level.
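The trade-off being optimized can be illustrated with a toy example: the total loss combines quantization error with a resource penalty weighted by a regularization factor (beta in HGQ2), so the optimal bitwidth balances accuracy against cost. HGQ2 descends this trade-off with (surrogate) gradients during training; the plain-Python sketch below is not the HGQ2 internals and simply scans candidate widths to expose the same optimum.

```python
def quantize(x, frac_bits):
    # Round x to a grid with 2**-frac_bits spacing (fixed-point fractional part).
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def total_loss(values, frac_bits, beta):
    # Quantization error (MSE) plus a resource penalty proportional to the
    # bitwidth, mimicking the accuracy-vs-resource trade-off HGQ2 optimizes.
    mse = sum((v - quantize(v, frac_bits)) ** 2 for v in values) / len(values)
    return mse + beta * frac_bits

values = [0.113, -0.842, 0.577, 0.301]
beta = 1e-4
# HGQ2 finds this balance via gradients; here a brute-force scan suffices.
best = min(range(16), key=lambda f: total_loss(values, f, beta))
```

Too few bits drive the error term up; too many bits drive the penalty term up, so the minimum lands at an intermediate width.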

  • High Granularity: HGQ2 supports per-weight and per-activation bitwidth optimization, or any coarser granularity.
  • Automatic Quantization: Bit-widths are optimized via gradients, no need to manually tune them in general.
  • What you see is what you get: The RTL model computes exactly what the Keras model computes.
    • Still subject to machine floating-point precision limitations.
  • Accurate Resource Estimation: The EBOPs metric estimated by HGQ2 gives a good indication of actual resource usage on FPGA: an upper bound on LUTs (da4ml), or LUT + 55 * DSP (hls4ml).
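The two resource figures above can be compared on a common scale; under the hls4ml cost model quoted in the last bullet, each DSP counts as 55 LUTs. A minimal helper (the function name is hypothetical, not part of the HGQ2 API):

```python
def lut_equivalent(luts: int, dsps: int) -> int:
    # hls4ml-style cost model from the text: each DSP is counted as 55 LUTs.
    # For da4ml, EBOPs directly upper-bounds LUT usage, so dsps would be 0.
    return luts + 55 * dsps
```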

In addition, this framework improves upon the old HGQ implementation in the following aspects:

  • Scalability: HGQ2 supports TensorFlow, JAX, and PyTorch backends. XLA compilation in JAX and TensorFlow can significantly speed up training; HGQ2 can be 1.2-5 times faster than the previous implementation.
  • Quantizers:
    • Fixed-point: While the previous implementation only optimized the number of fractional bits under a single parametrization of fixed-point numbers, HGQ2 supports multiple parametrizations and allows optimizing any part of them via gradients.
    • Minifloat: Training with minifloat quantization is supported, including surrogate gradient support (alpha quality).
  • More Layers: More layers are supported now, including the powerful EinsumDense(BatchNorm) layer and the MultiHeadAttention layer with bit-accurate softmax and scaled dot-product attention.
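As a rough illustration of what "multiple ways of parametrizing" fixed-point numbers means: the same format can be described by total bits and integer bits, or by integer and fractional bits, related by b = i + f (plus a sign flag k). This reading of the 'kbi'/'kif' q_type names is an assumption here; check the HGQ2 documentation for the exact definitions. A plain-Python sketch, not the HGQ2 quantizers:

```python
def quantize_kif(x, k, i, f):
    # Fixed-point quantization with sign flag k, i integer bits, and
    # f fractional bits: round to a 2**-f grid, then saturate.
    step = 2.0 ** -f
    q = round(x / step) * step
    lo = -(2.0 ** i) if k else 0.0
    hi = 2.0 ** i - step
    return min(max(q, lo), hi)

def quantize_kbi(x, k, b, i):
    # Same format described by total bits b instead of fractional bits:
    # f = b - i, so both parametrizations address the same grid.
    return quantize_kif(x, k, i, b - i)
```

Optimizing (b, i) or (i, f) via gradients reaches the same family of formats; the parametrization mainly changes which quantities receive gradients directly.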

Installation

pip install HGQ2

If you are using da4ml, please make sure it is at least version 0.3:

pip install "da4ml>=0.3"

If you are using hls4ml, please make sure it is at least version 1.2:

pip install "hls4ml>=1.2.0"

Usage

Please refer to the documentation for more details on how to use the library.

A minimal example is shown below:

   import keras
   from hgq.layers import QDense, QConv2D
   from hgq.config import LayerConfigScope, QuantizerConfigScope

   # Set up the quantization configuration.
   # These values are the defaults, shown here for demonstration purposes.
   with (
      # Default quantization type and overflow mode for all quantizers
      QuantizerConfigScope(place='all', default_q_type='kbi', overflow_mode='SAT_SYM'),
      # Overrides the scope above for quantizers at the 'datalane' place
      QuantizerConfigScope(place='datalane', default_q_type='kif', overflow_mode='WRAP'),
      # Enable EBOPs resource estimation and set the beta0 regularization factor
      LayerConfigScope(enable_ebops=True, beta0=1e-5),
   ):
      model = keras.Sequential([
         QConv2D(32, (3, 3), activation='relu'),
         keras.layers.MaxPooling2D((2, 2)),
         keras.layers.Flatten(),
         QDense(10)
      ])

Project details


Download files

Download the file for your platform.

Source Distribution

hgq2-0.1.5.tar.gz (177.6 kB view details)

Uploaded Source

Built Distribution


hgq2-0.1.5-py3-none-any.whl (82.3 kB view details)

Uploaded Python 3

File details

Details for the file hgq2-0.1.5.tar.gz.

File metadata

  • Download URL: hgq2-0.1.5.tar.gz
  • Upload date:
  • Size: 177.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hgq2-0.1.5.tar.gz
Algorithm Hash digest
SHA256 4f5076b13ecbf6b6338622a5fa2537d24c71e80a595eb89fce00c0e0ca618fd8
MD5 50bdcae535261f422a0f3206f6bfa25f
BLAKE2b-256 ee21bf12579e24b5c27c23c79c3aa7bae07fa6d09aad7293f30f5c79ec7cf170


Provenance

The following attestation bundles were made for hgq2-0.1.5.tar.gz:

Publisher: python-publish.yml on calad0i/HGQ2

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file hgq2-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: hgq2-0.1.5-py3-none-any.whl
  • Upload date:
  • Size: 82.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hgq2-0.1.5-py3-none-any.whl
Algorithm Hash digest
SHA256 1cd91794a6c3045aacc68ad22de843cbe8275de24e473940a3ff37ef081dc5f9
MD5 8c37e316a8f5290cdec2579f0095d696
BLAKE2b-256 462290ef7df05b4fe583b0270ed329506c7f96c9778f62511a7de139bc8869f2


Provenance

The following attestation bundles were made for hgq2-0.1.5-py3-none-any.whl:

Publisher: python-publish.yml on calad0i/HGQ2

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
