
HGQ2: High Granularity Quantization 2


HGQ2 Overview

HGQ2 (High Granularity Quantization 2) is a quantization-aware training framework built on Keras v3, targeting real-time deep learning applications on edge devices like FPGAs. It provides a comprehensive set of tools for creating and training quantized neural networks with minimal effort.

HGQ2 implements a gradient-based automatic bitwidth optimization and quantization-aware training algorithm. By leveraging gradients, it enables bitwidth optimization at arbitrary granularity, down to the per-weight and per-activation level.
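The idea behind gradient-based bitwidth optimization can be sketched with a fake quantizer whose bitwidth is relaxed to a continuous value. This is a generic illustration of the technique, not HGQ2's actual implementation; all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

def fake_quantize(x, frac_bits):
    # Relax frac_bits to a continuous value so it can receive gradients;
    # at inference it would be rounded to an integer bitwidth.
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def mse(frac_bits):
    # Quantization error of x at the given (continuous) bitwidth
    return np.mean((fake_quantize(x, frac_bits) - x) ** 2)

# The error falls as the bitwidth grows, so a loss of the form
# task_loss + beta * bit_cost can trade accuracy against bitwidth
# by gradient descent over the bitwidths themselves.
```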

  • High Granularity: HGQ2 supports per-weight and per-activation bitwidth optimization, or any coarser granularity.
  • Automatic Quantization: Bitwidths are optimized via gradients; in general there is no need to tune them manually.
  • What you see is what you get: The bit-accurate RTL model produces exactly the same outputs as the Keras model.
    • Still subject to machine float precision limitations.
  • Accurate Resource Estimation: The EBOPs metric estimated by HGQ2 gives a good indication of actual FPGA resource usage: an upper bound on LUTs (da4ml), or LUT + 55 * DSP (hls4ml).
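As a rough illustration of the hls4ml-style resource model above, LUT and DSP counts can be collapsed into a single scalar cost, which is the quantity EBOPs is said to approximate (the helper name is hypothetical; the 55x DSP weight is the figure quoted above):

```python
def combined_resource_cost(luts: int, dsps: int) -> int:
    """Scalar FPGA cost in LUT-equivalents: LUT + 55 * DSP.

    For da4ml flows, EBOPs instead upper-bounds the LUT count alone,
    i.e. the dsps term would be zero.
    """
    return luts + 55 * dsps
```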

In addition, this framework improves upon the old HGQ implementation in the following aspects:

  • Scalability: HGQ2 supports TensorFlow, JAX, and PyTorch backends. XLA compilation in JAX and TensorFlow can significantly speed up training, so HGQ2 can be 1.2-5 times faster than the previous implementation.
  • Quantizers:
    • Fixed-point: While the previous implementation optimized only the number of fractional bits under a single parameterization of fixed-point numbers, HGQ2 supports multiple parameterizations and allows optimizing any of their components via gradients.
    • Minifloat: Training with minifloat quantization is also supported, including surrogate gradients (alpha quality).
  • More Layers: More layers are now supported, including the powerful EinsumDense (with optional fused BatchNorm) layer and the MultiHeadAttention layer with bit-accurate softmax and scaled dot-product attention.
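The relationship between two fixed-point parameterizations can be sketched as follows, assuming 'kbi' denotes (keep_negative, bits, integer bits) and 'kif' denotes (keep_negative, integer bits, fractional bits) with total width b = k + i + f. This convention is an assumption for illustration; check the HGQ2 documentation for the exact definitions:

```python
def kif_to_kbi(k: int, i: int, f: int) -> tuple[int, int, int]:
    # Total width b covers the sign bit (k), integer bits (i),
    # and fractional bits (f); the integer part is unchanged.
    b = k + i + f
    return k, b, i

def kbi_to_kif(k: int, b: int, i: int) -> tuple[int, int, int]:
    # Recover the fractional bits from the total width.
    f = b - k - i
    return k, i, f
```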

Installation

pip install HGQ2

If you are using da4ml, please make sure it is at least version 0.3:

pip install "da4ml>=0.3"

If you are using hls4ml, please make sure it is at least version 1.2:

pip install "hls4ml>=1.2.0"

Usage

Please refer to the documentation for more details on how to use the library.

A minimal example is shown below:

   import keras
   from hgq.layers import QDense, QConv2D
   from hgq.config import LayerConfigScope, QuantizerConfigScope

   # Setup quantization configuration
   # These values are the defaults, just for demonstration purposes here
   with (
      # Default quantization type and overflow mode for all quantizers;
      # the 'datalane' scope below overrides these for data lanes
      QuantizerConfigScope(place='all', default_q_type='kbi', overflow_mode='SAT_SYM'),
      QuantizerConfigScope(place='datalane', default_q_type='kif', overflow_mode='WRAP'),
      # Enable EBOPs resource estimation and set the beta0 regularization strength
      LayerConfigScope(enable_ebops=True, beta0=1e-5),
   ):
      model = keras.Sequential([
         QConv2D(32, (3, 3), activation='relu'),
         keras.layers.MaxPooling2D((2, 2)),
         keras.layers.Flatten(),
         QDense(10)
      ])
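The two overflow modes used above differ in how out-of-range values are handled. A numpy sketch of the usual semantics of saturating vs. wrapping fixed-point arithmetic (a generic illustration, not HGQ2's internal code):

```python
import numpy as np

def fixed_quantize(x, integer_bits, frac_bits, overflow='SAT_SYM'):
    """Fixed-point quantization with saturating or wrapping overflow.

    SAT_SYM clips symmetrically to +/-(2**integer_bits - step);
    WRAP lets values wrap around two's-complement style, as
    unchecked fixed-point arithmetic does in hardware.
    """
    step = 2.0 ** -frac_bits
    q = np.round(np.asarray(x, dtype=np.float64) / step)
    half = 2 ** (integer_bits + frac_bits)  # steps on each side of zero
    if overflow == 'SAT_SYM':
        q = np.clip(q, -(half - 1), half - 1)
    elif overflow == 'WRAP':
        q = (q + half) % (2 * half) - half
    return q * step
```

With 2 integer and 2 fractional bits, an input of 5.0 saturates to 3.75 under SAT_SYM but wraps to -3.0 under WRAP, which is why WRAP is cheaper in hardware but only safe when the value range is known.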
