
High Granularity Quantization for hls4ml

Project description

High Granularity Quantization for Ultra-Fast Inference on FPGAs

HGQ is a method for quantization-aware training of neural networks to be deployed on FPGAs, which allows for per-weight and per-activation bitwidth optimization.

Depending on the specific application, HGQ can achieve up to 10x resource reduction compared to the traditional AutoQKeras approach while maintaining the same accuracy. For more challenging tasks, where the model is already under-fitted, HGQ can still improve performance at the same on-board resource consumption. For more details, please refer to our paper (link coming not too soon).

This repository implements HGQ for tensorflow.keras models. It is independent of the QKeras project.

Notice: this package is still under development, and the API may change without notice at any time.

Installation

Run pip install HGQ and you are good to go. Note that HGQ requires Python 3.10 and tensorflow>=2.11.

Usage Guide

Please refer to the usage guide for more details. This repository also contains some example use cases for HGQ; a minimal sketch is shown below.
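For orientation, here is a minimal sketch of what quantization-aware training with HGQ can look like, with HGQ layers used as drop-in replacements for their Keras counterparts. The layer names (HQuantize, HDense) and the beta argument (a regularization strength that pushes bitwidths, and hence resource usage, down during training) are assumptions based on the project's documentation; check the usage guide for the exact API of the release you install.

```python
# Minimal sketch of quantization-aware training with HGQ.
# NOTE: the layer names (HQuantize, HDense) and the `beta` argument are
# assumptions for this early release -- consult the usage guide for the
# exact API of the version you install.
import numpy as np
from tensorflow.keras.models import Sequential
from HGQ.layers import HQuantize, HDense

model = Sequential([
    # Quantize the input; its bitwidths are learned during training.
    HQuantize(beta=1e-5, input_shape=(16,)),
    # Drop-in replacements for keras.layers.Dense with per-weight and
    # per-activation bitwidths, regularized toward low resource usage by beta.
    HDense(32, activation='relu', beta=1e-5),
    HDense(5, beta=1e-5),
])

model.compile(optimizer='adam', loss='mse')

# Toy data: the bitwidths are optimized jointly with the weights in fit().
x = np.random.rand(256, 16).astype('float32')
y = np.random.rand(256, 5).astype('float32')
model.fit(x, y, epochs=2, batch_size=32)
```

After training, the quantized model can be converted for deployment through hls4ml; the usage guide covers that flow.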

FAQ

Please refer to the FAQ for more details.

Citation

The paper is not ready. Please check back later.

Download files

Download the file for your platform.

Source Distribution

HGQ-0.1.1.tar.gz (26.5 kB)

Built Distribution

HGQ-0.1.1-py3-none-any.whl (29.7 kB)
