
quantizedensenet


A Python package for seamlessly converting DenseNet TensorFlow models to ONNX and TensorRT formats, enabling optimized inference on NVIDIA GPUs.

Features

  • TensorFlow to ONNX Conversion: Convert SavedModel or model weights to ONNX format with FP32 or FP16 precision.
  • ONNX to TensorRT Conversion: Build optimized TensorRT engines with dynamic batch sizes.
  • Direct TF to TRT Pipeline: One-step conversion from TensorFlow to TensorRT.
  • INT8 Quantization Support: Improve inference speed with INT8 calibration using image directories or cache files.
  • DenseNet Support: Specialized support for the DenseNet121 and DenseNet201 architectures.
  • Comprehensive Logging: Detailed conversion process tracking and validation.

Installation

Prerequisites

Before installing the package, ensure you have the following:

  • Python 3.8 or higher - Check with python --version
  • pip (Python package manager) - Check with pip --version
  • NVIDIA GPU with CUDA support
  • CUDA Toolkit 12.x (required by tensorrt_cu12 and cuda_bindings==12.9.2)
  • TensorRT 10.14+

Install from Wheel File

  1. Download the wheel file provided to you.
  2. Open a terminal and navigate to the directory containing the .whl file.
  3. Install the package using pip.
  4. If dependencies are not automatically installed with the wheel file, you can install them using the provided requirements.txt file:
pip install -r requirements.txt
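For example, assuming the wheel built for this release (adjust the filename if yours differs):

```shell
# Install the wheel from the current directory
pip install quantizedensenet-0.0.3-py3-none-any.whl

# If the wheel did not pull in its dependencies automatically:
pip install -r requirements.txt
```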

Quick Start

  1. Install the wheel and dependencies, then import the converter: from quantizedensenet.converter import Converter

  2. Create a converter instance: converter = Converter()

  3. Convert TensorFlow models to ONNX or directly to TensorRT, with optional FP16/INT8 precision and dynamic batching.

Usage Examples

Converter Class

The Converter class provides three utilities: convert TensorFlow models to ONNX, build TensorRT engines from ONNX, and run a one-call TensorFlow to TensorRT pipeline for deployment-ready inference engines.

Keep In Mind

  • The original input shape $(N, H, W, C)$ will be changed to $(N, C, H, W)$ for optimized execution.

  • Every function's output_path or engine_file_path may be None; in that case the function returns the created model/engine in memory instead of writing it to disk.

  • Memory Management: It is not recommended to use the converted models for inference immediately after conversion in the same script. The best practice is to restart the Python kernel to free up allocated CUDA memory. After that, with a new run, you can load the engine and run inference without errors.

  • FP16 TensorRT engines are generally the best choice, as they provide the fastest inference without significant accuracy loss.
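The layout change noted above can be reproduced with a plain NumPy transpose. This is only a sketch of the preprocessing you would apply to an NHWC batch before feeding a converted engine:

```python
import numpy as np

# A dummy batch in TensorFlow's native NHWC layout: (N, H, W, C)
batch_nhwc = np.random.rand(8, 224, 224, 3).astype(np.float32)

# Converted engines expect NCHW: (N, C, H, W)
batch_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2))

print(batch_nchw.shape)  # (8, 3, 224, 224)
```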

tf_to_onnx

  • Converts a tf.keras SavedModel or a .h5 weights file to an ONNX model.

  • If you pass a .h5 file that only contains weights, you must specify the only_weigths_of_model argument (Supports: 'DenseNet121', 'DenseNet201').

  • The Keras model's input shape must be (None, 224, 224, 3).

  • Supports exporting to FP32 or FP16 ONNX graph.

from quantizedensenet.converter import Converter

converter = Converter()
onnx_model = converter.tf_to_onnx(
    input_model="path/to/densenet121_weights.h5",
    output_path="path/to/output/model.onnx",
    precision="fp16",                  
    only_weigths_of_model='DenseNet121', # Specify base model if using weights only
    opset=13
)

onnx_to_trt

  • Parses an ONNX model and builds a TensorRT engine.

  • The input model can be passed as a path to a .onnx file or as an onnx.ModelProto object.

  • The ONNX model input shape must be (-1, 3, 224, 224).

  • If engine_file_path is None and auto_generate_engine_path=True, it auto-generates the path based on the input filename.

  • Supports building FP32, FP16, or INT8 TensorRT engines.

  • Dynamic Batching: You must specify 3 arguments:

    • min_batch: The minimum number of images a batch could ever contain (usually 1).

    • opt_batch: The most common batch size for inference (should be close to max_batch).

    • max_batch: The maximum number of images a batch could ever contain.

from quantizedensenet.converter import Converter

converter = Converter()
engine = converter.onnx_to_trt(
    input_model="path/to/model.onnx",
    engine_file_path="path/to/output/model.trt",
    precision="fp16",                    
    min_batch=1,
    opt_batch=16,
    max_batch=32
)
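The relationship between the three batch arguments amounts to 1 <= min_batch <= opt_batch <= max_batch, which can be expressed as a small sanity check. validate_batch_profile is a hypothetical helper for illustration, not part of the package:

```python
def validate_batch_profile(min_batch: int, opt_batch: int, max_batch: int) -> None:
    """Hypothetical check mirroring the constraint described above:
    1 <= min_batch <= opt_batch <= max_batch."""
    if not (1 <= min_batch <= opt_batch <= max_batch):
        raise ValueError(
            f"Expected 1 <= min <= opt <= max, got "
            f"({min_batch}, {opt_batch}, {max_batch})"
        )

validate_batch_profile(1, 16, 32)  # passes silently
```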
  • INT8 Calibration: INT8 mode requires calibration data or an existing cache.

  • You can provide:

    • A directory path containing images.

    • A single image path.

    • A list of image paths.

    • A path to a calibration cache file.

  • If calibration_cache is provided but does not exist, it will be created using the calibration_images.

from quantizedensenet.converter import Converter

converter = Converter()
model = converter.onnx_to_trt(
    input_model="path/to/model.onnx",
    engine_file_path="path/to/int8_densenet.trt",
    min_batch=1,
    max_batch=32,
    opt_batch=32,
    precision="int8",
    calibration_images="path/to/calibration/images/dir",
    calibration_cache="path/to/calibration.cache",
)
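The accepted forms of calibration_images (a directory, a single path, or a list of paths) could be normalized along these lines. resolve_calibration_images and IMAGE_EXTS are illustrative names, not the package's internals:

```python
from pathlib import Path
from typing import Iterable, List, Union

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def resolve_calibration_images(source: Union[str, Path, Iterable[str]]) -> List[Path]:
    """Illustrative helper: flatten a directory, a single image path,
    or a list of image paths into a list of image files."""
    if isinstance(source, (str, Path)):
        path = Path(source)
        if path.is_dir():
            # Keep only recognized image files, in a stable order
            return sorted(p for p in path.iterdir() if p.suffix.lower() in IMAGE_EXTS)
        return [path]  # a single image path
    return [Path(p) for p in source]
```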

tf_to_trt

Runs the end-to-end pipeline in one call. Exports a TensorFlow model to ONNX, then builds a TensorRT engine with the selected precision and batch profiles, streamlining deployment.

  • If you pass a .h5 file that only contains weights, you must also specify the base model using only_weigths_of_model.

  • When using INT8, provide calibration_images or a calibration_cache.

from quantizedensenet.converter import Converter

converter = Converter()
model = converter.tf_to_trt(
    input_model="path/to/densenet201_weights.h5",
    engine_file_path="path/to/int8_densenet201.trt",
    only_weigths_of_model='DenseNet201',
    min_batch=1,
    opt_batch=32,
    max_batch=32,
    precision="int8",
    calibration_images="path/to/calibration/images/dir",
    calibration_cache="path/to/calibration.cache",
    auto_generate_engine_path=False
)

Download files

Download the file for your platform.

Source Distribution

quantizedensenet-0.0.3.tar.gz (16.4 kB)

Uploaded Source

Built Distribution


quantizedensenet-0.0.3-py3-none-any.whl (16.3 kB)

Uploaded Python 3

File details

Details for the file quantizedensenet-0.0.3.tar.gz.

File metadata

  • Download URL: quantizedensenet-0.0.3.tar.gz
  • Upload date:
  • Size: 16.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for quantizedensenet-0.0.3.tar.gz
Algorithm Hash digest
SHA256 6ea08564b898dccee2b28dc50380df20d1bd02c4178199c8b5cc857ff4180fcf
MD5 b30879f423336eb4009628928d6cdd00
BLAKE2b-256 b3cca9f3d2637d711ede79f9f8be6309cd275d7a6879b783a1d5c1317a9b5b25


Provenance

The following attestation bundles were made for quantizedensenet-0.0.3.tar.gz:

Publisher: python-publish.yml on Geridev/quantizedensenet

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file quantizedensenet-0.0.3-py3-none-any.whl.

File metadata

File hashes

Hashes for quantizedensenet-0.0.3-py3-none-any.whl
Algorithm Hash digest
SHA256 9791908ea1591833cbd2e75436db4b89f15eb3c02a7abb567b895a696392850b
MD5 c3b6b3dc1e907042ff55238c7a6fff74
BLAKE2b-256 b0934a05564f603e3394d3c7b032207dacc0a9822baa507f542404f2bb58aa37


Provenance

The following attestation bundles were made for quantizedensenet-0.0.3-py3-none-any.whl:

Publisher: python-publish.yml on Geridev/quantizedensenet

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
