A TensorFlow TensorRT model quantizer for FP32, FP16, and calibrated INT8 model quantization. Runs TensorRT's default calibration on frozen model graphs for faster inference.
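The package itself ships no documented API, but the summary above suggests a precision-selection step common to TF-TRT workflows: FP32 and FP16 convert directly, while INT8 additionally requires a calibration pass. A minimal sketch of that mapping is shown below; the function name and parameter dictionary are illustrative assumptions, not this package's actual interface.

```python
# Hypothetical sketch: map a requested precision mode to TF-TRT-style
# conversion settings. Names here are assumptions for illustration only.
def conversion_params(precision: str) -> dict:
    """Return conversion settings for a given precision mode."""
    mode = precision.upper()
    if mode not in {"FP32", "FP16", "INT8"}:
        raise ValueError(f"unsupported precision mode: {precision}")
    return {
        "precision_mode": mode,
        # Only INT8 needs a calibration pass over representative inputs,
        # matching the "calibrated INT8" wording in the package summary.
        "use_calibration": mode == "INT8",
    }

print(conversion_params("fp16"))
print(conversion_params("int8"))
```

In a real TF-TRT pipeline these settings would feed into TensorRT's converter; FP16 trades a small amount of numeric range for speed, while INT8 requires the calibration data to estimate activation ranges.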

Project description

The author of this package has not provided a project description

Project details


Release history

This version

0.1

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

TFModelQuantizer-0.1.tar.gz (2.8 kB)

Uploaded Source

File details

Details for the file TFModelQuantizer-0.1.tar.gz.

File metadata

  • Download URL: TFModelQuantizer-0.1.tar.gz
  • Upload date:
  • Size: 2.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.5.0.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.47.0 CPython/3.8.3

File hashes

Hashes for TFModelQuantizer-0.1.tar.gz

  • SHA256: 8240ef37ab4660ca27d6f9360fa7df07eb546c6d63a65e5ffd1b7d7743e8c0f1
  • MD5: 6030faf85f7dc069f0ca63a14dd6ea32
  • BLAKE2b-256: 9e8aa95142c0ea57174e0dc29834439c446ba74d80af70118ba643a324c927a3
