A high-performance deep learning inference library

Project description

NVIDIA TensorRT is an SDK that facilitates high-performance machine learning inference. It is designed to work in a complementary fashion with training frameworks such as TensorFlow, PyTorch, and MXNet. It focuses specifically on running an already-trained network quickly and efficiently on NVIDIA hardware.

IMPORTANT: This is a special release of TensorRT designed to work only with TensorRT-LLM. Please refrain from upgrading to this version if you are not using TensorRT-LLM.

To install, please execute the following:

pip install tensorrt --extra-index-url https://pypi.nvidia.com

Or add the index URL to the space-separated PIP_EXTRA_INDEX_URL environment variable:

export PIP_EXTRA_INDEX_URL='https://pypi.nvidia.com'
pip install tensorrt

If the extra index URL does not include https://pypi.nvidia.com, the installation runs a nested pip install with the proper extra index URL hard-coded.
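A minimal sketch of the decision described above, assuming the check is on the space-separated PIP_EXTRA_INDEX_URL variable; the function name `nested_install_cmd` and the exact command construction are illustrative, not the package's actual setup logic:

```python
import os
import sys

# Index that the nested install hard-codes when it is not already configured.
NVIDIA_INDEX = "https://pypi.nvidia.com"

def nested_install_cmd(package, env=os.environ):
    """Return the nested pip command to run, or None if the NVIDIA
    index is already present in PIP_EXTRA_INDEX_URL."""
    extra = env.get("PIP_EXTRA_INDEX_URL", "")
    if NVIDIA_INDEX in extra.split():
        return None  # index already configured; no nested install needed
    return [
        sys.executable, "-m", "pip", "install",
        "--extra-index-url", NVIDIA_INDEX, package,
    ]
```

When the index is missing, the returned command would be executed (e.g. via `subprocess.check_call`) so the wheel's real dependencies resolve against https://pypi.nvidia.com.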

Project details


Download files

Download the file for your platform.

Source Distribution

tensorrt_dispatch-cu11-10.1.0.tar.gz (18.3 kB view details)

Uploaded Source

File details

Details for the file tensorrt_dispatch-cu11-10.1.0.tar.gz.

File metadata

  • Download URL: tensorrt_dispatch-cu11-10.1.0.tar.gz
  • Upload date:
  • Size: 18.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.12

File hashes

Hashes for tensorrt_dispatch-cu11-10.1.0.tar.gz

  • SHA256: 652a78932d1c283d953b30464c12ce3b650e8239481816cb5613d9fc4783cb60
  • MD5: 1342108d12d677c673872345b27f73b5
  • BLAKE2b-256: c139c9c0a116efdffa7a2162c6003c74a0494250041ea338e455bc40c45471ea
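To check a downloaded archive against the published SHA256 digest above, a small verification sketch with the standard library's `hashlib` (the helper names `sha256_of` and `verify` are illustrative):

```python
import hashlib

# Expected SHA256 digest from the "File hashes" table above.
EXPECTED_SHA256 = "652a78932d1c283d953b30464c12ce3b650e8239481816cb5613d9fc4783cb60"

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected=EXPECTED_SHA256):
    """Return True if the file at `path` matches the published digest."""
    return sha256_of(path) == expected
```

For example, `verify("tensorrt_dispatch-cu11-10.1.0.tar.gz")` should return True for an uncorrupted download of this source distribution.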
