A high-performance deep learning inference library
Project description
NVIDIA TensorRT is an SDK that facilitates high-performance machine learning inference. It is designed to work in a complementary fashion with training frameworks such as TensorFlow, PyTorch, and MXNet. It focuses specifically on running an already-trained network quickly and efficiently on NVIDIA hardware.
IMPORTANT: This is a special release of TensorRT designed to work only with TensorRT-LLM. Please refrain from upgrading to this version if you are not using TensorRT-LLM.
To install, please execute the following:
pip install tensorrt --extra-index-url https://pypi.nvidia.com
Or add the index URL to the (space-separated) PIP_EXTRA_INDEX_URL environment variable:
export PIP_EXTRA_INDEX_URL='https://pypi.nvidia.com'
pip install tensorrt
If the extra index URL does not already contain https://pypi.nvidia.com, the install runs a nested pip install with the proper extra index URL hard-coded.
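The fallback described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the package's actual setup code; the helper name `build_nested_pip_command` is hypothetical:

```python
import os
import sys

NVIDIA_INDEX = "https://pypi.nvidia.com"

def build_nested_pip_command(package):
    """Return a pip command with the NVIDIA index hard-coded, or None
    if the index is already listed in PIP_EXTRA_INDEX_URL."""
    # PIP_EXTRA_INDEX_URL is space-separated, as noted above
    extra_indexes = os.environ.get("PIP_EXTRA_INDEX_URL", "").split()
    if NVIDIA_INDEX in extra_indexes:
        return None  # pip already knows about the NVIDIA index
    return [sys.executable, "-m", "pip", "install",
            "--extra-index-url", NVIDIA_INDEX, package]
```

When the environment variable is absent, the returned command invokes pip with `--extra-index-url https://pypi.nvidia.com` so the real wheels can be resolved from NVIDIA's index.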
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
File details
Details for the file tensorrt-cu12-10.6.0.post1.tar.gz.
File metadata
- Download URL: tensorrt-cu12-10.6.0.post1.tar.gz
- Upload date:
- Size: 18.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | 9d8456edb3ea60ae987e5b430b9f0e13ca0c4c7f7c396e58fa383bb2e3e7610a
MD5 | 88fbbe48998c25eedd5cf8fb32250171
BLAKE2b-256 | 58669248c94305b75888f1016494c73202259ce2adf68a42b36e148d1a715c28
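These digests can be used to verify a downloaded archive before installing it. A short sketch using the standard library (the expected digest below is the published SHA256 from the table; the function name `sha256_of` is illustrative):

```python
import hashlib

# Published SHA256 digest for tensorrt-cu12-10.6.0.post1.tar.gz (from the table above)
EXPECTED_SHA256 = "9d8456edb3ea60ae987e5b430b9f0e13ca0c4c7f7c396e58fa383bb2e3e7610a"

def sha256_of(path):
    """Compute the hex SHA256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example check (assumes the tarball was downloaded to the current directory):
# assert sha256_of("tensorrt-cu12-10.6.0.post1.tar.gz") == EXPECTED_SHA256
```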