A high-performance deep learning inference library
Project description
NVIDIA TensorRT RTX is an SDK for high-performance AI inference on NVIDIA RTX GPUs. It includes a just-in-time (JIT) compiler for fast on-device inference optimization, enabling portable deployments and runtime performance specialization. It also adds convenience features such as built-in CUDA graph support, a runtime cache, and a simplified development workflow.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
File details
Details for the file tensorrt_rtx_cu12-1.4.0.76.tar.gz.
File metadata
- Download URL: tensorrt_rtx_cu12-1.4.0.76.tar.gz
- Upload date:
- Size: 18.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 87c5258e9f18d657d415c6ae0996397719a247a21f77cc70427d37deaec950b1 |
| MD5 | 73158fd9f69866e190b1c6a1bb606da2 |
| BLAKE2b-256 | 88a023fcd01a4c14123a3857b454fc59a3d735cdd9e4a8ad1f335a0fbcb73222 |
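The digests above can be checked locally before installing. A minimal sketch using Python's standard `hashlib` — the filename is the sdist listed on this page; adjust the path to wherever you downloaded it:

```python
import hashlib
import os


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# SHA256 digest published on this page for the sdist.
EXPECTED = "87c5258e9f18d657d415c6ae0996397719a247a21f77cc70427d37deaec950b1"
SDIST = "tensorrt_rtx_cu12-1.4.0.76.tar.gz"

if os.path.exists(SDIST):
    print("OK" if sha256_of(SDIST) == EXPECTED else "MISMATCH")
```

Reading in fixed-size chunks keeps memory use constant regardless of archive size; the same helper works for the MD5 and BLAKE2b digests by swapping the `hashlib` constructor.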