A high-performance deep learning inference library

Project description

NVIDIA TensorRT RTX is an SDK for high-performance AI inference on NVIDIA RTX GPUs. It includes a just-in-time (JIT) compiler for fast on-device inference optimizations, enabling portable deployments and runtime performance specialization. It also introduces convenience features such as built-in CUDA graph support, a runtime cache, and a simplified development workflow.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tensorrt_rtx_cu12-1.4.0.76.tar.gz (18.0 kB)

Uploaded Source

File details

Details for the file tensorrt_rtx_cu12-1.4.0.76.tar.gz.

File metadata

  • Download URL: tensorrt_rtx_cu12-1.4.0.76.tar.gz
  • Upload date:
  • Size: 18.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for tensorrt_rtx_cu12-1.4.0.76.tar.gz

  • SHA256: 87c5258e9f18d657d415c6ae0996397719a247a21f77cc70427d37deaec950b1
  • MD5: 73158fd9f69866e190b1c6a1bb606da2
  • BLAKE2b-256: 88a023fcd01a4c14123a3857b454fc59a3d735cdd9e4a8ad1f335a0fbcb73222

See the PyPI documentation for more details on using file hashes.
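The digests above can be used to verify a downloaded source distribution before installing it. A minimal sketch using only the standard library (the local file path is hypothetical; the expected digest is the SHA256 value listed above):

```python
import hashlib

# SHA256 digest published for tensorrt_rtx_cu12-1.4.0.76.tar.gz.
EXPECTED_SHA256 = "87c5258e9f18d657d415c6ae0996397719a247a21f77cc70427d37deaec950b1"


def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading it in chunks
    so large archives are not loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Hypothetical local path to the downloaded sdist:
# digest = sha256_of("tensorrt_rtx_cu12-1.4.0.76.tar.gz")
# if digest != EXPECTED_SHA256:
#     raise ValueError(f"hash mismatch: got {digest}")
```

Alternatively, pip can enforce this check automatically via hash-checking mode: pin the package in a requirements file as `tensorrt_rtx_cu12==1.4.0.76 --hash=sha256:87c5258e...` and install with `pip install --require-hashes -r requirements.txt`.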
