
trt-engine-explorer

NVIDIA's experimental tool to explore TensorRT engines

This repository contains Python code (trex package) to explore various aspects of a TensorRT engine plan and its associated inference profiling data.

An engine plan file is a serialized TensorRT engine format. It contains information about the final inference graph and can be deserialized for inference runtime execution. An engine plan is specific to the hardware and software versions of the system used to build the engine.
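For example, trtexec can build and serialize an engine plan from an ONNX model (model.onnx and model.plan are placeholder names):

$ trtexec --onnx=model.onnx --saveEngine=model.plan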

trex is useful for initial model performance debugging, visualization of plan graphs, and for understanding the characteristics of an engine plan. For in-depth performance analysis, NVIDIA® Nsight Systems™ is the recommended tool.

Features

The trex package contains an API and Jupyter notebooks for viewing and inspecting TensorRT engine-plan files and profiling data.

  • An engine plan graph (JSON) is loaded into a Pandas dataframe, which allows slicing, querying, filtering, viewing and diagramming (see the example after this list).
  • An engine plan graph can be visualized as SVG/PNG files.
  • Layer linters are an API for flagging potential performance hazards (preview feature).
  • Four Jupyter notebooks provide:
    • A tutorial introducing trex.
    • trex API examples.
    • Detailed engine plan performance, characteristics and structure analysis.
    • Comparison of two or more engine plans.
  • Because trex operates on JSON input files, it does not require a GPU.
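A minimal sketch of the dataframe workflow (the file names are placeholders, and the column names are those used in the trex notebooks; verify them against your trex version):

from trex import EnginePlan

# Load the engine-graph and profiling JSON files produced during profiling.
plan = EnginePlan('my_engine.graph.json', 'my_engine.profile.json')

# plan.df is a Pandas dataframe with one row per engine layer.
df = plan.df
print(df['type'].value_counts())  # histogram of layer types
slowest = df.sort_values(by='latency.avg_time', ascending=False).head(5)
print(slowest[['Name', 'type', 'latency.avg_time']])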

Caveats

When trtexec times individual layers, the total engine latency (computed by summing the average latency of each layer) is higher than the latency reported for the entire engine. This is due to per-layer measurement overheads.

To measure per-layer execution times, trtexec places CUDA event objects between the layers as it enqueues their kernels for execution in a stream; these events mark the start and completion of each layer. Recording the events adds a small overhead, which is more noticeable with smaller networks (shallow and narrow networks, or networks with small activation data).
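For intuition, the same event-based timing technique can be sketched with PyTorch's CUDA events (an illustration only; trtexec implements this internally, in C++):

import torch

x = torch.randn(1024, 1024, device='cuda')
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()  # event enqueued on the stream before the "layer"
y = x @ x       # stand-in for a layer's kernels
end.record()    # event enqueued after the "layer"

torch.cuda.synchronize()  # wait for the stream to drain
print(f'elapsed: {start.elapsed_time(end):.3f} ms')  # recording events is itself a small overhead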

Supported TensorRT Versions

Starting with TensorRT 8.2, engine-plan graph and profiling data can be exported to JSON files. trex supports TensorRT 8.2 and 8.4.

trex has only been tested on Ubuntu 18.04, 20.04, 22.04, with Python 3.8.
trex does not require a GPU, but generating the input JSON file(s) does require a GPU.

Installation

The instructions below detail how to use a Python 3 virtual environment for installing and using trex (Python 3.8+ is required).

1. Clone the trex code from the TensorRT OSS repository

$ git clone https://github.com/NVIDIA/TensorRT.git

2. Create and activate a Python virtual environment

The commands listed below create and activate a Python virtual environment named env_trex, which is stored in a directory of the same name, and configure the current shell to use it as the default Python environment.

$ cd TensorRT/tools/experimental/trt-engine-explorer
$ python3 -m virtualenv env_trex
$ source env_trex/bin/activate

3. Install trex in development mode and the Jupyter extensions required for the notebooks

$ python3 -m pip install -e .
$ jupyter nbextension enable widgetsnbextension --user --py

4. Install Graphviz

Generating dot and SVG graphs requires Graphviz, open-source graph visualization software:

$ sudo apt-get --yes install graphviz
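With Graphviz installed, a plan can also be rendered from Python. A sketch using the graphing helpers as they appear in the trex notebooks (the helper names and file names are assumptions; verify against your trex version):

from trex import EnginePlan, to_dot, render_dot, layer_type_formatter

plan = EnginePlan('my_engine.graph.json')
dot_graph = to_dot(plan, layer_type_formatter)        # build a Graphviz graph of the plan
svg_file = render_dot(dot_graph, 'my_engine', 'svg')  # renders my_engine.svg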

Workflow

The typical trex workflow is outlined below:

  1. Convert an external model to a TensorRT INetworkDefinition.
  2. Build a TensorRT engine.
  3. Profile the engine while creating the necessary JSON files.
  4. Explore the engine by loading the JSON files in a trex notebook.

The Python script utils/process_engine.py implements this workflow for ONNX models (see the example invocation after this list):

  1. Use trtexec to import an ONNX model and create an engine.
  2. Load the engine and create an engine-graph JSON file.
  3. Use trtexec to profile the engine's inference execution and store the results in an engine profiling JSON file.
  4. Create an engine graph diagram in SVG format.

For more information see TensorRT Engine Inspector and the Tutorial notebook.

Jupyter Server

Launch the Jupyter notebook server as detailed below and open your browser at http://localhost:8888 or http://<your-ip-address>:8888

$ jupyter-notebook --ip=0.0.0.0 --no-browser

License

The TensorRT Engine Explorer license can be found in the LICENSE file.
