# Aidge Export: TensorRT
The aim of this module is to provide an export to the TensorRT SDK via the Aidge framework.
## Table of Contents

- Installation
- Usage
- Known issues
## Installation

### Requirements

To compile the export, you need one of the following:

- Docker installed (recommended). The Docker image includes TensorRT, so no additional installation is required.
- TensorRT 8.6 or 10.10 installed on your host machine (required only if compiling directly on the host without Docker).
### Using setup.ps1

> [!NOTE]
> Windows only, unless you have PowerShell installed on your system.
```powershell
# ----------[ C++ development ]----------
# From aidge/
setup.ps1 -Modules export_tensorrt -Clean -Tests

# ----------[ Python development ]----------
# From aidge/
setup.ps1 -Modules export_tensorrt -Clean -Tests -Python
```
> [!TIP]
> Run `Get-Help setup.ps1 -Full` to display the documentation.
### Using setup.sh

> [!NOTE]
> Unix only.
```sh
# ----------[ C++ development ]----------
# From aidge/
./setup.sh -m export_tensorrt --clean --tests

# ----------[ Python development ]----------
# From aidge/
./setup.sh -m export_tensorrt --clean --tests --python
```
> [!TIP]
> Run `./setup.sh -h` to display the documentation.
### Using pip

> [!NOTE]
> If you are using a virtual environment, make sure to use the same one for every installation!
```sh
# ----------[ Python development ]----------
# Only in aidge/aidge/aidge_export_tensorrt/
pip install . -v

# If you want to install the tests, do this instead
pip install .[test] -v

# Launch the tests using pytest
pytest
```
> [!TIP]
> The `-v` flag enables verbose mode.
### Development mode install

> [!WARNING]
> Untested and experimental feature; see https://scikit-build-core.readthedocs.io/en/latest/configuration/index.html#editable-installs.
```sh
pip install --no-build-isolation --config-settings=editable.rebuild=true -Cbuild-dir=build -v -e .
pip install .
```
## Usage

The `aidge_export_tensorrt` module exposes its main Python API through the `export()` function.

```python
import aidge_export_tensorrt

# Generate a TensorRT export of the ONNX model in the "export_trt/" folder
aidge_export_tensorrt.export("export_trt", "model.onnx")
```
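As an illustrative sketch (the `run_export` helper and the pre-check are assumptions, not part of the package), one might wrap the call to fail early with a clear message when the model file is missing:

```python
from pathlib import Path


def run_export(export_folder: str, onnx_model: str) -> None:
    """Hypothetical wrapper: validate the ONNX path, then call export()."""
    model = Path(onnx_model)
    if not model.is_file():
        # Fail before touching the export machinery
        raise FileNotFoundError(f"ONNX model not found: {model}")
    # Imported lazily so the file check runs even without the package installed
    import aidge_export_tensorrt
    aidge_export_tensorrt.export(export_folder, str(model))
```

This simply guards the `export()` call shown above; the export itself still requires a working installation.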
The export provides a Makefile with several options for using the export on your machine. You can generate either a C++ export or a Python export.
Additionally, you can compile the export and/or the Python library using Docker if your host machine lacks the necessary packages.
The available commands are summarized in the following table:
| Command | Description |
|---|---|
| `make` / `make help` | Display the different options available |
| `make build_cpp` | Compile the export on the host for C++ apps (generates an executable in `build/bin`) |
| `make build_lib_python` | Compile the export on the host for Python apps (generates a Python library in `build/lib`) |
| `make build_image_docker` | Generate the Docker image of the TensorRT compiler |
| `make build_cpp_docker` | Compile the export in a container for C++ apps (generates an executable in `build/bin`) |
| `make test_cpp_docker` | Test the executable for C++ apps in a container |
| `make build_lib_python_docker` | Compile the export in a container for Python apps (generates a Python library in `build/lib`) |
| `make test_lib_python_docker` | Test the library for Python apps in a container |
| `make clean` | Clean up the build and bin folders |
Here's an example of compiling and testing the export Python library using Docker:

```sh
cd export_trt/
make build_lib_python_docker
make test_lib_python_docker
```
This will execute the test.py file within the Docker container, initializing and profiling the selected model.
## Known issues

### Export side

Issues related to the usage of the TensorRT export.

#### No CMAKE_CUDA_COMPILER could be found
```
CMake Error at CMakeLists.txt:21 (enable_language):
  No CMAKE_CUDA_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
  path to the compiler, or to the compiler name if it is in the PATH.
```
This error occurs when you try to compile your project without having `nvcc` on your `PATH`.
To fix this, add `nvcc` to the `PATH`:

```sh
export PATH=<NVCC_PATH>:$PATH
```

where `<NVCC_PATH>` is the path to the directory containing the `nvcc` compiler. On recent Orin devices, `nvcc` is installed in `/usr/local/cuda/bin`.
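As a concrete sketch, assuming a standard CUDA installation under `/usr/local/cuda` (adjust the directory if your toolkit lives elsewhere), you can prepend the `nvcc` directory to `PATH` and verify it was picked up before re-running CMake:

```shell
# Assumption: the CUDA toolkit lives under /usr/local/cuda;
# adjust this directory if your installation differs.
NVCC_DIR="/usr/local/cuda/bin"

# Prepend the nvcc directory to PATH for the current shell session
export PATH="${NVCC_DIR}:${PATH}"

# Confirm the directory is now on PATH before re-running CMake
case ":${PATH}:" in
  *":${NVCC_DIR}:"*) echo "nvcc directory on PATH" ;;
esac
```

To make the change permanent, append the `export` line to your shell profile (e.g. `~/.bashrc`).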
## File details

Details for the file `aidge_export_tensorrt-0.9.0-py3-none-any.whl`.

### File metadata

- Download URL: aidge_export_tensorrt-0.9.0-py3-none-any.whl
- Upload date:
- Size: 93.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `17605a179bf3cad12b7d33b323b90eb836dbd4bd98fe0a7934e04378e842988c` |
| MD5 | `61fc47b924016cf41712b7ff27349193` |
| BLAKE2b-256 | `a30eef6c4b147a30250bd82e42f47e60a7f7e373df092dd8fb0e50cf00d1a26b` |