NVIDIA Plugin for OpenVINO Inference Engine Python* API
OpenVINO™ Toolkit - NVIDIA GPU plugin
OpenVINO™ NVIDIA GPU plugin enables deep neural network inference on NVIDIA GPUs through the OpenVINO™ API. The plugin uses custom kernels and the cuBLAS, cuDNN, and cuTENSOR* libraries as a backend.
Supported Platforms
OpenVINO™ NVIDIA GPU plugin is supported and validated on the following platforms:
| OS | GPU | Driver |
|---|---|---|
| Ubuntu* 20.04 (64-bit) | NVIDIA Quadro RTX 4000 | 520.61.05 |
Distribution
OpenVINO™ NVIDIA GPU plugin is not included in the Intel® Distribution of OpenVINO™. To use the plugin, build it from source code.
How to build
Prerequisites
The NVIDIA GPU plugin requires the following dependencies, which must be downloaded and installed separately. By downloading them, you agree to the license terms of each component:

- Install one of the following compilers with C++17 support:
  - gcc-7:

    ```shell
    sudo apt-get update
    sudo apt-get install gcc-7 g++-7
    ```

  - clang-8:

    ```shell
    sudo apt-get update
    sudo apt-get install clang-8
    ```

- Install a suitable NVIDIA driver from NVIDIA download drivers
- Install CUDA 11.8 from How to install CUDA. Do not forget to add `<path_to_cuda>/bin/` to the `PATH` variable, for example:

  ```shell
  export PATH="<path_to_cuda>/bin:$PATH"
  ```

- Install cuDNN 8.6.0 from How to install cuDNN
- Install cuTENSOR 1.6.1 from How to install cuTENSOR
Build with cmake
In order to build the plugin, you must first build the OpenVINO™ package from source using this guideline.
The plugin build procedure is then as follows:
- Clone the `openvino_contrib` repository:

  ```shell
  git clone --recurse-submodules --single-branch --branch=2024.1.0 https://github.com/openvinotoolkit/openvino_contrib.git
  ```

- Go to the plugin directory:

  ```shell
  cd openvino_contrib/modules/nvidia_plugin
  ```

- Prepare a build folder:

  ```shell
  mkdir build && cd build
  ```
- Build the plugin.
  First of all, switch OpenVINO™ to tag 2024.1.0 and then build it according to the How to build instructions. Then build the CUDA plugin with one of the following two options:

  - Using `build.sh`. Set up the following environment variables:

    ```shell
    export OPENVINO_HOME=<OpenVINO source directory>
    export OPENVINO_CONTRIB=<OpenVINO Contrib packages source directory>
    export OPENVINO_BUILD_PATH=<OpenVINO build directory>
    ```

    Then run one of the following commands:

    ```shell
    # Run cmake configuration (if necessary) and then build
    ../build.sh --build
    # Run cmake configuration only
    ../build.sh --setup
    # For an old build: delete the old configuration, generate a new one, and then build
    ../build.sh --rebuild
    ```

  - Using `OpenVINODeveloperPackage`. Run the following commands:

    ```shell
    cmake -DOpenVINODeveloperPackage_DIR=<path to OpenVINO package build folder> -DCMAKE_BUILD_TYPE=Release ..
    cmake --build . --target nvidia_gpu -j `nproc`
    ```
Build with setup.py
If Python is available, the CUDA plugin can be compiled with the `setup.py` script as follows:
- Clone the `openvino_contrib` repository:

  ```shell
  git clone --recurse-submodules --single-branch --branch=2024.1.0 https://github.com/openvinotoolkit/openvino_contrib.git
  ```

- Go to the plugin directory:

  ```shell
  cd openvino_contrib/modules/nvidia_plugin
  ```

- Set the `CUDACXX` environment variable to point to the CUDA `nvcc` compiler (adjust the path to your installation):

  ```shell
  export CUDACXX=/usr/local/cuda-11.8/bin/nvcc
  ```

- Add the path to the CUDA libraries to the `LD_LIBRARY_PATH` environment variable (adjust the path to your installation):

  ```shell
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.8/lib64
  ```

- Run the `setup.py build` command as follows:

  ```shell
  export NVIDIA_PLUGIN_SRC_ROOT_DIR=</path/to/openvino_contrib>/modules/nvidia_plugin
  python3 ${NVIDIA_PLUGIN_SRC_ROOT_DIR}/wheel/setup.py build
  ```

This will automatically download and build OpenVINO, and then build the CUDA plugin. The resulting library file will be located at a path like:

```
${NVIDIA_PLUGIN_SRC_ROOT_DIR}/build/temp.linux-x86_64-3.6/deps/openvino/bin/intel64/Debug/lib/libopenvino_nvidia_gpu_plugin.so
```
Install as python package with setup.py
To install the CUDA plugin as a Python package, perform all steps from the Build with setup.py section except the last one.
After that, install it by running the `setup.py install` command as follows:

```shell
export OPENVINO_CONTRIB=</path/to/openvino_contrib>
python3 ${OPENVINO_CONTRIB}/modules/nvidia_plugin/wheel/setup.py install
```

This command will install the dependent `openvino` package if needed and update it for use with the NVIDIA GPU plugin.
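As a quick sanity check (a sketch, not part of the official instructions), you can verify after installation that the plugin has registered itself with OpenVINO. This assumes the `openvino` and `openvino_nvidia` packages are installed and an NVIDIA GPU with a suitable driver is present:

```python
import openvino_nvidia  # importing registers the NVIDIA plugin with openvino
import openvino as ov

core = ov.Core()
# "NVIDIA" should now appear in the list of available devices
print(core.available_devices)
```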
Docker support
Build docker container
First, build the docker container:

- Install `docker`:

  ```shell
  ./docker.sh install
  su $USER # Re-login as the current user
  ```

- Download all `*.deb` packages for CUDA and put them in one folder
- Build the docker container:

  ```shell
  CUDA_PACKAGES_PATH=<path to CUDA packages> ./docker.sh build
  ```
Build openvino_nvidia_gpu_plugin in docker container
In order to build `openvino_nvidia_gpu_plugin` in docker, follow these steps:

- Enter the docker container:

  ```shell
  docker run --gpus all -it openvino/cudaplugin-2022.3 /bin/bash
  ```

- Build OpenVINO and `openvino_nvidia_gpu_plugin` according to the steps described in the How to build section, except steps 3), 4), and 5) (these packages are already installed in the image)
- Commit all your changes in the container:

  ```shell
  docker commit openvino/cudaplugin-2022.3 <name of new image>
  ```
Supported Configuration Parameters
The plugin supports the configuration parameters listed below:
- `ov::hint::performance_mode`
- `ov::hint::execution_mode`
- `ov::hint::inference_precision`
- `ov::num_streams`
- `ov::enable_profiling`

Please refer to the OpenVINO documentation for details.
Plugin specific parameters
- `ov::nvidia_gpu::operation_benchmark` - specifies whether an operation-level benchmark should be run to increase network performance (`false` by default)
- `ov::nvidia_gpu::use_cuda_graph` - specifies whether the NVIDIA plugin attempts to use the CUDA Graph feature to speed up sequential network inferences (`true` by default)

All parameters must be set before calling `ov::Core::compile_model()` in order to take effect.
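For illustration, configuration parameters can be passed as a config dictionary at compile time in the Python API. This is a sketch: the string keys shown (`PERFORMANCE_HINT`, `NUM_STREAMS`, `PERF_COUNT`) are the generic OpenVINO property names for the hints above; whether your build of the plugin accepts each of them, and the exact string keys for the plugin-specific parameters, should be checked against your plugin version.

```python
import openvino as ov
import openvino_nvidia  # registers the NVIDIA plugin

core = ov.Core()
model = core.read_model(model="model.xml")

# Properties must be passed at compile time; changing them afterwards has no effect.
compiled = core.compile_model(
    model=model,
    device_name="NVIDIA",
    config={
        "PERFORMANCE_HINT": "THROUGHPUT",  # ov::hint::performance_mode
        "NUM_STREAMS": "2",                # ov::num_streams
        "PERF_COUNT": "YES",               # ov::enable_profiling
    },
)
```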
Plugin specific properties
- `ov::nvidia_gpu::number_of_cuda_graphs` - read-only property showing the number of CUDA Graphs used for the current model
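As a sketch, a read-only property can be queried from the compiled model via `get_property` in the Python API. The string key used below, `NUMBER_OF_CUDA_GRAPHS`, is a hypothetical Python-level name for `ov::nvidia_gpu::number_of_cuda_graphs`; check the plugin sources for the exact key.

```python
import openvino as ov
import openvino_nvidia  # registers the NVIDIA plugin

core = ov.Core()
model = core.read_model(model="model.xml")
compiled = core.compile_model(model=model, device_name="NVIDIA")

# Hypothetical key name for ov::nvidia_gpu::number_of_cuda_graphs
num_graphs = compiled.get_property("NUMBER_OF_CUDA_GRAPHS")
print(num_graphs)
```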
Compile options
During compilation of openvino_nvidia_gpu_plugin, the user can specify the following options:

- `-DCUDA_KERNEL_PRINT_LOG=ON` enables print logs from kernels (WARNING: be careful with this option, it could print too many logs)
- `-DENABLE_CUDNN_BACKEND_API` enables cuDNN backend support, which could increase the performance of convolutions by 20%
- `-DCMAKE_CUDA_ARCHITECTURES=<arch_set>`, e.g. `-DCMAKE_CUDA_ARCHITECTURES=75` (see the CMake documentation). This option overrides the default architectures (CUDA Compute Capabilities) listed in `openvino_contrib/modules/nvidia_plugin/CMakeLists.txt` and allows building the plugin for a specific architecture or architecture set. Building for fewer architectures can significantly decrease the size of `libopenvino_nvidia_gpu_plugin.so`. To find out the compute capability of the NVIDIA devices in your system, you may use the following command:

  ```shell
  nvidia-smi --query-gpu=compute_cap --format=csv
  ```
Python package
The Python package can be built using the `wheel/setup.py` file provided in the `nvidia_plugin` folder.
Prerequisites
Run the following commands as prerequisites to `setup.py`:

```shell
export OPENVINO_HOME=<OPENVINO_HOME_DIR> # If not provided, setup.py will download openvino automatically
python3 -m pip install wheel
```
Building the package
To build it, simply use the following command:

```shell
python3 ./wheel/setup.py bdist_wheel
```
Installing the package
To install:

```shell
python3 ./wheel/setup.py install
```
Usage
Now you can use the `openvino-nvidia` package. Here is an example:

```python
import openvino_nvidia
import openvino as ov

core = ov.Core()
model = core.read_model(model=...)
core.compile_model(model=model, device_name="NVIDIA")
```
During import, the `openvino_nvidia` package tries to register itself in the `openvino` package.
Registration happens in a "lightweight" manner: if the "NVIDIA" plugin is already registered, it does nothing.
If you want to forcibly overwrite the path to the plugin library, you can do so by importing the `force_install` attribute from the `openvino_nvidia` package:

```python
from openvino_nvidia import force_install # will overwrite the path to the plugin library
import openvino as ov

core = ov.Core()
model = core.read_model(model=...)
core.compile_model(model=model, device_name="NVIDIA")
```
For symmetry, there is also an `install` attribute:

```python
from openvino_nvidia import install # will register the plugin if it is not registered yet
import openvino as ov

core = ov.Core()
model = core.read_model(model=...)
core.compile_model(model=model, device_name="NVIDIA")
```
Supported Layers and Limitations
The plugin supports IRv10 and higher. The list of supported layers and their limitations is defined in cuda_opset.md.
License
OpenVINO™ NVIDIA GPU plugin is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
How to Contribute
We welcome community contributions to openvino_contrib repository.
If you have an idea how to improve the modules, please share it with us.
All guidelines for contributing to the repository can be found here.
* Other names and brands may be claimed as the property of others.