
Accelerate PyTorch models with ONNX Runtime OpenVINO EP

Project description

OpenVINO™ Integration with Torch-ORT accelerates PyTorch models using the OpenVINO™ Execution Provider for ONNX Runtime. It is designed for PyTorch developers who want to get started with OpenVINO™ in their inference applications, and it delivers OpenVINO™ inline optimizations that enhance inference performance with minimal code modifications.

OpenVINO™ Integration with Torch-ORT accelerates inference across many AI models on a variety of Intel® hardware such as:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units (VPUs)

Installation

Requirements

  • Ubuntu 18.04 or 20.04
  • Python 3.7, 3.8, or 3.9

This package supports:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units (VPUs).

The torch-ort-infer package depends on the onnxruntime-openvino package, which is installed by default to run inference workloads. onnxruntime-openvino ships with pre-built OpenVINO™ 2022.2.0 libraries, eliminating the need to install OpenVINO™ separately. These libraries are built with the CXX11_ABI flag set to 0.

For more details, please refer to OpenVINO™ Execution Provider for ONNX Runtime.
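The package is available on PyPI and can be installed with pip:

pip install torch-ort-infer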

Post-installation step

Once torch-ort-infer is installed, run the following post-installation step:

python -m torch_ort.configure
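To verify the installation, a quick import check can help (a hypothetical sanity check, not part of the official instructions):

python -c "from torch_ort import ORTInferenceModule"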

Usage

By default, inference runs on the Intel® CPU. To run on an Intel® integrated GPU or an Intel® VPU instead, pass provider options that select the target device, as in the sketch below.
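Here is a minimal sketch that targets the integrated GPU with FP16 precision. ORTInferenceModule and OpenVINOProviderOptions follow the package's documented API; the resnet50 model and input shape are illustrative choices:

import torch
import torchvision.models as models
from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

# Any torch.nn.Module can be wrapped; resnet50 is just an illustration.
model = models.resnet50(pretrained=True).eval()

# Target the Intel integrated GPU with FP16 precision instead of the default CPU.
provider_options = OpenVINOProviderOptions(backend="GPU", precision="FP16")
model = ORTInferenceModule(model, provider_options=provider_options)

with torch.no_grad():
    predictions = model(torch.rand(1, 3, 224, 224))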

For more API calls and environment variables, see Usage.

Samples

For a quick start, explore the samples for a few HuggingFace and TorchVision models.
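As a rough sketch of what such a sample looks like, the snippet below wraps a HuggingFace classifier with the default CPU backend; the checkpoint and input text are illustrative assumptions, not taken from the bundled samples:

import torch
from torch_ort import ORTInferenceModule
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint; the bundled samples may use different models.
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# Wrap the model; with no provider options, inference runs on the Intel CPU.
model = ORTInferenceModule(model)

inputs = tokenizer("OpenVINO speeds up PyTorch inference.", return_tensors="pt")
with torch.no_grad():
    outputs = model(inputs["input_ids"], inputs["attention_mask"])

# Classification heads return the logits first.
print(outputs[0].argmax(dim=-1))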

License

OpenVINO™ Integration with Torch-ORT is licensed under MIT. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Please submit your questions, feature requests and bug reports via GitHub Issues.

How to Contribute

We welcome community contributions to OpenVINO™ Integration with Torch-ORT. If you have an idea for an improvement, please share your proposal via GitHub Issues.

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

torch_ort_infer-1.13.1-py3-none-any.whl (10.8 kB), uploaded for Python 3.

File details

Details for the file torch_ort_infer-1.13.1-py3-none-any.whl.

File hashes

Hashes for torch_ort_infer-1.13.1-py3-none-any.whl:

Algorithm    Hash digest
SHA256       e36fc35903c80252981cdca07e4b2f48856a3eb3615569ead0013d7e7914f7d4
MD5          dfedbe188f37e15781e2c774557ca7a6
BLAKE2b-256  cc29cbd6b2c8e5111e993d79e5e992d288d3dd81a3569461f813e1ec040fbddf

