Accelerate PyTorch models with ONNX Runtime OpenVINO EP

Project description

The torch-ort-inference package accelerates inference for PyTorch models through the ONNX Runtime OpenVINO Execution Provider (EP), while letting you keep the familiar PyTorch APIs.

Dependencies

The torch-ort-inference package depends on the onnxruntime-openvino package.
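In practice that means installing the package from PyPI pulls in onnxruntime-openvino as well. A typical setup might look like this (a sketch assuming a standard pip environment; the package names are taken from this page):

```shell
# Install torch-ort-inference; pip resolves the onnxruntime-openvino dependency.
pip install torch-ort-inference

# One-time post-installation configuration (see the step below).
python -m torch_ort.configure
```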

Post-installation step

Once torch-ort-inference is installed, run the following post-installation step to finish configuration:

python -m torch_ort.configure

Download files

Download the file for your platform.

Source Distribution

No source distribution files are available for this release.

Built Distribution

torch_ort_infer-0.0.1-py3-none-any.whl (9.7 kB)

Uploaded: Python 3
