
OpenVINO™ integration with TensorFlow


OpenVINO™ integration with TensorFlow delivers OpenVINO™ inline optimizations and the runtime needed for an enhanced level of TensorFlow compatibility. It is designed for developers who want to get started with OpenVINO™ in their inferencing applications and improve inference performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon, such as:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units - referred to as VPUs
  • Intel® Vision Accelerator Design with 8 Intel® Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL

Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend going beyond this component to adopt OpenVINO™ APIs and its runtime.

Installation

Requirements

  • Ubuntu 18.04
  • Python 3.6, 3.7, or 3.8
  • TensorFlow 2.4.1

Use OpenVINO™ integration with TensorFlow alongside PyPI TensorFlow

This OpenVINO™ integration with TensorFlow package comes with pre-built libraries of OpenVINO™ version 2021.3, meaning you do not have to install OpenVINO™ separately. This package supports:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units (VPUs)

      pip3 install -U pip==21.0.1
      pip3 install -U tensorflow==2.4.1
      pip3 install openvino-tensorflow
    

If you want to leverage the Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install OpenVINO™ integration with TensorFlow alongside the Intel® Distribution of OpenVINO™ Toolkit.

Verify Installation

Once you've installed OpenVINO™ integration with TensorFlow, you can use TensorFlow to run inference using a trained model.

To see if OpenVINO™ integration with TensorFlow is properly installed, run:

python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
            import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

    TensorFlow version:  2.4.1
    OpenVINO integration with TensorFlow version: b'0.5.0'
    OpenVINO version used for this build: b'2021.3'
    TensorFlow version used for this build: v2.4.1
    CXX11_ABI flag used for this build: 0
    OpenVINO integration with TensorFlow built with Grappler: False

By default, inference runs on the Intel® CPU. To run AI inference on an Intel® integrated GPU or an Intel® VPU instead, invoke the following function to change the hardware on which inferencing is done:

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', and 'MYRIAD'.

To determine the processing units available on your system for inference, use the following function:

openvino_tensorflow.list_backends()
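Combining the two calls above, backend selection might be sketched as follows. Only `list_backends` and `set_backend` are documented package calls; the helper function, its `preferred` parameter, and the CPU fallback are illustrative assumptions:

```python
def choose_backend(preferred="MYRIAD"):
    """Switch inference to `preferred` if available, else fall back to 'CPU'.

    Returns the backend that was selected, or None when the
    openvino_tensorflow package is not installed.
    """
    try:
        import openvino_tensorflow as ovtf
    except ImportError:
        return None  # package missing; TensorFlow runs without acceleration
    backends = ovtf.list_backends()  # e.g. ['CPU', 'GPU', 'MYRIAD']
    target = preferred if preferred in backends else "CPU"
    ovtf.set_backend(target)
    return target
```

Guarding the import lets the same script run unmodified on machines without the package installed.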

For more API calls and environment variables, see USAGE.md.

More detailed examples on how to use OpenVINO™ integration with TensorFlow are located in the examples directory.
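In line with those examples, the only change to an existing TensorFlow inference script is the extra import, which enables the OpenVINO™ optimizations. A minimal sketch, assuming TensorFlow is installed; the MobileNetV2 model and its input shape are illustrative choices, not mandated by the package:

```python
def run_inference():
    """Run one dummy forward pass through a Keras model.

    Returns the output shape, or None if the required packages
    are not installed in this environment.
    """
    try:
        import numpy as np
        import tensorflow as tf
        import openvino_tensorflow  # noqa: F401 -- importing enables acceleration
    except ImportError:
        return None
    # Illustrative model: any trained tf.keras model works unchanged.
    model = tf.keras.applications.MobileNetV2(weights=None)
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
    return tuple(model.predict(dummy).shape)
```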

License

OpenVINO™ integration with TensorFlow is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Please submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to OpenVINO™ integration with TensorFlow. If you have an idea for an improvement, share your proposal via a GitHub issue and submit a pull request.

We will review your contribution as soon as possible and, if any additional fixes or modifications are necessary, guide you and provide feedback. Before you make your contribution, make sure you can build OpenVINO™ integration with TensorFlow and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. Once your pull request meets these requirements and is verified as acceptable, we will merge it into the repository.

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

File details

openvino_tensorflow-0.5.0-cp38-cp38-manylinux2014_x86_64.whl

    Algorithm    Hash digest
    SHA256       19253275a7c8609b3215865ae6ca7ef1164981f1637800f3dbcb6a8bdaaa2364
    MD5          74c8a6d7de5c74031c842713a2444539
    BLAKE2b-256  69e1588f5185a62a4ea557fc610569fed96d40b4aa9f86bf5dcda443eeadebfb

openvino_tensorflow-0.5.0-cp37-cp37m-manylinux2014_x86_64.whl

    Algorithm    Hash digest
    SHA256       9739a565a3194aa5fb20ba0f3d76c062c0bcbec1179304d3826c62c23589f582
    MD5          633c63151dc443f4e03349701bbbdf96
    BLAKE2b-256  5125d54b6228a2a46d713fe66edf2e9569a9f6db804445dec6d94444fa42a450

openvino_tensorflow-0.5.0-cp36-cp36m-manylinux2014_x86_64.whl

    Algorithm    Hash digest
    SHA256       7d6ae47a33a59faf8fef3de73b88919567b764caf9aaa2909d4bd76dd92b051d
    MD5          64128dc1b6394b2bc09ae7747f7a15d8
    BLAKE2b-256  a5f69a63d6ead236a252d7444ba22287a0b6fe9d6d42ffd364ed610fb517c9a5
