OpenVINO™ integration with TensorFlow

Project description

OpenVINO™ integration with TensorFlow is a product designed for TensorFlow* developers who want to get started with OpenVINO™ in their inferencing applications. This product delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:

  • Intel® CPUs
  • Intel® integrated and discrete GPUs

Note: Support for Intel® Movidius™ MyriadX VPUs is no longer maintained. Use a previous release to run on MyriadX VPUs.

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt the native OpenVINO™ APIs and runtime.]

Installation

Requirements

  • Ubuntu 18.04, macOS 11.2.3, or Windows¹ 10 (64-bit)
  • Python* 3.7, 3.8, or 3.9
  • TensorFlow* v2.9.3

¹ The Windows release supports only Python 3.9.

This OpenVINO™ integration with TensorFlow package comes with pre-built libraries of OpenVINO™ version 2022.3.0, so you do not need to install OpenVINO™ separately. This package supports:

  • Intel® CPUs

  • Intel® integrated GPUs

      pip3 install -U pip
      pip3 install tensorflow==2.9.3
      pip3 install openvino-tensorflow==2.3.0
    

For installation instructions on Windows, please refer to OpenVINO™ integration with TensorFlow for Windows.

For more details on installation, please refer to INSTALL.md; for build-from-source options, please refer to BUILD.md.

Verify Installation

Once you have installed OpenVINO™ integration with TensorFlow, you can use TensorFlow to run inference using a trained model.

To check if OpenVINO™ integration with TensorFlow is properly installed, run

    python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
                import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

    TensorFlow version:  2.9.3
    OpenVINO integration with TensorFlow version: b'2.3.0'
    OpenVINO version used for this build: b'2022.3.0'
    TensorFlow version used for this build: v2.9.3
    CXX11_ABI flag used for this build: 1

Usage

By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inference is run.

    openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'GPU_FP16'.

To determine what processing units are available on your system for inference, use the following function:

    openvino_tensorflow.list_backends()
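As an illustrative sketch of backend selection, the two calls above can be combined into a small helper. The helper name pick_backend and its fallback logic are our own; only set_backend and list_backends come from the openvino_tensorflow package:

```python
def pick_backend(preferred="GPU"):
    """Set `preferred` as the inference backend if openvino_tensorflow
    reports it via list_backends(); otherwise fall back to 'CPU'.

    Also returns 'CPU' when openvino_tensorflow is not installed, so the
    script still runs on plain TensorFlow.
    """
    try:
        import openvino_tensorflow as ovtf
    except ImportError:
        return "CPU"
    backend = preferred if preferred in ovtf.list_backends() else "CPU"
    ovtf.set_backend(backend)
    return backend

backend = pick_backend("GPU")
```

Call this once, before running inference, so the chosen backend applies to the whole session.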

For further performance improvements, it is advised to set the environment variable OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS=1. For more API calls and environment variables, see USAGE.md.

[Note: If a CUDA-capable device is present in the system, set the environment variable CUDA_VISIBLE_DEVICES to -1.]
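A minimal sketch of the environment setup from the two notes above. Setting the variables before the first import of tensorflow or openvino_tensorflow is our assumption about when they are read; exporting them in the shell before launching Python works equally well:

```python
import os

# Performance hint from USAGE.md: convert TF variables to constants.
os.environ["OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS"] = "1"
# Hide CUDA devices so TensorFlow does not claim an NVIDIA GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```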

Examples

To see what you can do with OpenVINO™ integration with TensorFlow, explore the demos located in the examples repository.

Docker Support

Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided; these can be used to build runtime Docker* images for OpenVINO™ integration with TensorFlow on CPU and GPU. For more details, see the docker README.

Prebuilt Images

Try it on Intel® DevCloud

Sample tutorials are also hosted on Intel® DevCloud. The demo applications are implemented using Jupyter Notebooks. You can execute them interactively on Intel® DevCloud nodes and compare the results of OpenVINO™ integration with TensorFlow, native TensorFlow, and OpenVINO™.

License

OpenVINO™ integration with TensorFlow is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Please submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to OpenVINO™ integration with TensorFlow. If you have an idea for improvement, share it via GitHub issues or submit a pull request.

We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build OpenVINO™ integration with TensorFlow and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. We will merge your pull request once it has been verified, meets the above-mentioned requirements, and proves acceptable.


* Other names and brands may be claimed as the property of others.


Download files

Download the file for your platform.

Source Distributions

No source distribution files available for this release.

Built Distributions

  • openvino_tensorflow-2.3.0-cp39-cp39-win_amd64.whl (25.8 MB): CPython 3.9, Windows x86-64
  • openvino_tensorflow-2.3.0-cp39-cp39-manylinux_2_27_x86_64.whl (27.1 MB): CPython 3.9, manylinux glibc 2.27+ x86-64
  • openvino_tensorflow-2.3.0-cp39-cp39-macosx_12_0_x86_64.whl (22.1 MB): CPython 3.9, macOS 12.0+ x86-64
  • openvino_tensorflow-2.3.0-cp38-cp38-manylinux_2_27_x86_64.whl (27.1 MB): CPython 3.8, manylinux glibc 2.27+ x86-64
  • openvino_tensorflow-2.3.0-cp38-cp38-macosx_12_0_x86_64.whl (22.1 MB): CPython 3.8, macOS 12.0+ x86-64
  • openvino_tensorflow-2.3.0-cp37-cp37m-manylinux_2_27_x86_64.whl (27.1 MB): CPython 3.7m, manylinux glibc 2.27+ x86-64
  • openvino_tensorflow-2.3.0-cp37-cp37m-macosx_12_0_x86_64.whl (22.1 MB): CPython 3.7m, macOS 12.0+ x86-64

File hashes

openvino_tensorflow-2.3.0-cp39-cp39-win_amd64.whl
    SHA256:      5c40b7e97f8a7fb73440924d66421f878a277946e868dcebdcdb8a824a4bfada
    MD5:         37b7743ed2c1213cc591ff3551d44421
    BLAKE2b-256: 8bfbb163a3fdff1495c58c9e8f8f1c88d835a27293b9f96b890132340dd770ad

openvino_tensorflow-2.3.0-cp39-cp39-manylinux_2_27_x86_64.whl
    SHA256:      74864a9e35fcd5a5f1ca928e7c2c832716851cf899accf5d06d27df441969ffc
    MD5:         da7f7175f88286277fbe4a35b792c5cf
    BLAKE2b-256: 436bb6c8026d244a1f6041d94d9ead79bbc88a3d1c3211cf71be5e3406cf8429

openvino_tensorflow-2.3.0-cp39-cp39-macosx_12_0_x86_64.whl
    SHA256:      61b41d74b37aa12c538b6301fa15cea7048e5c46b4ff55a1a5533a81265a40bf
    MD5:         01970d16959a9057eb498d72a4e1c89d
    BLAKE2b-256: f381d8b00210c57c3c7e81e7ac16833b6676dfd5ce0c58238027aa711a0bbc4a

openvino_tensorflow-2.3.0-cp38-cp38-manylinux_2_27_x86_64.whl
    SHA256:      085d8639eccb564b0cad1d567f8787f3f05b7fff053559ae99179e8840a13610
    MD5:         c301921ecc5d13b03b2ed595b434ca28
    BLAKE2b-256: 775abca0953db18f1202dd5f68e578ad8df464464093498706fcb1b687a9ebfa

openvino_tensorflow-2.3.0-cp38-cp38-macosx_12_0_x86_64.whl
    SHA256:      33d9cf1e9bfaeb3e65ea5d45b2eacaa55763f6fb517de3842cbb63cad431e64c
    MD5:         163f3ef4f18cbbe34acf0871d1e5a79e
    BLAKE2b-256: 95a8686cbc4957953d20a7534daf84185bab0425219d922d3018c5b35603201e

openvino_tensorflow-2.3.0-cp37-cp37m-manylinux_2_27_x86_64.whl
    SHA256:      4fd454a24713f8f87a46cf31ae7868a97667b1048cf2c15ec467fa82723f2582
    MD5:         12b88fd8eabc94c53de9d58274049c3c
    BLAKE2b-256: fdce9665f54eca583ed2c253750390bc724a7ed114b51fa1329a8b20e37b59ab

openvino_tensorflow-2.3.0-cp37-cp37m-macosx_12_0_x86_64.whl
    SHA256:      c91e3bf8caedf68ef02a5596f4e34ce253784fc2e40483201fa8bfbad319d82c
    MD5:         727fc2f78a0d10a603a457f1f5de71d1
    BLAKE2b-256: 6ca2d3eaea54b05aaa263ae58f500f2d401829fd9451404f7a33e17839f9eeb5
