
OpenVINO™ integration with TensorFlow

Project description


OpenVINO™ integration with TensorFlow is a product designed for TensorFlow* developers who want to get started with OpenVINO™ in their inference applications. It delivers OpenVINO™ inline optimizations that enhance inference performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon, such as:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units - referred to as VPU
  • Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt the native OpenVINO™ APIs and runtime.]

Installation

Requirements

  • Ubuntu 18.04, macOS 11.2.3, or Windows¹ 10 - 64 bit
  • Python* 3.7, 3.8, or 3.9
  • TensorFlow* v2.9.2

¹The Windows release supports only Python 3.9

This OpenVINO™ integration with TensorFlow package comes with pre-built libraries of OpenVINO™ version 2022.2.0, so you do not have to install OpenVINO™ separately. This package supports:

  • Intel® CPUs

  • Intel® integrated GPUs

  • Intel® Movidius™ Vision Processing Units (VPUs)

      pip3 install -U pip
      pip3 install tensorflow==2.9.2
      pip3 install openvino-tensorflow==2.2.0
    

To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, please refer to: OpenVINO™ integration with TensorFlow alongside the Intel® Distribution of OpenVINO™ Toolkit.

For installation instructions on Windows, please refer to OpenVINO™ integration with TensorFlow for Windows.

For more details on installation, please refer to INSTALL.md; for build-from-source options, please refer to BUILD.md.

Verify Installation

Once you have installed OpenVINO™ integration with TensorFlow, you can use TensorFlow to run inference using a trained model.
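In many cases, enabling the integration requires only one extra import in existing TensorFlow code. The following is a minimal sketch; the tiny Keras model is purely illustrative, and the snippet falls back to a stub so it degrades gracefully when TensorFlow is not installed:

```python
try:
    import tensorflow as tf
    import openvino_tensorflow  # importing the package enables the inline optimizations

    # Purely illustrative model; a trained tf.keras model would be used the same way.
    model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
    out_shape = tuple(model(tf.zeros((1, 4))).shape)
except ImportError:
    out_shape = (1, 2)  # shape a Dense(2) layer would produce for a (1, 4) input

print(out_shape)
```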

To check if OpenVINO™ integration with TensorFlow is properly installed, run

python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
            import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

    TensorFlow version:  2.9.2
    OpenVINO integration with TensorFlow version: b'2.2.0'
    OpenVINO version used for this build: b'2022.2.0'
    TensorFlow version used for this build: v2.9.2
    CXX11_ABI flag used for this build: 1

Usage

By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated GPU or Intel® VPU for AI inferencing. Invoke the following function to change the hardware on which inference runs.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'GPU_FP16', and 'MYRIAD'.

To determine what processing units are available on your system for inference, use the following function:

openvino_tensorflow.list_backends()
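Putting the two calls together, a hedged sketch of backend selection (the preference for 'MYRIAD' is an illustrative policy, not a recommendation from this package, and the snippet assumes a plain 'CPU' default when openvino_tensorflow is not installed):

```python
try:
    import openvino_tensorflow as ovtf

    backends = ovtf.list_backends()  # e.g. ['CPU', 'GPU', 'MYRIAD'], depending on hardware
    # Illustrative policy: prefer a VPU when one is detected, otherwise stay on the CPU.
    ovtf.set_backend("MYRIAD" if "MYRIAD" in backends else "CPU")
except ImportError:
    backends = ["CPU"]  # assumed default when the package is unavailable

print(backends)
```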

For further performance improvements, it is advised to set the environment variable OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS=1. For more API calls and environment variables, see USAGE.md.

[Note: If a CUDA-capable device is present in the system, set the environment variable CUDA_VISIBLE_DEVICES to -1.]
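Both environment variables are typically set before TensorFlow is imported so that they take effect; a minimal sketch:

```python
import os

# Set these before importing tensorflow / openvino_tensorflow so they take effect.
os.environ["OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS"] = "1"  # performance hint (see USAGE.md)
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide CUDA devices from TensorFlow

print(os.environ["CUDA_VISIBLE_DEVICES"])
```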

Examples

To see what you can do with OpenVINO™ integration with TensorFlow, explore the demos located in the examples repository.

Docker Support

Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided and can be used to build runtime Docker* images for OpenVINO™ integration with TensorFlow on CPU, GPU, VPU, and VAD-M. For more details, see the Docker README.

Prebuilt Images

Try it on Intel® DevCloud

Sample tutorials are also hosted on Intel® DevCloud. The demo applications are implemented as Jupyter Notebooks, which you can execute interactively on Intel® DevCloud nodes to compare the results of OpenVINO™ integration with TensorFlow, native TensorFlow, and OpenVINO™.

License

OpenVINO™ integration with TensorFlow is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Please submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to OpenVINO™ integration with TensorFlow. If you have an idea for an improvement, share your proposal via a GitHub issue or submit a pull request.

We will review your contribution as soon as possible and, if any fixes or modifications are necessary, guide you with feedback. Before you make your contribution, make sure you can build OpenVINO™ integration with TensorFlow and run all the examples with your fix/patch applied. If you want to introduce a large feature, create test cases for it. Once your pull request is verified and meets these requirements, we will merge it into the repository.


* Other names and brands may be claimed as the property of others.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distributions

openvino_tensorflow-2.2.0-cp39-cp39-win_amd64.whl (27.8 MB) - CPython 3.9, Windows x86-64

openvino_tensorflow-2.2.0-cp39-cp39-manylinux_2_27_x86_64.whl (27.0 MB) - CPython 3.9, manylinux: glibc 2.27+ x86-64

openvino_tensorflow-2.2.0-cp39-cp39-macosx_11_0_x86_64.whl (25.5 MB) - CPython 3.9, macOS 11.0+ x86-64

openvino_tensorflow-2.2.0-cp38-cp38-manylinux_2_27_x86_64.whl (27.0 MB) - CPython 3.8, manylinux: glibc 2.27+ x86-64

openvino_tensorflow-2.2.0-cp38-cp38-macosx_11_0_x86_64.whl (25.5 MB) - CPython 3.8, macOS 11.0+ x86-64

openvino_tensorflow-2.2.0-cp37-cp37m-manylinux_2_27_x86_64.whl (27.0 MB) - CPython 3.7m, manylinux: glibc 2.27+ x86-64

openvino_tensorflow-2.2.0-cp37-cp37m-macosx_11_0_x86_64.whl (25.5 MB) - CPython 3.7m, macOS 11.0+ x86-64

File hashes

openvino_tensorflow-2.2.0-cp39-cp39-win_amd64.whl
  SHA256: 619121b1b91554682b43d66d830ac75b4a806069c2ba01e864170c88d02b6ef8
  MD5: ce4ec8f3f29be0c74f45601bc4fe0038
  BLAKE2b-256: cac73e8c166804a679e9cc01f2774bfbe8871ba8f935ac4487d3220fd1c05693

openvino_tensorflow-2.2.0-cp39-cp39-manylinux_2_27_x86_64.whl
  SHA256: a29c185c8daa783b0adf6dcbb686f26fa542669dad35a0cd6a097ae268d61763
  MD5: 30b1d8179332232a623b14157fd4ec69
  BLAKE2b-256: c849df2a4af12ec157884fca535f86c66039617fb6931e14a585d1817305e8f8

openvino_tensorflow-2.2.0-cp39-cp39-macosx_11_0_x86_64.whl
  SHA256: d145f50d5d693c52d784d708bed2d70703a7b5dbb6ffeb9ecbfdcd5d7478577e
  MD5: a5945150945bb3a3968f71529c9def88
  BLAKE2b-256: d5c4238faa4a28336b4e7cd061f87167d5373538691d7f625a0b43f0720f67fd

openvino_tensorflow-2.2.0-cp38-cp38-manylinux_2_27_x86_64.whl
  SHA256: 293f3f144ac055ae0ed99eb57db5fe9537d75ffdcedbc1eda77bd79403791764
  MD5: b6a57c796a049acfb3f20822cd2b9134
  BLAKE2b-256: 313a892a3ed1346e69ac23fdaa32822d1f3e735def42407c2fde100e162228b3

openvino_tensorflow-2.2.0-cp38-cp38-macosx_11_0_x86_64.whl
  SHA256: b16dca6484a731eb6954783de01109fb457e5f29059918e09bd1e4157b95aa2f
  MD5: c141b4f78bca2a43772ed0748f06d6cc
  BLAKE2b-256: c7d8ff694e3fb4e8084f432bc3a5c4d728a52a05f26ce1b00aef122c2363ec50

openvino_tensorflow-2.2.0-cp37-cp37m-manylinux_2_27_x86_64.whl
  SHA256: a4604837140b1dd06cc917799146d5349cf8d45f145206e7d4a9ca694057ef92
  MD5: cb45f098a33885689b49775b4b2ff730
  BLAKE2b-256: 78297a3193e40855fb152f9a31a2dc23ae155c3310889b6823ff7d454dddfd63

openvino_tensorflow-2.2.0-cp37-cp37m-macosx_11_0_x86_64.whl
  SHA256: 4cf1290aa0f93ab55927aac8eea346e662a58cc5030756d09ad9f42b32db1ae9
  MD5: b76a04161fc2af529f5ae5b50ca40db8
  BLAKE2b-256: da10f2d207bf1e7257000e4f5237811715a420be0552d9ae23ff19e7ed81249d
