OpenVINO™ integration with TensorFlow

OpenVINO™ integration with TensorFlow is a product designed for TensorFlow* developers who want to get started with OpenVINO™ in their inferencing applications. This product delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units - referred to as VPU
  • Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt the native OpenVINO™ APIs and runtime.]

Installation

Requirements

  • Ubuntu 18.04, macOS 11.2.3, or Windows¹ 10 (64-bit)
  • Python* 3.7, 3.8, or 3.9
  • TensorFlow* v2.9.1

¹ The Windows release supports only Python 3.9

This OpenVINO™ integration with TensorFlow package comes with pre-built libraries of OpenVINO™ version 2022.1.0, so you do not need to install OpenVINO™ separately. This package supports:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units (VPUs)

Install the package and its TensorFlow dependency with pip:

    pip3 install -U pip
    pip3 install tensorflow==2.9.1
    pip3 install openvino-tensorflow==2.1.0

To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, please refer to: OpenVINO™ integration with TensorFlow alongside the Intel® Distribution of OpenVINO™ Toolkit.

For installation instructions on Windows, please refer to OpenVINO™ integration with TensorFlow for Windows.

For more details on installation, please refer to INSTALL.md; for build-from-source options, please refer to BUILD.md.

Verify Installation

Once you have installed OpenVINO™ integration with TensorFlow, you can use TensorFlow to run inference using a trained model.

To check if OpenVINO™ integration with TensorFlow is properly installed, run

python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
            import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

    TensorFlow version:  2.9.1
    OpenVINO integration with TensorFlow version: b'2.1.0'
    OpenVINO version used for this build: b'2022.1.0'
    TensorFlow version used for this build: v2.9.1
    CXX11_ABI flag used for this build: 1

Usage

By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated GPU or Intel® VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'GPU_FP16', and 'MYRIAD'.

To determine what processing units are available on your system for inference, use the following function:

openvino_tensorflow.list_backends()
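
For example, the snippet below (a minimal sketch; the return value of list_backends() is assumed here to be a list of backend name strings) selects the integrated GPU when the system reports it and otherwise keeps the CPU default:

    import openvino_tensorflow as ovtf

    # list_backends() is assumed to return the names of the devices that can
    # run inference on this machine, e.g. ['CPU', 'GPU', 'MYRIAD'].
    available = ovtf.list_backends()
    print("Available backends:", available)

    # Prefer the integrated GPU when present, otherwise keep the CPU default.
    ovtf.set_backend('GPU' if 'GPU' in available else 'CPU')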

For further performance improvements, it is advised to set the environment variable OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS=1. For more API calls and environment variables, see USAGE.md.

[Note: If a CUDA-capable device is present in the system, set the environment variable CUDA_VISIBLE_DEVICES to -1.]
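
Putting these pieces together, a minimal end-to-end sketch might look like the following. The Keras model here is only a placeholder (any trained TensorFlow model is used the same way), and the environment variables are set before TensorFlow is imported, following the notes above:

    import os

    # Set the environment variables before TensorFlow is imported
    # (see the notes above on CUDA devices and variable conversion).
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
    os.environ["OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS"] = "1"

    import numpy as np
    import tensorflow as tf
    import openvino_tensorflow as ovtf  # importing this module enables the inline optimizations

    ovtf.set_backend('CPU')  # or 'GPU', 'GPU_FP16', 'MYRIAD'

    # Placeholder model; a real application would load its own trained model.
    model = tf.keras.applications.MobileNetV2(weights=None)
    dummy_input = np.random.rand(1, 224, 224, 3).astype(np.float32)

    # Inference goes through the standard TensorFlow API; supported subgraphs
    # are dispatched to the selected OpenVINO backend behind the scenes.
    predictions = model.predict(dummy_input)
    print(predictions.shape)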

Examples

To see what you can do with OpenVINO™ integration with TensorFlow, explore the demos located in the examples repository.

Docker Support

Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided, which can be used to build runtime Docker* images for OpenVINO™ integration with TensorFlow on CPU, GPU, VPU, and VAD-M. For more details, see the Docker README.


Try it on Intel® DevCloud

Sample tutorials are also hosted on Intel® DevCloud. The demo applications are implemented as Jupyter Notebooks. You can execute them interactively on Intel® DevCloud nodes and compare the results of OpenVINO™ integration with TensorFlow, native TensorFlow, and OpenVINO™.

License

OpenVINO™ integration with TensorFlow is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Please submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to OpenVINO™ integration with TensorFlow. If you have an idea for improvement, share your proposal via a GitHub issue and submit a pull request.

We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build OpenVINO™ integration with TensorFlow and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon verification, we will merge your pull request into the repository, provided it meets the above-mentioned requirements and proves acceptable.


* Other names and brands may be claimed as the property of others.


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • openvino_tensorflow-2.1.0-cp39-cp39-win_amd64.whl (26.3 MB): CPython 3.9, Windows x86-64
  • openvino_tensorflow-2.1.0-cp39-cp39-manylinux_2_27_x86_64.whl (25.1 MB): CPython 3.9, manylinux (glibc 2.27+) x86-64
  • openvino_tensorflow-2.1.0-cp39-cp39-macosx_11_0_x86_64.whl (25.0 MB): CPython 3.9, macOS 11.0+ x86-64
  • openvino_tensorflow-2.1.0-cp38-cp38-manylinux_2_27_x86_64.whl (25.1 MB): CPython 3.8, manylinux (glibc 2.27+) x86-64
  • openvino_tensorflow-2.1.0-cp38-cp38-macosx_11_0_x86_64.whl (25.0 MB): CPython 3.8, macOS 11.0+ x86-64
  • openvino_tensorflow-2.1.0-cp37-cp37m-manylinux_2_27_x86_64.whl (25.1 MB): CPython 3.7m, manylinux (glibc 2.27+) x86-64
  • openvino_tensorflow-2.1.0-cp37-cp37m-macosx_11_0_x86_64.whl (25.0 MB): CPython 3.7m, macOS 11.0+ x86-64

File hashes

Hashes for openvino_tensorflow-2.1.0-cp39-cp39-win_amd64.whl
Algorithm Hash digest
SHA256 1b9420a4e7f568bfe0d0aa53bb6183a7bb153e22b9c5f10c90799360dfe36368
MD5 84197e87174a931429184c8d98eda86c
BLAKE2b-256 10c3fb6d62fd8c41194bc54f913c34867afc3f6037efb8d60fa9739a8cc975d4

Hashes for openvino_tensorflow-2.1.0-cp39-cp39-manylinux_2_27_x86_64.whl
Algorithm Hash digest
SHA256 aa60b5d08eaf06912f469f98d9db9b78558626d837d2398b962231521cff2dfb
MD5 08f72f2e68fc175ec3e1f7cba1e7b90d
BLAKE2b-256 42332e689dacefd1b5022564979afef0b311e21da7b52ef360962be2d1c7f872

Hashes for openvino_tensorflow-2.1.0-cp39-cp39-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 9fc6d79b5564107cfbb295ad478819f45daf700f45adff6d2da1431e52cd4eb0
MD5 2973fb139f28bfbad016d4a8acf865db
BLAKE2b-256 fd309afde57900430d02224d61e0613fac75c6fed4d713c5f4399f262d2882e4

Hashes for openvino_tensorflow-2.1.0-cp38-cp38-manylinux_2_27_x86_64.whl
Algorithm Hash digest
SHA256 e3b312f871162252b0b77becd6cff1786880514a0a8a78036d37da9b987b9c44
MD5 81dc7997302cb22e51f8112ed5d6e553
BLAKE2b-256 257e222c0dd6cd9f543f49afc5a95825fc3257ce9f526ea06ee0ea0e9c5a4e64

Hashes for openvino_tensorflow-2.1.0-cp38-cp38-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 1aba127278b37947f6da620f55159feb5d48ba7b3a2fb330607998131071c7af
MD5 76f2ae312ce9e9fea6da6054062b8861
BLAKE2b-256 616b4c81f2c31c1c05da5733e9ac1205ce6d7c0d59fa420d01bcf090b11fb701

Hashes for openvino_tensorflow-2.1.0-cp37-cp37m-manylinux_2_27_x86_64.whl
Algorithm Hash digest
SHA256 407bdf65cb67b0c7e0549c1aca351fb64518c729f3b07c08801c78aeb995e605
MD5 efa8f967248b7f05df6ae7c64fa9187a
BLAKE2b-256 70771270625817075211d1f25e4f57dcfaa8c39d45b1b999ea45af3a5ca0f7ba

Hashes for openvino_tensorflow-2.1.0-cp37-cp37m-macosx_11_0_x86_64.whl
Algorithm Hash digest
SHA256 362180f3b082ef9c246327a9af09f811b40a89b1f33b459b8a5464c20fd1084d
MD5 da43a5498d6ed8c42fdfc107529db4e5
BLAKE2b-256 f0b7c7b761cad9059189e27325bbd2ffbb249361e19b92cb4353e5cf028b6363
