Project description
openvino2tensorflow
This script converts an OpenVINO IR model to TensorFlow's saved_model, tflite, h5, TensorFlow.js, TF-TRT (TensorRT), CoreML, EdgeTPU, ONNX, Myriad blob, and pb formats. It also supports conversion from .pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite, and saved_model to ONNX.
This is a work in progress. I am adding layer support and bug fixes on a daily basis. If you have a model that you are having trouble converting, please attach the .bin and .xml files to an issue. I will try to support the conversion as far as possible.
1. Environment
- TensorFlow v2.3.1+
  pip3 install --upgrade tensorflow
  or: pip3 install --upgrade tf-nightly
- OpenVINO 2021.1.110+
- Python 3.6+
- tensorflowjs
  pip3 install --upgrade tensorflowjs
- tensorrt
- coremltools
  pip3 install --upgrade coremltools
- onnx
  pip3 install --upgrade onnx
- tf2onnx
  pip3 install --upgrade tf2onnx
- tensorflow-datasets
  pip3 install --upgrade tensorflow-datasets
- edgetpu_compiler
- Docker
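The pip-installable dependencies above can be set up in one pass; a sketch (versions are whatever pip resolves — OpenVINO, TensorRT, edgetpu_compiler, and Docker must be installed separately via their own installers):

```shell
#!/bin/bash
# Install the pip-installable dependencies listed above in one command.
# Swap "tensorflow" for "tf-nightly" if you want the nightly build instead.
pip3 install --upgrade \
  tensorflow \
  tensorflowjs \
  coremltools \
  onnx \
  tf2onnx \
  tensorflow-datasets
```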
2. Use case
- PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow
  - -> TensorFlow/Keras (NHWC) -> TFLite (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TFJS (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> TensorFlow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> TensorFlow/Keras (NHWC) -> CoreML (NHWC)
  - -> TensorFlow/Keras (NHWC) -> ONNX (NHWC)
  - -> Myriad Inference Engine Blob (NCHW)
- Caffe (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow
  - -> TensorFlow/Keras (NHWC) -> TFLite (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TFJS (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> TensorFlow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> TensorFlow/Keras (NHWC) -> CoreML (NHWC)
  - -> TensorFlow/Keras (NHWC) -> ONNX (NHWC)
  - -> Myriad Inference Engine Blob (NCHW)
- MXNet (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow
  - -> TensorFlow/Keras (NHWC) -> TFLite (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TFJS (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> TensorFlow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> TensorFlow/Keras (NHWC) -> CoreML (NHWC)
  - -> TensorFlow/Keras (NHWC) -> ONNX (NHWC)
  - -> Myriad Inference Engine Blob (NCHW)
- Keras (NHWC) -> OpenVINO (NCHW, Optimized) -> openvino2tensorflow
  - -> TensorFlow/Keras (NHWC) -> TFLite (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TFJS (NHWC)
  - -> TensorFlow/Keras (NHWC) -> TF-TRT (NHWC)
  - -> TensorFlow/Keras (NHWC) -> EdgeTPU (NHWC)
  - -> TensorFlow/Keras (NHWC) -> CoreML (NHWC)
  - -> TensorFlow/Keras (NHWC) -> ONNX (NHWC)
  - -> Myriad Inference Engine Blob (NCHW)
- saved_model -> saved_model_to_pb -> pb
- saved_model -> saved_model_to_tflite
  - -> TFLite
  - -> TFJS
  - -> TF-TRT
  - -> EdgeTPU
  - -> CoreML
  - -> ONNX
- pb -> pb_to_tflite -> TFLite
- pb -> pb_to_saved_model -> saved_model
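As a concrete instance of the first flow above, a PyTorch-to-TFLite pipeline might look like the following (a sketch: model names and paths are placeholders, and the mo.py location assumes a default OpenVINO 2021 install):

```shell
# 1. PyTorch (NCHW) -> ONNX (NCHW): done in your training code via torch.onnx.export
# 2. ONNX (NCHW) -> OpenVINO IR (NCHW)
python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py \
  --input_model model.onnx \
  --data_type FP32 \
  --output_dir openvino/FP32
# 3. OpenVINO IR (NCHW) -> TensorFlow/Keras (NHWC) -> TFLite (NHWC)
openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model True \
  --output_no_quant_float32_tflite True
```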
3. Supported Layers
- Currently, there are problems with the Reshape operation of 5D Tensor.
No. | OpenVINO Layer | TF Layer | Remarks |
---|---|---|---|
1 | Parameter | Input | |
2 | Const | Constant, Bias | |
3 | Convolution | Conv2D | |
4 | Add | Add | |
5 | ReLU | ReLU | |
6 | PReLU | PReLU | Maximum(0.0,x)+alpha*Minimum(0.0,x) |
7 | MaxPool | MaxPool2D | |
8 | AvgPool | AveragePooling2D | |
9 | GroupConvolution | DepthwiseConv2D, Conv2D/Split/Concat | |
10 | ConvolutionBackpropData | Conv2DTranspose | |
11 | Concat | Concat | |
12 | Multiply | Multiply | |
13 | Tan | Tan | |
14 | Tanh | Tanh | |
15 | Elu | Elu | |
16 | Sigmoid | Sigmoid | |
17 | HardSigmoid | hard_sigmoid | |
18 | SoftPlus | SoftPlus | |
19 | Swish | Swish | You can replace swish and hard-swish with each other by using the "--replace_swish_and_hardswish" option |
20 | Interpolate | ResizeNearestNeighbor, ResizeBilinear | |
21 | ShapeOf | Shape | |
22 | Convert | Cast | |
23 | StridedSlice | Strided_Slice | |
24 | Pad | Pad, MirrorPad | |
25 | Clamp | ReLU6, Clip | |
26 | TopK | ArgMax, top_k | |
27 | Transpose | Transpose | |
28 | Squeeze | Squeeze | |
29 | Unsqueeze | Identity, expand_dims | WIP |
30 | ReduceMean | reduce_mean | |
31 | ReduceMax | reduce_max | |
32 | ReduceMin | reduce_min | |
33 | ReduceSum | reduce_sum | |
34 | ReduceProd | reduce_prod | |
35 | Subtract | Subtract | |
36 | MatMul | MatMul | |
37 | Reshape | Reshape | |
38 | Range | Range | WIP |
39 | Exp | Exp | |
40 | Abs | Abs | |
41 | SoftMax | SoftMax | |
42 | Negative | Negative | |
43 | Maximum | Maximum | No broadcast |
44 | Minimum | Minimum | No broadcast |
45 | Acos | Acos | |
46 | Acosh | Acosh | |
47 | Asin | Asin | |
48 | Asinh | Asinh | |
49 | Atan | Atan | |
50 | Atanh | Atanh | |
51 | Ceiling | Ceil | |
52 | Cos | Cos | |
53 | Cosh | Cosh | |
54 | Sin | Sin | |
55 | Sinh | Sinh | |
56 | Gather | Gather | |
57 | Divide | Divide, FloorDiv | |
58 | Erf | Erf | |
59 | Floor | Floor | |
60 | FloorMod | FloorMod | |
61 | HSwish | HardSwish | x*ReLU6(x+3)*0.16666667, You can replace swish and hard-swish with each other by using the "--replace_swish_and_hardswish" option |
62 | Log | Log | |
63 | Power | Pow | No broadcast |
64 | Mish | Mish | x*Tanh(softplus(x)) |
65 | Selu | Selu | |
66 | Equal | equal | |
67 | NotEqual | not_equal | |
68 | Greater | greater | |
69 | GreaterEqual | greater_equal | |
70 | Less | less | |
71 | LessEqual | less_equal | |
72 | Select | Select | No broadcast |
73 | LogicalAnd | logical_and | |
74 | LogicalNot | logical_not | |
75 | LogicalOr | logical_or | |
76 | LogicalXor | logical_xor | |
77 | Broadcast | broadcast_to, ones, Multiply | numpy / bidirectional mode, WIP |
78 | Split | Split | |
79 | VariadicSplit | Split, Slice, SplitV | |
80 | MVN | reduce_mean, sqrt, reduce_variance | (x-reduce_mean(x))/sqrt(reduce_variance(x)+eps) |
81 | NonZero | not_equal, boolean_mask | |
82 | ReduceL2 | Multiply, reduce_sum, rsqrt | |
83 | SpaceToDepth | SpaceToDepth | |
84 | DepthToSpace | DepthToSpace | |
85 | Sqrt | sqrt | |
86 | SquaredDifference | squared_difference | |
87 | FakeQuantize | subtract, multiply, round, greater, where, less_equal, add | |
88 | Result | Identity | Output |
4. Setup
4-1. [Environment construction pattern 1] Execution by Docker (strongly recommended)
You do not need to install any packages other than Docker.
$ docker pull pinto0309/openvino2tensorflow
or
$ docker build -t pinto0309/openvino2tensorflow:latest .
# If no INT8 quantization or conversion to EdgeTPU model is performed
$ xhost +local: && \
docker run --gpus all -it --rm \
-v `pwd`:/home/user/workdir \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
pinto0309/openvino2tensorflow:latest bash
# For INT8 quantization and conversion to EdgeTPU model
# "TFDS" is the folder where TensorFlow Datasets are downloaded.
$ xhost +local: && \
docker run --gpus all -it --rm \
-v `pwd`:/home/user/workdir \
-v ${HOME}/TFDS:/workspace/TFDS \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
pinto0309/openvino2tensorflow:latest bash
4-2. [Environment construction pattern 2] Execution by Host machine
To install using the Python Package Index (PyPI), use the following command.
$ pip3 install openvino2tensorflow --upgrade
To install with the latest source code of the main branch, use the following command.
$ pip3 install git+https://github.com/PINTO0309/openvino2tensorflow --upgrade
5. Usage
5-1. openvino to tensorflow convert
usage: openvino2tensorflow [-h] --model_path MODEL_PATH
[--model_output_path MODEL_OUTPUT_PATH]
[--output_saved_model OUTPUT_SAVED_MODEL]
[--output_h5 OUTPUT_H5]
[--output_weight_and_json OUTPUT_WEIGHT_AND_JSON]
[--output_pb OUTPUT_PB]
[--output_no_quant_float32_tflite OUTPUT_NO_QUANT_FLOAT32_TFLITE]
[--output_weight_quant_tflite OUTPUT_WEIGHT_QUANT_TFLITE]
[--output_float16_quant_tflite OUTPUT_FLOAT16_QUANT_TFLITE]
[--output_integer_quant_tflite OUTPUT_INTEGER_QUANT_TFLITE]
[--output_full_integer_quant_tflite OUTPUT_FULL_INTEGER_QUANT_TFLITE]
[--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
[--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
[--calib_ds_type CALIB_DS_TYPE]
[--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
[--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
[--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
[--tfds_download_flg TFDS_DOWNLOAD_FLG]
[--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
[--output_tfjs OUTPUT_TFJS]
[--output_tftrt OUTPUT_TFTRT]
[--output_coreml OUTPUT_COREML]
[--output_edgetpu OUTPUT_EDGETPU]
[--output_onnx OUTPUT_ONNX]
[--onnx_opset ONNX_OPSET]
[--output_myriad OUTPUT_MYRIAD]
[--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES]
[--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES]
[--replace_swish_and_hardswish REPLACE_SWISH_AND_HARDSWISH]
[--optimizing_hardswish_for_edgetpu OPTIMIZING_HARDSWISH_FOR_EDGETPU]
[--replace_prelu_and_minmax REPLACE_PRELU_AND_MINMAX]
[--yolact]
[--weight_replacement_config WEIGHT_REPLACEMENT_CONFIG]
[--debug]
[--debug_layer_number DEBUG_LAYER_NUMBER]
optional arguments:
-h, --help show this help message and exit
--model_path MODEL_PATH
input IR model path (.xml)
--model_output_path MODEL_OUTPUT_PATH
The output folder path of the converted model file
--output_saved_model OUTPUT_SAVED_MODEL
saved_model output switch
--output_h5 OUTPUT_H5
.h5 output switch
--output_weight_and_json OUTPUT_WEIGHT_AND_JSON
weight of h5 and json output switch
--output_pb OUTPUT_PB
.pb output switch
--output_no_quant_float32_tflite OUTPUT_NO_QUANT_FLOAT32_TFLITE
float32 tflite output switch
--output_weight_quant_tflite OUTPUT_WEIGHT_QUANT_TFLITE
weight quant tflite output switch
--output_float16_quant_tflite OUTPUT_FLOAT16_QUANT_TFLITE
float16 quant tflite output switch
--output_integer_quant_tflite OUTPUT_INTEGER_QUANT_TFLITE
integer quant tflite output switch
--output_full_integer_quant_tflite OUTPUT_FULL_INTEGER_QUANT_TFLITE
full integer quant tflite output switch
--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
Input and output types when doing Integer Quantization
('int8 (default)' or 'uint8')
--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
String formulas for normalization. It is evaluated by
Python's eval() function.
Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
--calib_ds_type CALIB_DS_TYPE
Types of data sets for calibration. tfds or numpy
Default: numpy
--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
Dataset name for TensorFlow Datasets for calibration.
https://www.tensorflow.org/datasets/catalog/overview
--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
Split name for TensorFlow Datasets for calibration.
https://www.tensorflow.org/datasets/catalog/overview
--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
Download destination folder path for the calibration
dataset. Default: $HOME/TFDS
--tfds_download_flg TFDS_DOWNLOAD_FLG
True to automatically download datasets from
TensorFlow Datasets. True or False
--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
The path from which to load the .npy file containing
the numpy binary version of the calibration data.
Default: sample_npy/calibration_data_img_sample.npy
--output_tfjs OUTPUT_TFJS
tfjs model output switch
--output_tftrt OUTPUT_TFTRT
tftrt model output switch
--output_coreml OUTPUT_COREML
coreml model output switch
--output_edgetpu OUTPUT_EDGETPU
edgetpu model output switch
--output_onnx OUTPUT_ONNX
onnx model output switch
--onnx_opset ONNX_OPSET
onnx opset version number
--output_myriad OUTPUT_MYRIAD
myriad inference engine blob output switch
--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES
vpu number of shaves. Default: 4
--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES
vpu number of cmx slices. Default: 4
--replace_swish_and_hardswish REPLACE_SWISH_AND_HARDSWISH
Replace swish and hard-swish with each other
--optimizing_hardswish_for_edgetpu OPTIMIZING_HARDSWISH_FOR_EDGETPU
Optimizing hardswish for edgetpu
--replace_prelu_and_minmax REPLACE_PRELU_AND_MINMAX
Replace prelu and minimum/maximum with each other
--yolact Specify when converting the Yolact model
--weight_replacement_config WEIGHT_REPLACEMENT_CONFIG
Replaces the value of Const for each layer_id defined
in json. Specify the path to the json file.
'weight_replacement_config.json'
--debug debug mode switch
--debug_layer_number DEBUG_LAYER_NUMBER
The last layer number to output when debugging. Used
only when --debug=True
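For example, a full-integer quantization run that calibrates against TensorFlow Datasets could combine the switches above as follows (a sketch: the model path and the coco/2017 dataset name are placeholders; substitute any TFDS dataset suited to your model's domain):

```shell
openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model True \
  --output_full_integer_quant_tflite True \
  --output_integer_quant_type uint8 \
  --string_formulas_for_normalization 'data / 255.0' \
  --calib_ds_type tfds \
  --ds_name_for_tfds_for_calibration coco/2017 \
  --split_name_for_tfds_for_calibration validation \
  --download_dest_folder_path_for_the_calib_tfds $HOME/TFDS \
  --tfds_download_flg True
```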
5-2. saved_model to tflite convert
usage: saved_model_to_tflite [-h] --saved_model_dir_path
SAVED_MODEL_DIR_PATH
[--signature_def SIGNATURE_DEF]
[--input_shapes INPUT_SHAPES]
[--model_output_dir_path MODEL_OUTPUT_DIR_PATH]
[--output_no_quant_float32_tflite OUTPUT_NO_QUANT_FLOAT32_TFLITE]
[--output_weight_quant_tflite OUTPUT_WEIGHT_QUANT_TFLITE]
[--output_float16_quant_tflite OUTPUT_FLOAT16_QUANT_TFLITE]
[--output_integer_quant_tflite OUTPUT_INTEGER_QUANT_TFLITE]
[--output_full_integer_quant_tflite OUTPUT_FULL_INTEGER_QUANT_TFLITE]
[--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
[--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
[--calib_ds_type CALIB_DS_TYPE]
[--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
[--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
[--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
[--tfds_download_flg TFDS_DOWNLOAD_FLG]
[--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
[--output_tfjs OUTPUT_TFJS]
[--output_tftrt OUTPUT_TFTRT]
[--output_coreml OUTPUT_COREML]
[--output_edgetpu OUTPUT_EDGETPU]
[--output_onnx OUTPUT_ONNX]
[--onnx_opset ONNX_OPSET]
optional arguments:
-h, --help show this help message and exit
--saved_model_dir_path SAVED_MODEL_DIR_PATH
Input saved_model dir path
--signature_def SIGNATURE_DEF
Specifies the signature name to load from saved_model
--input_shapes INPUT_SHAPES
Overwrites an undefined input dimension (None or -1).
Specify the input shape in [n,h,w,c] format.
For non-4D tensors, specify [a,b,c,d,e], [a,b], etc.
A comma-separated list if there are multiple inputs.
(e.g.) --input_shapes [1,256,256,3],[1,64,64,3],[1,2,16,16,3]
--model_output_dir_path MODEL_OUTPUT_DIR_PATH
The output folder path of the converted model file
--output_no_quant_float32_tflite OUTPUT_NO_QUANT_FLOAT32_TFLITE
float32 tflite output switch
--output_weight_quant_tflite OUTPUT_WEIGHT_QUANT_TFLITE
weight quant tflite output switch
--output_float16_quant_tflite OUTPUT_FLOAT16_QUANT_TFLITE
float16 quant tflite output switch
--output_integer_quant_tflite OUTPUT_INTEGER_QUANT_TFLITE
integer quant tflite output switch
--output_full_integer_quant_tflite OUTPUT_FULL_INTEGER_QUANT_TFLITE
full integer quant tflite output switch
--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
Input and output types when doing Integer Quantization
('int8 (default)' or 'uint8')
--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
String formulas for normalization. It is evaluated by
Python's eval() function.
Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
--calib_ds_type CALIB_DS_TYPE
Types of data sets for calibration. tfds or numpy
Default: numpy
--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
Dataset name for TensorFlow Datasets for calibration.
https://www.tensorflow.org/datasets/catalog/overview
--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
Split name for TensorFlow Datasets for calibration.
https://www.tensorflow.org/datasets/catalog/overview
--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
Download destination folder path for the calibration
dataset. Default: $HOME/TFDS
--tfds_download_flg TFDS_DOWNLOAD_FLG
True to automatically download datasets from
TensorFlow Datasets. True or False
--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
The path from which to load the .npy file containing
the numpy binary version of the calibration data.
Default: sample_npy/calibration_data_img_sample.npy
--output_tfjs OUTPUT_TFJS
tfjs model output switch
--output_tftrt OUTPUT_TFTRT
tftrt model output switch
--output_coreml OUTPUT_COREML
coreml model output switch
--output_edgetpu OUTPUT_EDGETPU
edgetpu model output switch
--output_onnx OUTPUT_ONNX
onnx model output switch
--onnx_opset ONNX_OPSET
onnx opset version number
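For instance, a saved_model with an undefined batch dimension can be fixed to a concrete shape and converted in one invocation (a sketch; the directory name and shape are placeholders):

```shell
saved_model_to_tflite \
  --saved_model_dir_path saved_model \
  --input_shapes [1,256,256,3] \
  --output_no_quant_float32_tflite True \
  --output_float16_quant_tflite True
```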
5-3. pb to saved_model convert
usage: pb_to_saved_model [-h] --pb_file_path PB_FILE_PATH
--inputs INPUTS
--outputs OUTPUTS
[--model_output_path MODEL_OUTPUT_PATH]
optional arguments:
-h, --help show this help message and exit
--pb_file_path PB_FILE_PATH
Input .pb file path (.pb)
--inputs INPUTS (e.g.1) input:0,input:1,input:2
(e.g.2) images:0,input:0,param:0
--outputs OUTPUTS (e.g.1) output:0,output:1,output:2
(e.g.2) Identity:0,Identity:1,output:0
--model_output_path MODEL_OUTPUT_PATH
The output folder path of the converted model file
5-4. pb to tflite convert
usage: pb_to_tflite [-h] --pb_file_path PB_FILE_PATH --inputs INPUTS
--outputs OUTPUTS
[--model_output_path MODEL_OUTPUT_PATH]
optional arguments:
-h, --help show this help message and exit
--pb_file_path PB_FILE_PATH
Input .pb file path (.pb)
--inputs INPUTS (e.g.1) input,input_1,input_2
(e.g.2) images,input,param
--outputs OUTPUTS (e.g.1) output,output_1,output_2
(e.g.2) Identity,Identity_1,output
--model_output_path MODEL_OUTPUT_PATH
The output folder path of the converted model file
5-5. saved_model to pb convert
usage: saved_model_to_pb [-h] --saved_model_dir_path SAVED_MODEL_DIR_PATH
[--model_output_dir_path MODEL_OUTPUT_DIR_PATH]
[--signature_name SIGNATURE_NAME]
optional arguments:
-h, --help show this help message and exit
--saved_model_dir_path SAVED_MODEL_DIR_PATH
Input saved_model dir path
--model_output_dir_path MODEL_OUTPUT_DIR_PATH
The output folder path of the converted model file (.pb)
--signature_name SIGNATURE_NAME
Signature name to be extracted from saved_model
5-6. Extraction of IR weight
usage: ir_weight_extractor [-h] -m MODEL -o OUTPUT_PATH
optional arguments:
-h, --help show this help message and exit
-m MODEL, --model MODEL
input IR model path
-o OUTPUT_PATH, --output_path OUTPUT_PATH
weights output folder path
6. Execution sample
6-1. Conversion of OpenVINO IR to Tensorflow models
An OutOfMemory error may occur when converting to saved_model or h5 if the original model file is large. If that happens, try converting to a .pb file alone.
$ openvino2tensorflow \
--model_path=openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml \
--output_saved_model True \
--output_pb True \
--output_weight_quant_tflite True \
--output_float16_quant_tflite True \
--output_no_quant_float32_tflite True
6-2. Convert Protocol Buffer (.pb) to saved_model
This conversion is useful when you want to inspect the internal structure of .pb files, .tflite files, .h5 files, CoreML files, and IR (.xml) files in Netron: https://lutzroeder.github.io/netron/
$ pb_to_saved_model \
--pb_file_path model_float32.pb \
--inputs inputs:0 \
--outputs Identity:0
6-3. Convert Protocol Buffer (.pb) to tflite
$ pb_to_tflite \
--pb_file_path model_float32.pb \
--inputs inputs \
--outputs Identity,Identity_1,Identity_2
6-4. Convert saved_model to Protocol Buffer (.pb)
$ saved_model_to_pb \
--saved_model_dir_path saved_model \
--model_output_dir_path pb_from_saved_model \
--signature_name serving_default
6-5. Convert saved_model to OpenVINO IR
$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo_tf.py \
--saved_model_dir saved_model \
--output_dir openvino/reverse
6-6. Checking the structure of saved_model
$ saved_model_cli show \
--dir saved_model \
--tag_set serve \
--signature_def serving_default
6-7. Replace weights or constant values in Const OP
If the transformation behavior of Reshape, Transpose, etc. does not go as expected, you can force the contents of a Const to change by defining the weights and constant values in a JSON file and having the tool read it in.
$ openvino2tensorflow \
--model_path=xxx.xml \
--output_saved_model True \
--output_pb True \
--output_weight_quant_tflite True \
--output_float16_quant_tflite True \
--output_no_quant_float32_tflite True \
--weight_replacement_config weight_replacement_config_sample.json
Structure of JSON sample
{
"format_version": 1,
"layers": [
{
"layer_id": "1123",
"replace_mode": "direct",
"values": [
1,
2,
513,
513
]
},
{
"layer_id": "1125",
"replace_mode": "npy",
"values": "weights_sample/1125.npy"
}
]
}
No. | Elements | Description |
---|---|---|
1 | format_version | Format version of weight_replacement_config. Only 1 so far. |
2 | layers | A list of layers. Enclose it with "[ ]" to define multiple layers to child elements. |
2-1 | layer_id | ID of the Const layer whose weight/constant parameter is to be swapped. For example, specify "1123" for layer id="1123" for type="Const" in .xml. |
2-2 | replace_mode | "direct" or "npy". "direct": Specify the values of the Numpy matrix directly in the "values" attribute. Ignores the values recorded in the .bin file and replaces them with the values specified in "values". "npy": Load a Numpy binary file with the matrix output by np.save('xyz', a). The "values" attribute specifies the path to the Numpy binary file. |
2-3 | values | Specify the value or the path to the Numpy binary file to replace the weight/constant value recorded in .bin. The way to specify is as described in the description of 'replace_mode'. |
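A config exercising both modes can be assembled like this (a sketch: the layer IDs and .npy path are hypothetical and must match Const layers in your own .xml; assumes numpy is installed):

```shell
# Create the .npy referenced by "npy" mode (any matrix saved with np.save works;
# the (1, 8) shape here is purely illustrative).
mkdir -p weights_sample
python3 -c "import numpy as np; np.save('weights_sample/1125', np.ones((1, 8), dtype=np.float32))"

# Write the replacement config itself.
cat > weight_replacement_config_sample.json <<'EOF'
{
  "format_version": 1,
  "layers": [
    {"layer_id": "1123", "replace_mode": "direct", "values": [1, 2, 513, 513]},
    {"layer_id": "1125", "replace_mode": "npy", "values": "weights_sample/1125.npy"}
  ]
}
EOF
```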
6-8. Check the contents of the .npy file, which is a binary version of the image file
$ view_npy --npy_file_path sample_npy/calibration_data_img_sample.npy
Press the Q key to display the next image. calibration_data_img_sample.npy contains 20 images extracted from the MS-COCO dataset.
7. Output sample
8. Model Structure
https://digital-standard.com/threedpose/models/Resnet34_3inputs_448x448_20200609.onnx
(Side-by-side model structure visualizations: ONNX | OpenVINO | TFLite)
9. My article
10. Conversion Confirmed Models
- u-2-net
- mobilenet-v2-pytorch
- midasnet
- footprints
- efficientnet-b0-pytorch
- efficientdet-d0
- dense_depth
- deeplabv3
- colorization-v2-norebal
- age-gender-recognition-retail-0013
- resnet
- arcface
- emotion-ferplus
- mosaic
- retinanet
- shufflenet-v2
- squeezenet
- version-RFB-320
- yolov4
- yolov4x-mish
- ThreeDPoseUnityBarracuda - Resnet34_3inputs_448x448
- efficientnet-lite4
- nanodet
- yolov4-tiny
- yolov5s
- yolact
- MiDaS v2
- MODNet
- Person Reidentification
- DeepSort
Hashes for openvino2tensorflow-1.11.0.tar.gz
Algorithm | Hash digest |
---|---|
SHA256 | 37feb95584b59911e0e80d721c33a365dcd45a5caf3314581b2428dc2e5b9bd0 |
MD5 | f699ce2e272b73ba4554c7feaad84f1a |
BLAKE2b-256 | 2c33c5262de74d9a8d3dce0a56d2eb0f97030aff1881ec912c0709b9e7d67239 |
Hashes for openvino2tensorflow-1.11.0-py3-none-any.whl
Algorithm | Hash digest |
---|---|
SHA256 | edaac78c6884ac4b1b576c78a32c862d4fc85648c29d6accdd9417287af1221c |
MD5 | d5acdd031f27abaab13c093279e32fe1 |
BLAKE2b-256 | 14b5f5023f7939d65b817867e90332ae79b4fb4e8cc509e90638871cd8a07371 |