Model API: model wrappers and pipelines for inference with OpenVINO
# OpenVINO Model API
Model API is a set of wrapper classes for particular tasks and model architectures that simplifies data preprocessing and postprocessing as well as routine procedures (model loading, asynchronous execution, etc.). It aims to simplify end-to-end model inference for different deployment scenarios, including local execution and serving. The Model API is based on the OpenVINO inference API.
## How it works
Model API searches for the additional information required for model inference (pre/postprocessing parameters, label names, etc.) directly in the OpenVINO Intermediate Representation. This information is used to prepare the inference data and to process and output the inference results in a human-readable format.
## Features
- Python and C++ API
- Automatic prefetch of public models from OpenVINO Model Zoo (Python only)
- Synchronous and asynchronous inference
- Local inference and serving through a REST API (Python only)
- Embedding of preprocessing into the model for faster inference
## Installation

### Python

1. Clone this repository
2. Navigate to the `model_api/python` folder
3. Run `pip install .`
### C++

1. Install dependencies. For installation on Ubuntu, you can use the following script:

   ```shell
   chmod +x model_api/cpp/install_dependencies.sh
   sudo model_api/cpp/install_dependencies.sh
   ```
2. Build the library:

   - Create a `build` folder and navigate into it:

     ```shell
     mkdir build && cd build
     ```

   - Run cmake:

     ```shell
     cmake ../model_api/cpp -DOpenCV_DIR=<OpenCV cmake dir> -DOpenVINO_DIR=<OpenVINO cmake dir>
     ```

   - Build:

     ```shell
     cmake --build . -j
     ```

   - To build a `.tar.gz` package with the library, run:

     ```shell
     cmake --build . --target package -j
     ```
## Usage

### Python

```python
from openvino.model_api.models import DetectionModel

# Create a model (downloaded and cached automatically for OpenVINO Model Zoo models).
# Use a URL to work with a served model, e.g. "localhost:9000/models/ssd300"
ssd = DetectionModel.create_model("ssd300")

# Run synchronous inference locally
detections = ssd(image)  # image is a numpy.ndarray

# Print the list of Detection objects with box coordinates, confidence and label string
print(f"Detection results: {detections}")
```
### C++

```cpp
#include <models/detection_model.h>
#include <models/results.h>

// Load the model fetched using the Python API
auto model = DetectionModel::create_model("~/.cache/omz/public/ssd300/FP16/ssd300.xml");

// Run synchronous inference locally
auto result = model->infer(image);  // image is a cv::Mat

// Iterate over the vector of DetectedObject with box coordinates, confidence and label string
for (auto& obj : result->objects) {
    std::cout << obj.label << " | " << obj.confidence << " | " << int(obj.x) << " | " << int(obj.y) << " | "
              << int(obj.x + obj.width) << " | " << int(obj.y + obj.height) << std::endl;
}
```
The model's static method `create_model()` has two overloads: one constructs the model from a string (a path or a model name), as shown above, and the other takes an already constructed `InferenceAdapter`.
## Prepare a model for `InferenceAdapter`

There are use cases when it is not possible to modify an internal `ov::Model` because it is hidden behind an `InferenceAdapter`; for example, the model can be served using OVMS. `create_model()` can construct a model from a given `InferenceAdapter`. That approach assumes that the model behind the `InferenceAdapter` has already been configured by `create_model()` called with a string (a path or a model name). It is possible to prepare such a model using C++ or Python:
### C++

```cpp
auto model = DetectionModel::create_model("~/.cache/omz/public/ssd300/FP16/ssd300.xml");
const std::shared_ptr<ov::Model>& ov_model = model->getModel();
ov::serialize(ov_model, "serialized.xml");
```
### Python

```python
model = DetectionModel.create_model("~/.cache/omz/public/ssd300/FP16/ssd300.xml")
model.save("serialized.xml")
```
After that, the model can be constructed from an `InferenceAdapter`:

```cpp
ov::Core core;
std::shared_ptr<ov::Model> ov_model = core.read_model("serialized.xml");
std::shared_ptr<InferenceAdapter> adapter = std::make_shared<OpenVINOInferenceAdapter>();
adapter->loadModel(ov_model, core);
auto model = DetectionModel::create_model(adapter);
```
For more details, please refer to the examples in this project.
## Supported models

### Python

- Image Classification:
- Object Detection:
  - OpenVINO Model Zoo models:
    - SSD-based models (e.g. "ssd300", "ssdlite_mobilenet_v2", etc.)
    - YOLO-based models (e.g. "yolov3", "yolov4", etc.)
    - CTPN: "ctpn"
    - DETR: "detr-resnet50"
    - CenterNet: "ctdet_coco_dlav0_512"
    - FaceBoxes: "faceboxes-pytorch"
    - RetinaFace: "retinaface-resnet50-pytorch"
    - Ultra Lightweight Face Detection: "ultra-lightweight-face-detection-rfb-320" and "ultra-lightweight-face-detection-slim-320"
    - NanoDet with ShuffleNetV2: "nanodet-m-1.5x-416"
    - NanoDet Plus with ShuffleNetV2: "nanodet-plus-m-1.5x-416"
- Semantic Segmentation:
- Instance Segmentation:
### C++

- Image Classification:
- Object Detection:
  - SSD-based models (e.g. "ssd300", "ssdlite_mobilenet_v2", etc.)
  - YOLO-based models (e.g. "yolov3", "yolov4", etc.)
  - CenterNet: "ctdet_coco_dlav0_512"
  - FaceBoxes: "faceboxes-pytorch"
  - RetinaFace: "retinaface-resnet50-pytorch"
- Semantic Segmentation:
The Model configuration documentation discusses the available configuration options.