Converter for neural models into various formats.
ModelConverter - Compilation Library
Convert your ONNX models to a format compatible with any generation of Luxonis camera using the Model Compilation Library.
Status
CI Test and Deploy status badges are published for each of the RVC2, RVC3, RVC4, and Hailo packages.
Table of Contents
- Installation
- Before You Begin
- Instructions
- GPU Support
- Running ModelConverter
- Sharing Files
- Usage
- Examples
- Multi-Stage Conversion
- Interactive Mode
- Calibration Data
- Inference
- Benchmarking
Installation
System Requirements
ModelConverter requires Docker to be installed on your system. Ubuntu is recommended for the best compatibility. On Windows or macOS, it is recommended to install Docker using Docker Desktop; otherwise, follow the installation instructions for your OS from the official website.
Before You Begin
ModelConverter is in an experimental public beta stage. Some parts might change in the future.
To build the images, you need to download additional packages depending on the selected target.
RVC2 and RVC3
Requires openvino_2022_3_vpux_drop_patched.tar.gz to be present in docker/extra_packages. You can download the archive here.
RVC4
Requires the snpe.zip archive to be present in docker/extra_packages. You can download an archive with the current version here; you only need to rename it to snpe.zip and place it in the docker/extra_packages directory.
HAILO
Requires the hailo_ai_sw_suite_2024-04:1 docker image to be present on the system. You can download the image from the Hailo website. Furthermore, you need to use the docker/hailo/Dockerfile.public file to build the image; docker/hailo/Dockerfile is for internal use only.
Instructions
- Build the docker image (a concrete example follows after this list):
docker build -f docker/<package>/Dockerfile -t luxonis/modelconverter-<package>:latest .
- For easier use, you can install the ModelConverter CLI from PyPI:
pip install modelconv
For usage instructions, see modelconverter --help.
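For example, substituting rvc4 for <package> builds the RVC4 image:
docker build -f docker/rvc4/Dockerfile -t luxonis/modelconverter-rvc4:latest .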
GPU Support
To enable GPU acceleration for hailo conversion, install the NVIDIA Container Toolkit.
Running ModelConverter
Configuration for the conversion predominantly relies on a YAML config file. For reference, see defaults.yaml and other examples located in the shared_with_container/configs directory. However, you can override specific settings without altering the config file itself by passing command line arguments as key value pairs. For better understanding, see Examples and the sketch below.
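To give a feel for the shape of such a file, here is a minimal, hypothetical config sketch. It uses only keys that appear in the Examples section below; defaults.yaml remains the authoritative reference for the full schema:
input_model: models/resnet18.onnx
shape: [1, 3, 256, 256]
scale_values: [255, 255, 255]
reverse_input_channels: true
outputs:
  - name: output_0
calibration:
  path: calibration_data  # hypothetical local path; an s3:// URL also works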
Sharing Files
When using the supplied docker-compose.yaml, the shared_with_container directory facilitates file sharing between the host and container. This directory is mounted as /app/shared_with_container/ inside the container. You can place your models, calibration data, and config files here. The directory structure is:
shared_with_container/
│
├── calibration_data/
│ └── <calibration data will be downloaded here>
│
├── configs/
│ ├── resnet18.yaml
│ └── <configs will be downloaded here>
│
├── models/
│ ├── resnet18.onnx
│ └── <models will be downloaded here>
│
└── outputs/
└── <output_dir>
├── resnet18.onnx
├── resnet18.dlc
├── logs.txt
├── config.yaml
└── intermediate_outputs/
└── <intermediate files generated during the conversion>
While adhering to this structure is not mandatory as long as the files are visible inside the container, it is advised to keep the files organized.
The converter first searches for files exactly at the provided path. If not found, it searches relative to /app/shared_with_container/; for example, --path configs/resnet18.yaml resolves to /app/shared_with_container/configs/resnet18.yaml.
The output_dir can be specified using the --output-dir CLI argument. If such a directory already exists, the output_dir_name will be appended with the current date and time. If not specified, the output_dir_name will be autogenerated in the following format: <model_name>_to_<target>_<date>_<time>.
Usage
You can run the built image either manually using the docker run command or using the modelconverter CLI.
- Set your credentials as environment variables (if required):
export AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
export AWS_S3_ENDPOINT_URL=<your_aws_s3_endpoint_url>
- If the shared_with_container directory doesn't exist on your host, create it.
- If you are not using remote files, place the model, config, and calibration data in the respective directories (see Sharing Files).
- Execute the conversion:
  - If using the docker run command:
docker run --rm -it \
  -v $(pwd)/shared_with_container:/app/shared_with_container/ \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_S3_ENDPOINT_URL=$AWS_S3_ENDPOINT_URL \
  luxonis/modelconverter-<package>:latest \
  convert <target> \
  --path <s3_url_or_path> [ config overrides ]
  - If using the modelconverter CLI:
modelconverter convert <target> --path <s3_url_or_path> [ config overrides ]
  - If using docker-compose:
docker compose run <target> convert <target> ...
Examples
Use the resnet18.yaml config, but override calibration.path:
modelconverter convert rvc4 --path configs/resnet18.yaml \
calibration.path s3://path/to/calibration_data
Override inputs and outputs with command line arguments:
modelconverter convert rvc3 --path configs/resnet18.yaml \
inputs.0.name input_1 \
inputs.0.shape "[1,3,256,256]" \
outputs.0.name output_0
Specify all options via the command line without a config file:
modelconverter convert rvc2 input_model models/yolov6n.onnx \
scale_values "[255,255,255]" \
reverse_input_channels True \
shape "[1,3,256,256]" \
outputs.0.name out_0 \
outputs.1.name out_1 \
outputs.2.name out_2
Multi-Stage Conversion
The converter supports multi-stage conversion: converting multiple models where the output of one model is the input to another. For multi-stage conversion, you must specify the stages section in the config file; see defaults.yaml and multistage.yaml for reference, and the sketch below for a rough illustration.
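As a hypothetical sketch only (assuming each stage accepts the same keys as a single-model config; how one stage consumes another's output is defined in multistage.yaml and is not shown here), a stages section might look like:
stages:
  stage_name1:
    input_model: models/model1.onnx
    outputs:
      - name: out_0
  stage_name2:
    input_model: models/model2.onnx
    inputs:
      - name: input_1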
The output directory structure would be (assuming RVC4 conversion):
output_path/
├── config.yaml
├── modelconverter.log
├── stage_name1
│ ├── config.yaml
│ ├── intermediate_outputs/
│ ├── model1.onnx
│ └── model1.dlc
└── stage_name2
├── config.yaml
├── intermediate_outputs/
├── model2.onnx
└── model2.dlc
Interactive Mode
Run the container interactively without any post-target arguments:
modelconverter shell rvc4
Inside, you'll find all the necessary tools for manual conversion.
The modelconverter CLI is available inside the container as well.
Calibration Data
Calibration data can be a mix of images (.jpg, .png, .jpeg) and .npy, .raw files.
Image files will be loaded and converted to the format specified in the config.
No conversion is performed for .npy or .raw files; they are used as provided.
NOTE for RVC4: RVC4 expects images to be provided in NHWC layout. If you provide the calibration data as .npy or .raw files, you need to make sure they have the correct layout.
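If your arrays are saved in NCHW layout, a simple transpose before saving fixes this; a minimal sketch (file names and shapes are hypothetical):
import numpy as np

# Load a calibration sample stored in NCHW layout (hypothetical file).
arr = np.load("calibration_data/sample_nchw.npy")  # shape (1, 3, 256, 256)

# Reorder axes from NCHW (batch, channels, height, width)
# to NHWC (batch, height, width, channels), as RVC4 expects.
arr_nhwc = np.transpose(arr, (0, 2, 3, 1))  # shape (1, 256, 256, 3)

np.save("calibration_data/sample_nhwc.npy", arr_nhwc)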
Inference
Basic inference support is provided. To run inference, use modelconverter infer <target> <args>. For usage instructions, see modelconverter infer --help.
The input files must be provided in a specific directory structure.
input_path/
├── <name of first input node>
│ ├── 0.npy
│ ├── 1.npy
│ └── ...
├── <name of second input node>
│ ├── 0.npy
│ ├── 1.npy
│ └── ...
├── ...
└── <name of last input node>
├── 0.npy
├── 1.npy
└── ...
Note: The numpy files are sent to the model with no preprocessing, so they must be provided in the correct format and shape.
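As an illustration, a small script like the following can lay out such a directory; the input node name, sample count, shape, and dtype here are hypothetical and must match your model:
from pathlib import Path

import numpy as np

input_path = Path("input_path")

# One subdirectory per input node, named after the node; files are
# numbered 0.npy, 1.npy, ... ("images" and the shape are hypothetical).
samples = {
    "images": [np.random.rand(1, 3, 256, 256).astype(np.float32) for _ in range(4)],
}

for node_name, arrays in samples.items():
    node_dir = input_path / node_name
    node_dir.mkdir(parents=True, exist_ok=True)
    for i, arr in enumerate(arrays):
        # Arrays are fed to the model as-is, so shape and dtype
        # must already be correct here.
        np.save(node_dir / f"{i}.npy", arr)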
The output files are then saved in a similar structure.
Inference Example
For the yolov6n model, the input directory structure would be:
input_path/
└── images
├── 0.npy
├── 1.npy
└── ...
To run the inference, use:
modelconverter infer rvc4 \
--model_path <path_to_model.dlc> \
--output-dir <output_dir_name> \
--input_path <input_path> \
--path <path_to_config.yaml>
The output directory structure would be:
output_path/
├── output1_yolov6r2
│ ├── 0.npy
│ ├── 1.npy
│ └── ...
├── output2_yolov6r2
│ └── <outputs>
└── output3_yolov6r2
└── <outputs>
Benchmarking
The ModelConverter additionally supports benchmarking of converted models.
To install the package with the benchmarking dependencies, use:
pip install modelconv[bench]
To run the benchmark, use modelconverter benchmark <target> <args>. For usage instructions, see modelconverter benchmark --help.
Example:
modelconverter benchmark rvc3 --model-path <path_to_model.xml>
The command prints a table with the benchmark results to the console and optionally saves the results to a .csv file.