# model2c

ML model to low-level inference converter.

## About the Project

Python API and command-line tool to convert ML models into low-level inference code for embedded platforms.
## Getting Started

### Prerequisites

Make sure you have [onnx2c](https://github.com/kraiskil/onnx2c) installed and added to your `PATH`.
Make sure you have the Protocol Buffers libraries installed, e.g.:

- Ubuntu: `apt install libprotobuf-dev protobuf-compiler`
- macOS: `brew install protobuf`
Get the sources:

```sh
git clone https://github.com/kraiskil/onnx2c.git
cd onnx2c
git submodule update --init
```
Then run a standard CMake build:

```sh
mkdir build
cd build
cmake ..
make onnx2c
```
And finally add the resulting binary to your `PATH`:

```sh
export PATH=$PATH:/path/to/onnx2c/folder
```
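What "added to `PATH`" amounts to can be sketched in a few lines of Python. The stub executable below stands in for a real onnx2c build so the snippet runs anywhere; a real setup points `PATH` at the actual build folder instead:

```python
import os
import shutil
import stat
import tempfile

# Create a stub executable standing in for a built onnx2c binary
# (illustration only, not a real converter).
tmpdir = tempfile.mkdtemp()
stub = os.path.join(tmpdir, "onnx2c")
with open(stub, "w") as f:
    f.write("#!/bin/sh\necho onnx2c stub\n")
os.chmod(stub, os.stat(stub).st_mode | stat.S_IXUSR)

# Equivalent of `export PATH=$PATH:/path/to/onnx2c/folder`
os.environ["PATH"] += os.pathsep + tmpdir

# The tool now resolves the same way the shell would find it
print(shutil.which("onnx2c"))
```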
## Installation

You can install the package from PyPI:

```sh
pip install model2c
```
Or you can clone the repo and build directly from source:

```sh
git clone git@github.com:h3x4g0ns/model2c.git
cd model2c
make install
```
## Usage

Train a model with corresponding data until sufficient metrics are obtained.

```py
import os

import torch
from model2c.pytorch import convert

# run converter
convert(model=torch_model, input_shape=(batch_size, 1, 224, 224))
print(f"size of output model: {os.path.getsize('model.c')/1024} kilobytes")
```
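The size report at the end only depends on the path of the generated C file. A stdlib-only sketch of that last step, with a stub file standing in for the generated `model.c` so it runs without torch or a trained model:

```python
import os
import tempfile

# Hypothetical stand-in for a conversion result: we assume convert()
# writes its generated C to "model.c", so a stub file is written here
# purely to exercise the size report.
out_dir = tempfile.mkdtemp()
out_path = os.path.join(out_dir, "model.c")
with open(out_path, "w") as f:
    f.write("/* generated inference code would appear here */\n")

size_kb = os.path.getsize(out_path) / 1024
print(f"size of output model: {size_kb:.3f} kilobytes")
```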
## Support

`model2c` currently supports the following ML frameworks:

- `torch`
- `tf/keras`
## To Do

- `torch` convert
- `tf` convert
- make command-line utility
- include dynamic axis for batch size
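For the command-line utility item, one possible shape is a thin `argparse` wrapper around `convert`. Everything below (flag names, defaults) is hypothetical, not an existing model2c interface:

```python
import argparse

def build_parser():
    # Illustrative CLI sketch only: model2c does not ship this utility yet,
    # and every flag name here is an assumption.
    p = argparse.ArgumentParser(
        prog="model2c",
        description="Convert a trained ML model into low-level C inference code",
    )
    p.add_argument("model", help="path to the trained model")
    p.add_argument("-o", "--output", default="model.c", help="output C source file")
    p.add_argument("--input-shape", default="1,1,224,224",
                   help="comma-separated input shape, e.g. batch,channels,h,w")
    return p

args = build_parser().parse_args(["net.onnx", "-o", "net.c"])
print(args.model, args.output, args.input_shape)
```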
## Source Distribution

`model2c-1.0.0.tar.gz` (3.5 kB)