
ML model to low-level inference converter

Project description


About the Project

A Python API and command-line tool to convert ML models into low-level inference code for embedded platforms.

Getting Started


Make sure you have either torch, or tensorflow and tf2onnx, installed.

Furthermore, make sure you have onnx2c installed and added to PATH.

Lastly, you need the Protocol Buffers libraries installed, e.g.:

  • Ubuntu: apt install libprotobuf-dev protobuf-compiler
  • macOS: brew install protobuf

Get the sources:

git clone
cd onnx2c
git submodule update --init

Then run a standard CMake build

mkdir build
cd build
cmake ..
make onnx2c

And finally, add it to your PATH:

export PATH=$PATH:/path/to/onnx2c/folder


You can install the package from PyPI:

pip install model2c

Or you can clone the repo and build directly from source:

git clone
cd model2c
make install
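Either way, you can confirm the package is visible to Python. This quick check uses only the standard library, not the model2c API:

```python
# Quick check that the package is visible to Python; uses only the
# standard library, not the model2c API.
from importlib import metadata
from typing import Optional

def installed_version(pkg: str) -> Optional[str]:
    """Return the installed version of pkg, or None if it is absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("model2c") or "model2c is not installed")
```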


Train a model on your data until sufficient metrics are obtained, then run the conversion:

import os
import torch
from model2c.pytorch import convert

# run converter
convert(model=torch_model, input_shape=(batch_size, 1, 224, 224))
print(f"size of output model: {os.path.getsize('model.c')/1024} kilobytes")
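Under the hood, the conversion amounts to exporting the model to ONNX and feeding it through onnx2c. A minimal sketch of that second step (`onnx_to_c` is a hypothetical helper for illustration, not part of the model2c API; it assumes onnx2c is on PATH and writes the generated C source to stdout):

```python
# Sketch of the second half of the pipeline: once the model is exported
# to ONNX, onnx2c turns it into C source. `onnx_to_c` is a hypothetical
# helper, not part of the model2c API.
import subprocess

def onnx_to_c(onnx_path, c_path="model.c", run=False):
    """Build the onnx2c command line; optionally run it and capture model.c."""
    cmd = ["onnx2c", onnx_path]
    if run:
        # onnx2c emits the generated C on stdout, so redirect it to a file
        with open(c_path, "w") as f:
            subprocess.run(cmd, stdout=f, check=True)
    return cmd

# onnx_to_c("model.onnx", run=True) would write model.c next to the script
```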


model2c currently supports the following ML frameworks:

  • torch
  • tf/keras

To Do

  • torch convert
  • tf convert
  • make command line utility
  • include dynamic axis for batch size
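The dynamic batch axis item refers to marking axis 0 as symbolic during ONNX export, so the exported graph is not pinned to a single batch size. A sketch using torch.onnx.export's documented dynamic_axes argument (the tensor names here are illustrative):

```python
# dynamic_axes maps tensor names to {axis_index: symbolic_name};
# marking axis 0 keeps the batch dimension symbolic in the exported graph.
dynamic_axes = {
    "input": {0: "batch_size"},
    "output": {0: "batch_size"},
}

# Passed at export time, e.g.:
# torch.onnx.export(model, dummy_input, "model.onnx",
#                   input_names=["input"], output_names=["output"],
#                   dynamic_axes=dynamic_axes)
```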


Download files

Download the file for your platform.

Source Distribution

model2c-1.0.1.tar.gz (3.6 kB)


Built Distribution

model2c-1.0.1-py3-none-any.whl (3.9 kB)

