ONNXRuntime Extensions

Project description

Introduction

ONNXRuntime Extensions is a comprehensive package that extends the capabilities of ONNX model conversion and inference.

  1. A CustomOp C++ library for ONNX Runtime, built on the ONNXRuntime CustomOp API.
  2. The PyOp feature, which lets a custom op be implemented as a Python function.
  3. Building an all-in-one ONNX model from the pre/post-processing code; see docs/pre_post_processing.md for details.
  4. Per-operator debugging in Python; see hook_model_op in the onnxruntime_extensions Python package.

Quick Start

The following code shows how to run an ONNX model together with an ONNXRuntime custom op in just a few lines.

import numpy
from onnxruntime_extensions import PyOrtFunction, VectorToString
# <ProjectDir>/tutorials/data/gpt-2/gpt2_tok.onnx
encode = PyOrtFunction.from_model('gpt2_tok.onnx')
# https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx
gpt2_core = PyOrtFunction.from_model('gpt2-lm-head-10.onnx')
decode = PyOrtFunction.from_customop(VectorToString, map={' a': [257]}, unk='<unknown>')

input_text = ['It is very cool to have']
input_ids, *_ = encode(input_text)  # tokenize the prompt with the GPT-2 tokenizer model
output, *_ = gpt2_core(input_ids)
next_id = numpy.argmax(output[:, :, -1, :], axis=-1)
print(input_text[0] + decode(next_id).item())

This is a simplified version of GPT-2 inference for demonstration only. A comprehensive solution for the GPT-2 model and its variants is under development; a link to the experimental version is available in the project repository.

Android/iOS

The pre/post-processing Python code above can be translated into an all-in-one model that runs on the Android/iOS mobile platforms, without requiring a Python runtime or any third-party dependencies. See the tutorial for details.

CustomOp Conversion

The mainstream ONNX converters support generating a custom op when an operation from the original framework cannot be expressed with standard ONNX operators. Check the following two examples on how to do this.

  1. CustomOp conversion by the PyTorch ONNX exporter (torch.onnx); a sketch follows this list.
  2. CustomOp conversion by tf2onnx
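
As an illustration of the PyTorch route, here is a minimal sketch: it registers a symbolic function that maps torch.inverse onto a node in a custom ONNX domain and then exports a tiny module. The ai.onnx.contrib domain name and the opset numbers are assumptions for illustration, and the exported node still needs a matching runtime kernel (for example the Inverse custom op shown later on this page).

# Hedged sketch: export a PyTorch module whose forward() uses an op that has
# no standard ONNX counterpart, emitting it as a node in a custom domain.
import torch
from torch.onnx import register_custom_op_symbolic

def inverse_symbolic(g, x):
    # Emit the node into the custom domain (the domain name is an assumption here).
    return g.op("ai.onnx.contrib::Inverse", x)

# Map the ATen inverse op onto the symbolic function above; the exact op name
# may vary across torch versions.
register_custom_op_symbolic("::inverse", inverse_symbolic, 1)

class InverseModel(torch.nn.Module):
    def forward(self, x):
        return torch.inverse(x)

torch.onnx.export(InverseModel(), torch.randn(3, 3), "inverse.onnx",
                  opset_version=12, custom_opsets={"ai.onnx.contrib": 1})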

Inference with CustomOp library

The CustomOp library is written in C++, so models can be run from native binaries. The following is an example of the C++ usage.

  // Register the custom op library with the ONNXRuntime engine so the ONNX model containing the custom op can be loaded
  Ort::ThrowOnError(Ort::GetApi().RegisterCustomOpsLibrary((OrtSessionOptions*)session_options, custom_op_library_filename, &handle));

  // The regular ONNXRuntime calls to run the model.
  Ort::Session session(env, model_uri, session_options);
  RunSession(session, inputs, outputs);

Of course, with Python this becomes much easier, since PyOrtFunction directly wraps the ONNX model as a Python function. But if the plain ONNXRuntime Python API is to be used instead, the inference process looks like this:

import onnxruntime as _ort
from onnxruntime_extensions import get_library_path as _lib_path

so = _ort.SessionOptions()
so.register_custom_ops_library(_lib_path())

# Run the ONNXRuntime session as usual, e.g.:
# sess = _ort.InferenceSession(model, so)
# sess.run(...)

More CustomOp

Contributions of custom op C++ implementations directly to this repository are welcome, since they benefit all users. Besides C++, if you want to quickly verify an ONNX model containing some custom operators using Python, PyOp can help with that:

import numpy
from onnxruntime_extensions import PyOp, onnx_op

# Implement the CustomOp by decorating a function with onnx_op
@onnx_op(op_type="Inverse", inputs=[PyOp.dt_float])
def inverse(x):
    # the user custom op implementation here:
    return numpy.linalg.inv(x)

# Run the model with this custom op
# model_func = PyOrtFunction(model_path)
# outputs = model_func(inputs)
# ...
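
For example, the decorated op can be exercised end to end by building a one-node model in the custom domain and running it through an ONNX Runtime session with the extensions library registered. This is a minimal sketch: the ai.onnx.contrib domain matches the one the package uses for its custom ops, while the tensor shapes and opset versions are illustrative assumptions.

# Hedged sketch: build a one-node ONNX model that calls the "Inverse" custom
# op defined above, then run it with onnxruntime plus the extensions library.
import numpy
from onnx import helper, TensorProto
import onnxruntime as _ort
from onnxruntime_extensions import get_library_path as _lib_path

node = helper.make_node('Inverse', ['x'], ['y'], domain='ai.onnx.contrib')
graph = helper.make_graph(
    [node], 'inverse_graph',
    [helper.make_tensor_value_info('x', TensorProto.FLOAT, [3, 3])],
    [helper.make_tensor_value_info('y', TensorProto.FLOAT, [3, 3])])
model = helper.make_model(
    graph, opset_imports=[helper.make_opsetid('', 12),
                          helper.make_opsetid('ai.onnx.contrib', 1)])

so = _ort.SessionOptions()
so.register_custom_ops_library(_lib_path())
sess = _ort.InferenceSession(model.SerializeToString(), so)

x = numpy.random.rand(3, 3).astype(numpy.float32)
y = sess.run(None, {'x': x})[0]
print(numpy.allclose(numpy.linalg.inv(x), y))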

Build and Development

This project supports Python and can easily be built from source; alternatively, a plain cmake build without any Python dependency is available.

Python package

  • Install Visual Studio with the C++ development tools on Windows, gcc on Linux, or Xcode on macOS, plus cmake on the Unix-like platforms. (Hint: on Windows, if the cmake bundled with Visual Studio is used, please specify set VCVARS=%ProgramFiles(x86)%\Microsoft Visual Studio\2019\<Edition>\VC\Auxiliary\Build\vcvars64.bat.)
  • Prepare a Python environment and install the pip packages listed in requirements.txt.
  • Run python setup.py install to build and install the package.
  • OR run python setup.py develop to install the package in development mode, which is friendlier for developers since reinstallation is not needed after every build.

Test:

  • Run pytest test in the project root directory; a quick import sanity check is sketched below.
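
As a minimal post-install sanity check (a sketch; the printed path varies by platform), the package can be imported and asked for the native custom-op library it ships with:

# Verify the package imports and its bundled custom-op library exists on disk.
import os
from onnxruntime_extensions import get_library_path

lib = get_library_path()
print(lib, os.path.isfile(lib))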

The shared library for non-Python usage

If only the DLL/shared library is needed, without any Python dependency, run build.bat or bash ./build.sh to build the library. By default the DLL or shared library will be generated in the directory out/<OS>/<FLAVOR>. There is a unit test to help verify the build.

The static library and linking with ONNXRuntime

For the sake of binary size, the project can be built as a static library and linked into ONNXRuntime. A script is provided for this, which is especially useful when building the mobile release.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

  • onnxruntime_extensions-0.3.1-cp39-cp39-win_amd64.whl (474.0 kB): CPython 3.9, Windows x86-64
  • onnxruntime_extensions-0.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (988.2 kB): CPython 3.9, manylinux (glibc 2.17+), x86-64
  • onnxruntime_extensions-0.3.1-cp39-cp39-macosx_10_14_x86_64.whl (733.1 kB): CPython 3.9, macOS 10.14+, x86-64
  • onnxruntime_extensions-0.3.1-cp38-cp38-win_amd64.whl (473.9 kB): CPython 3.8, Windows x86-64
  • onnxruntime_extensions-0.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (988.1 kB): CPython 3.8, manylinux (glibc 2.17+), x86-64
  • onnxruntime_extensions-0.3.1-cp38-cp38-macosx_10_14_x86_64.whl (733.1 kB): CPython 3.8, macOS 10.14+, x86-64
  • onnxruntime_extensions-0.3.1-cp37-cp37m-win_amd64.whl (475.0 kB): CPython 3.7m, Windows x86-64
  • onnxruntime_extensions-0.3.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (990.6 kB): CPython 3.7m, manylinux (glibc 2.17+), x86-64
  • onnxruntime_extensions-0.3.1-cp37-cp37m-macosx_10_14_x86_64.whl (731.6 kB): CPython 3.7m, macOS 10.14+, x86-64
  • onnxruntime_extensions-0.3.1-cp36-cp36m-win_amd64.whl (475.0 kB): CPython 3.6m, Windows x86-64
  • onnxruntime_extensions-0.3.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (990.4 kB): CPython 3.6m, manylinux (glibc 2.17+), x86-64
  • onnxruntime_extensions-0.3.1-cp36-cp36m-macosx_10_14_x86_64.whl (731.6 kB): CPython 3.6m, macOS 10.14+, x86-64
