Acumos ONNX client library for pushing ONNX models to Acumos


onnx4acumos


onnx4acumos is a client library that allows modelers to on-board their ONNX models onto an Acumos platform, and also to test and run those models.

For more information on Acumos, see the Acumos AI Linux Foundation project, its Acumos AI Wiki and its Documentation.

Based on the acumos python client, we built the onnx4acumos client, able to create the ONNX model bundle with all the files required by an Acumos platform. With onnx4acumos, you can choose to on-board your ONNX model directly in Acumos, with or without micro-service creation (CLI on-boarding). Alternatively, you can save your Acumos model bundle locally for later manual on-boarding (Web on-boarding). In that case, onnx4acumos will create a ModelName directory in which you will find the Acumos model bundle and all the files necessary to test and run the Acumos ONNX model bundle locally.

Micro-service generation in Acumos transforms your ONNX model into a Docker-based serving model, ready to be deployed.

Installation

The main requirement for installing onnx4acumos is to first install the following dependencies:

onnx, zipp, acumos, acumos-model-runner, numpy, requests, protobuf, dill, appdirs, filelock, typing-inspect, grpcio, onnxruntime
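
For example, assuming all of these are available on PyPI under the names listed above, they can be installed with a single pip command:

pip install onnx zipp acumos acumos-model-runner numpy requests protobuf dill appdirs filelock typing-inspect grpcio onnxruntime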

Once that is done, you can install onnx4acumos with pip:

pip install onnx4acumos

Remark: if you use Acumos CLIO, you must use Python 3.6 with acumos 0.8.0 and acumos_model_runner 0.2.3.
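
For a CLIO platform, that means pinning those versions, for example:

pip install acumos==0.8.0 acumos-model-runner==0.2.3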

onnx4acumos Tutorial

This tutorial explains how to on-board an ONNX model onto an Acumos platform with micro-service creation. It’s meant to be followed linearly, and some code snippets depend on earlier imports and objects. Full ONNX Python client examples are available in the /acumos-onnx-client/acumos-package/onnx4acumos directory of the Acumos onnx client repository.

We assume that you have already installed the onnx4acumos package.

  1. On-boarding Onnx Model on Acumos Platform

  2. How to test & run your ONNX model

  3. More Examples

On-boarding Onnx Model on Acumos Platform

Clone the acumos-onnx-client from Gerrit (or from GitHub):

git clone "ssh://your_gerrit_login@gerrit.acumos.org:29418/acumos-onnx-client" && scp -p -P 29418 your_gerrit_login@gerrit.acumos.org:hooks/commit-msg "acumos-onnx-client/.git/hooks/"
or
git clone "ssh://your_gerrit_login@gerrit.acumos.org:29418/acumos-onnx-client"

You will need the following two files for this tutorial:

  • The model located at /acumos-onnx-client/acumos-package/onnx4acumos/OnnxModels/super_resolution_zoo.onnx

  • A configuration file located at /acumos-onnx-client/acumos-package/onnx4acumos/Templates/onnx4acumos.ini

For the first version of the onnx4acumos client, this configuration file is mandatory whichever kind of on-boarding you use (CLI or Web).

onnx4acumos.ini looks like this:

[certificates]
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt

[proxy]
https_proxy: socks5h://127.0.0.1:8886/
http_proxy:  socks5h://127.0.0.1:8886/

[session]
push_api: https://acumos/onboarding-app/v2/models

certificates: the location of the Acumos certificates generated during installation. You can also leave this parameter empty (CURL_CA_BUNDLE:); in that case you will just receive a warning.

proxy: the proxy you use to reach your Acumos platform.

session: the on-boarding model push API URL, available in the Acumos GUI on the ON-BOARDING MODEL page.
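
As a quick illustration, you can sanity-check your configuration file with Python's standard configparser module. This is just a sketch for verifying the file layout, not how onnx4acumos itself reads it:

# Minimal sketch: parse onnx4acumos.ini and print the expected keys.
import configparser

config = configparser.ConfigParser()
config.read("onnx4acumos.ini")

print(config.get("certificates", "CURL_CA_BUNDLE", fallback=""))  # may be empty
print(config.get("proxy", "https_proxy", fallback=""))
print(config.get("proxy", "http_proxy", fallback=""))
print(config.get("session", "push_api"))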

To on-board the super_resolution_zoo model by CLI onto an Acumos platform with micro-service creation, use the following command line:

onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -push -ms

In this command line, the -push parameter is used to on-board the ONNX model directly in Acumos (CLI on-boarding). You will be prompted to enter your on-boarding token: onboarding token = “your Acumos login”:”authentication token” (example: acumos_user:a2a6a9e8f4gbg3c147eq9g3h). The “authentication token” can be retrieved in the Acumos GUI in your personal settings. The -ms parameter is used to launch the micro-service creation in Acumos right after the on-boarding. If -ms is omitted, the model will be on-boarded without micro-service generation (don’t worry, you can create the micro-service later in Acumos).

To on-board the super_resolution_zoo model by Web onto an Acumos platform, follow the steps below.

First, you have to dump the super_resolution_zoo model locally:

onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -dump -f input/cat.jpg

The command line above creates a “ModelName” directory (“super_resolution_zoo” in our case) containing all the files needed to test the ONNX model locally. The -f parameter is optional and is used to add an input data file to the ModelName_OnnxClient folder.

An Acumos model bundle is also created locally, ready to be on-boarded in Acumos manually (Web on-boarding). The -dump parameter, which is the default and can be omitted, saves the bundle locally.
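
Since -dump is the default, the command above is equivalent to:

onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -f input/cat.jpg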

You can find a description of the “ModelName” directory contents below:

https://gerrit.acumos.org/r/gitweb?p=acumos-onnx-client.git;a=blob_plain;f=docs/images/Capture2.png

In this directory, you can find:

  • ModelName_OnnxModelOnboarding.py: Python file used to on-board a model in Acumos by CLI and/or to dump the model bundle locally.

  • Dumped model directory (model bundle): directory that contains all the required files needed by an Acumos platform.

  • Zipped model bundle (ModelName.zip): zip file (built from the dumped model directory) ready to be on-boarded in Acumos.

  • ModelName_OnnxClient directory: directory that contains all the necessary files to create a client/server pair able to test & run your model.

The last thing to do is to drag and drop the zipped model bundle onto the “ON-BOARDING BY WEB” page of Acumos, or use the browse function there, to on-board your model.

How to test & run your ONNX model

This on-boarding client can also be used to test and run your ONNX model, regardless of whether or not you want to on-board it in Acumos. There are two main steps: first launch the model runner server, then fill in the skeleton client file to create the ONNX client.

We assume that:

  • You have installed the acumos_model_runner package.

  • You have dumped the model bundle locally as explained above.

We use a client-server architecture to test and run ONNX models: first launch your model runner locally to create the server, then use a Python script as an ONNX client to interact with the server.

Launch model runner server

The local server part can be started quite simply as follows:

acumos_model_runner super_resolution_zoo/dumpedModel/super_resolution_zoo

The Acumos model runner will also create a Swagger interface, available at localhost:3330.
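
Once the server is running, you can do a quick sanity check from a browser or with a small script; a minimal sketch, assuming the default port 3330:

# Quick check that the model runner is up and serving its Swagger interface.
import requests

resp = requests.get("http://localhost:3330")
print(resp.status_code)  # expect 200 if the Swagger page is being served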

Fill skeleton client file to create the ONNX client

You can find the Python client skeleton file description below:

https://gerrit.acumos.org/r/gitweb?p=acumos-onnx-client.git;a=blob_plain;f=docs/images/Capture4.png

This Python client skeleton file is available in the folder super_resolution_zoo/super_resolution_zoo_OnnxClient.

All the steps to fill in this Python client skeleton are described below. You must fill in the parts between two lines of “*******”: just copy/paste the following code snippets into the right places in your skeleton file.

First, import your own needed libraries:

# Import your own needed library below
"**************************************"
import io           # used by the postprocessing snippet below, if the skeleton does not already import it
import numpy as np  # used by the postprocessing snippet below, if the skeleton does not already import it
from numpy import clip
import PIL
# torch imports
import torchvision.transforms as transforms
"**************************************"

Second, define your own needed methods:

# Define your own needed method below
"**************************************"
def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
"**************************************"

Third, define the preprocessing method:

# Insert the management of the ONNX data preprocessing below.
# The "preProcessingOutput" variable must contain the preprocessing result, with the type found in the run_xx_OnnxModel method signature below
"*************************************************************************************************"
global img_cb, img_cr
img = PIL.Image.open(preProcessingInput)
resize = transforms.Resize([224, 224])
img = resize(img)
img.show()
img_ycbcr = img.convert('YCbCr')
img_y, img_cb, img_cr = img_ycbcr.split()
to_tensor = transforms.ToTensor()
img_y = to_tensor(img_y)
img_y.unsqueeze_(0)
preprocessingResult = to_numpy(img_y)
"**************************************************************************************************"

# "PreProcessingOutput" variable affectation with the preprocessing result

Fourth, define the postprocessing method:

# Insert the management of the ONNX data postprocessing below.
# The "postProcessingInput" variable must contain the data of the ONNX model result, with the type found in the method signature below
"*************************************************************************************************"
global img_cb, img_cr
img_out_y = output[0]
img_out_y = np.array((img_out_y[0] * 255.0))
img_out_y = clip(img_out_y, 0, 255)
img_out_y = PIL.Image.fromarray(np.uint8(img_out_y), mode='L')
final_img = PIL.Image.merge(
    "YCbCr", [
        img_out_y,
        img_cb.resize(img_out_y.size, PIL.Image.BICUBIC),
        img_cr.resize(img_out_y.size, PIL.Image.BICUBIC),
    ]).convert("RGB")
f = io.BytesIO()
final_img.save(f, format='jpeg')
imageOutputData = f.getvalue()
final_img.show()
postProcessingResult = imageOutputData
"*************************************************************************************************"

And finally, redefine the REST URL if necessary (by default, localhost on port 3330):

restURL = "http://localhost:3330/model/methods/run_super_resolution_zoo_OnnxModel"

The final name of the filled skeleton ModelName_OnnxClientSkeleton.py could be ModelName_OnnxClient.py (the same name without “Skeleton”; super_resolution_zoo_OnnxClient.py in our example).
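
One simple way to do this is to copy the skeleton before filling it in, for example:

cp super_resolution_zoo_OnnxClientSkeleton.py super_resolution_zoo_OnnxClient.py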

The filled Python client skeleton file can be retrieved in the acumos-onnx-client repository: acumos-onnx-client/acumos-package/onnx4acumos/FilledClientSkeletonsExemples/super_resolution_zoo_OnnxClient.py.

Remark: to test super_resolution_zoo, you must have an X server running on your local system.

Command lines

You can find all the command lines to test and/or run the super_resolution_zoo ONNX model below:

onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -f InputData/cat.jpg
acumos_model_runner super_resolution_zoo/dumpedModel/super_resolution_zoo/ ## Launch the model runner server
python super_resolution_zoo_OnnxClient.py -f input/cat.jpg ## Launch client and send input data

super_resolution_zoo model example

https://gerrit.acumos.org/r/gitweb?p=acumos-onnx-client.git;a=blob_plain;f=docs/images/superResoZoo.png

More Examples

Below are some additional examples. Pre- and post-processing methods are available in the GitHub repository onnx/models.

GoogLeNet

You can find all the command lines for the GoogLeNet example below:

https://gerrit.acumos.org/r/gitweb?p=acumos-onnx-client.git;a=blob_plain;f=docs/images/Commandes.png
onnx4acumos OnnxModels/GoogleNet.onnx onnx4acumos.ini -f InputData/car4.jpg
acumos_model_runner GoogLeNet/dumpedModel/GoogleNet/ ## Launch the model runner server
cd  GoogLeNet/GoogLeNet_OnnxClient
python GoogLeNet_OnnxClient.py -f input/car4.jpg ## Launch client and send input data
https://gerrit.acumos.org/r/gitweb?p=acumos-onnx-client.git;a=blob_plain;f=docs/images/bvlc.png

In our example above:

python GoogLeNet_OnnxClient.py -f input/car4.jpg
python GoogLeNet_OnnxClient.py -f input/BM4.jpeg
python GoogLeNet_OnnxClient.py -f input/espresso.jpeg
python GoogLeNet_OnnxClient.py -f input/cat.jpg
python GoogLeNet_OnnxClient.py -f input/pesan3.jpg

Emotion Ferplus Model example

https://gerrit.acumos.org/r/gitweb?p=acumos-onnx-client.git;a=blob_plain;f=docs/images/emotionFerPlus.png
python emotion_ferplus_model_OnnxClient.py -f input/angryMan.png
python emotion_ferplus_model_OnnxClient.py -f input/sadness.png
python emotion_ferplus_model_OnnxClient.py -f input/happy.jpg
python emotion_ferplus_model_OnnxClient.py -f input/joker.jpg

That’s all :-)

Acumos ONNX Client Release Notes

v1.0.0, 22 January 2021


Acumos ONNX Client Developer Guide

Testing

We use a combination of tox, pytest, and flake8 to test acumos_onnx_client. Code which is not PEP8 compliant (aside from E501) will be considered a failing test. You can use tools like autopep8 to “clean” your code as follows:

$ pip install autopep8
$ cd acumos-onnx-client
$ autopep8 -r --in-place --ignore E501 acumos_onnx_client/ testing/ examples/

Run tox directly:

$ cd acumos-onnx-client
$ export WORKSPACE=$(pwd)  # env var normally provided by Jenkins
$ tox

You can also specify certain tox environments to test:

$ tox -e py36  # only test against Python 3.6
$ tox -e flake8  # only lint code

A set of integration tests is also available in acumos-package/testing/integration_tests. To run them, use acumos-package/testing/tox-integration.ini as the tox config (-c flag); on-boarding tests will be run with Python 3.6 to 3.9. You will need to set your user credentials and platform configuration in tox-integration.ini.

$ tox -c acumos-package/testing/tox-integration.ini

Packaging

The RST files in the docs/ directory are used to publish HTML pages to ReadTheDocs.io and to build the package long description in setup.py. The symlink from the subdirectory acumos-package to the docs/ directory is required for the Python packaging tools. Those tools build a source distribution from files in the package root, the directory acumos-package. The MANIFEST.in file directs the tools to pull files from directory docs/, and the symlink makes it possible because the tools only look within the package root.
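
For illustration, the directive that pulls the documentation into the source distribution looks something like the following (the exact contents of MANIFEST.in in the repository may differ):

recursive-include docs *.rst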
