Acumos ONNX client library for pushing ONNX models to Acumos
Project description
onnx4acumos
onnx4acumos is a client library that allows modelers to on-board their ONNX models on an Acumos platform, and also to test and run them.
For more information on Acumos, see the Acumos AI Linux Foundation project, its Acumos AI Wiki and its Documentation.
Based on the Acumos Python client, we built the onnx4acumos client, able to create the ONNX model bundle with all the files required by an Acumos platform. When you use onnx4acumos, you can choose to on-board your ONNX model directly in Acumos, with or without micro-service creation (CLI on-boarding), or to save your Acumos model bundle locally for later manual on-boarding (Web on-boarding). In that case, onnx4acumos creates a ModelName directory in which you will find the Acumos model bundle and all the files necessary to test and run it locally.
Micro-service generation in Acumos turns your ONNX model into a serving model, based on Docker, ready to be deployed.
Installation
The main requirement for installing onnx4acumos is to first install the following dependencies:
onnx, zipp, acumos, acumos-model-runner, numpy, requests, protobuf, dill, appdirs, filelock, typing-inspect, grpcio, onnxruntime
Once that is done, you can install onnx4acumos with pip:
pip install onnx4acumos
Remark: if you use Acumos Clio, you must use Python 3.6 with acumos 0.8.0 and acumos_model_runner 0.2.3.
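For example, a pinned Clio installation could look like the following (an illustrative command built from the versions stated above, not taken from the official docs):

pip install acumos==0.8.0 acumos-model-runner==0.2.3 onnx4acumos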
onnx4acumos Tutorial
This tutorial explains how to on-board an ONNX model on an Acumos platform with micro-service creation. It’s meant to be followed linearly, and some code snippets depend on earlier imports and objects. Full ONNX Python client examples are available in the /acumos-onnx-client/acumos-package/onnx4acumos directory of the Acumos onnx client repository.
We assume that you have already installed onnx4acumos package.
On-boarding an ONNX Model on an Acumos Platform
Clone the acumos-onnx-client repository from Gerrit:
git clone "ssh://your_gerrit_login@gerrit.acumos.org:29418/acumos-onnx-client" && scp -p -P 29418 your_gerrit_login@gerrit.acumos.org:hooks/commit-msg "acumos-onnx-client/.git/hooks/"
or
git clone "ssh://your_gerrit_login@gerrit.acumos.org:29418/acumos-onnx-client"
or from GitHub.
You will need the following two files for this tutorial:
The model located at /acumos-onnx-client/acumos-package/onnx4acumos/OnnxModels/super_resolution_zoo.onnx
A configuration file located at /acumos-onnx-client/acumos-package/onnx4acumos/Templates/onnx4acumos.ini
This configuration file is mandatory if you want to push your model to Acumos by CLI (CLI on-boarding). You must copy this file locally if you want to on-board your own models (note that the onnx4acumos folder name can sometimes be confused with the onnx4acumos command).
onnx4acumos.ini looks like:
[certificates]
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
[proxy]
https_proxy: socks5h://127.0.0.1:8886/
http_proxy: socks5h://127.0.0.1:8886/
[session]
push_api: https://acumos/onboarding-app/v2/models
certificates: the location of the Acumos certificates generated during installation. You can also leave this parameter empty (CURL_CA_BUNDLE:); in that case you will just receive a warning.
proxy: the proxy you use to reach your Acumos platform. If you don’t use a proxy, you can also leave this parameter empty (https_proxy:).
session: the on-boarding model push API URL, available in the Acumos GUI on the ON-BOARDING MODEL page.
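For illustration only, here is a minimal sketch (not onnx4acumos internal code) of how these three sections can be read with Python’s standard configparser, which accepts the key: value syntax used above:

# Sketch: reading onnx4acumos.ini with the standard library.
import configparser

config = configparser.ConfigParser()
config.read("onnx4acumos.ini")

ca_bundle = config["certificates"]["CURL_CA_BUNDLE"]  # may be empty
https_proxy = config["proxy"]["https_proxy"]          # may be empty
push_api = config["session"]["push_api"]
print(push_api)  # e.g. https://acumos/onboarding-app/v2/models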
To on-board the super_resolution_zoo model by CLI on an Acumos platform with micro-service activation, use the following command line:
onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -push -ms -li "path to your license file" -deploy
In this command line, the -push parameter is used to on-board the ONNX model directly in Acumos (CLI on-boarding). You will be prompted to enter your on-boarding token: onboarding token = “your Acumos login”:“authentication token” (example: acumos_user:a2a6a9e8f4gbg3c147eq9g3h). The “authentication token” can be retrieved in the Acumos GUI in your personal settings.
The -ms parameter is used to launch the micro-service creation in Acumos right after the on-boarding. If -ms is omitted, the model will be on-boarded without micro-service generation (don’t worry, you can create the micro-service later in Acumos).
The -li parameter is used to on-board a license file alongside your model in Acumos in order to protect the model’s copyright. This parameter is optional. Please refer to the license management project in the Acumos wiki. You can find a license template in the doc folder of the acumos-onnx-client repo on GitHub.
The -deploy parameter is used to deploy the model automatically after the micro-service generation (based on the Jenkins server configuration set up in Acumos/SITE ADMIN/model deployment automation). By default deploy=False, so if -deploy is not mentioned in the command line the model will not be deployed. If -deploy is added in the command line and -ms has been omitted, the micro-service will be created and deployed anyway.
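For instance, this command (an illustrative variant of the one above) on-boards the model and, because -deploy implies micro-service creation, builds and deploys the micro-service even though -ms is absent:

onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -push -deploy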
To on-board the super_resolution_zoo model by web on an Acumos platform, follow the steps below.
First, you have to dump the super_resolution_zoo model locally:
onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -dump -f input/cat.jpg
The onnx4acumos.ini configuration file is optional when you dump your model bundle locally for web on-boarding purposes; however, it can be provided in the command line in order to copy it into the “ModelName” directory for later use (push using ModelName/ModelName_OnnxModelOnBoarding.py).
The command line above creates a “ModelName” directory (“super_resolution_zoo” in our case) containing all the files needed to test the ONNX model locally. The -f parameter is optional and is used to add an input data file to the ModelName_OnnxClient folder.
An Acumos model bundle is also created locally, ready to be on-boarded in Acumos manually (Web on-boarding). The -dump behavior is the default (the parameter can be omitted) and saves the bundle locally.
You can find the description of the “ModelName” directory contents below.
In this directory, you can find:
ModelName_OnnxModelOnboarding.py: Python file used to on-board a model in Acumos by CLI and/or to dump the model bundle locally.
Dumped model directory (model bundle): directory that contains all the files required by an Acumos platform.
Zipped model bundle (ModelName.zip): zip file (built from the dumped model directory, as sketched just after this list) ready to be on-boarded in Acumos.
ModelName_OnnxClient directory: directory that contains all the files necessary to create a client/server able to test & run your model.
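Since the zipped bundle is simply an archive of the dumped model directory, it could be recreated along these lines (a sketch using the super_resolution_zoo paths; the zip already ships with the directory, so this is only for illustration):

# Sketch: rebuild the zipped model bundle from the dumped model directory.
import shutil

shutil.make_archive(
    "super_resolution_zoo",  # produces super_resolution_zoo.zip
    "zip",
    root_dir="super_resolution_zoo/dumpedModel/super_resolution_zoo",
)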
The last thing to do is to drag and drop the zipped model bundle onto the “ON-BOARDING BY WEB” page of Acumos, or use the browse function to on-board your model.
How to test & run your ONNX model
This on-boarding client can also be used to test and run your ONNX model, whether or not you want to on-board it in Acumos. There are two main steps: first launch the model runner server, then fill the skeleton client file to create the ONNX client.
We assume that:
You have installed acumos_model_runner package.
You have dumped the model bundle locally as explained above.
We use a client-server architecture to test and run ONNX models: first launch your model runner locally to create the server, then use a Python script as an ONNX client to interact with the server.
Launch model runner server
The local server part can be started quite simply as follows:
acumos_model_runner super_resolution_zoo/dumpedModel/super_resolution_zoo
The Acumos model runner will also create a Swagger interface, available at localhost:3330.
Fill skeleton client file to create the ONNX client
You can find the description of the Python client skeleton file below.
This Python client skeleton file is available in the folder super_resolution_zoo/super_resolution_zoo_OnnxClient.
All the steps needed to fill this Python client skeleton are described below. You must fill the parts between two lines of “*******”; you just have to copy/paste the following code snippets into the right places in your skeleton file.
First import your own needed libraries:
# Import your own needed libraries below
"**************************************"
import io                 # used by the postprocessing snippet below
import numpy as np        # used by the postprocessing snippet below
from numpy import clip
import PIL.Image
# torch imports
import torchvision.transforms as transforms
"**************************************"
Second, define your own needed methods:
# Define your own needed method below
"**************************************"
def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
"**************************************"
Third, define Preprocessing method:
# Implement the management of the ONNX data preprocessing below.
# The "preProcessingOutput" variable must contain the preprocessing result, with the type found in the run_xx_OnnxModel method signature below
"*************************************************************************************************"
global img_cb, img_cr
img = PIL.Image.open(preProcessingInput)
resize = transforms.Resize([224, 224])
img = resize(img)
img.show()
img_ycbcr = img.convert('YCbCr')
img_y, img_cb, img_cr = img_ycbcr.split()
to_tensor = transforms.ToTensor()
img_y = to_tensor(img_y)
img_y.unsqueeze_(0)
preProcessingOutput = to_numpy(img_y)
"**************************************************************************************************"
# "preProcessingOutput" variable assignment with the preprocessing result
Fourth, define Postprocessing method:
# Implement the management of the ONNX data postprocessing below.
# The "postProcessingInput" variable must contain the ONNX model result, with the type found in the method signature below
"*************************************************************************************************"
global img_cb, img_cr
img_out_y = postProcessingInput[0]
img_out_y = np.array((img_out_y[0] * 255.0))
img_out_y = clip(img_out_y,0, 255)
img_out_y = PIL.Image.fromarray(np.uint8(img_out_y), mode='L')
final_img = PIL.Image.merge(
"YCbCr", [
img_out_y,
img_cb.resize(img_out_y.size, PIL.Image.BICUBIC),
img_cr.resize(img_out_y.size, PIL.Image.BICUBIC),
]).convert("RGB")
f=io.BytesIO()
final_img.save(f,format='jpeg')
imageOutputData = f.getvalue()
final_img.show()
postProcessingResult = imageOutputData
"*************************************************************************************************"
And finally, redefine the REST URL if necessary (by default, localhost on port 3330):
restURL = "http://localhost:3330/model/methods/run_super_resolution_zoo_OnnxModel"
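As an illustration of what the generated client does with this URL, here is a minimal sketch of the HTTP exchange; it assumes the runner accepts protobuf payloads on its method endpoints, and the raw file bytes below are only a placeholder for the protobuf message that the skeleton actually builds and serializes:

# Sketch of the HTTP exchange with the model runner (illustrative only).
import requests

restURL = "http://localhost:3330/model/methods/run_super_resolution_zoo_OnnxModel"

with open("input/cat.jpg", "rb") as f:
    payload = f.read()  # placeholder: the real client sends a serialized protobuf message

response = requests.post(
    restURL,
    data=payload,
    headers={"Content-Type": "application/vnd.google.protobuf"},  # assumed content type
)
print(response.status_code)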
The final name of the filled skeleton ModelName_OnnxClientSkeleton.py could be ModelName_OnnxClient.py (the same name without Skeleton; super_resolution_zoo_OnnxClient.py in our example).
The filled Python client skeleton file can be found in the acumos-onnx-client repository: acumos-onnx-client/acumos-package/onnx4acumos/FilledClientSkeletonsExamples/super_resolution_zoo_OnnxClient.py.
Remark: to test super_resolution_zoo you must have an X server running on your local system.
Command lines
You can find below all the command lines needed to test and run the super_resolution_zoo ONNX model:
onnx4acumos super_resolution_zoo.onnx onnx4acumos.ini -f InputData/cat.jpg
acumos_model_runner super_resolution_zoo/dumpedModel/super_resolution_zoo/ ## Launch the model runner server
python super_resolution_zoo_OnnxClient.py -f input/cat.jpg ## Launch client and send input data
super_resolution_zoo_Model example
More Examples
Below are some additional examples. Pre- and post-processing methods are available in the GitHub folder: onnx/models.
GoogLeNet
You can find all the command lines for the GoogLeNet example below:
onnx4acumos OnnxModels/GoogleNet.onnx onnx4acumos.ini -f InputData/car4.jpg
acumos_model_runner GoogLeNet/dumpedModel/GoogleNet/ ## Launch the model runner server
cd GoogLeNet/GoogLeNet_OnnxClient
python GoogLeNet_OnnxClient.py -f input/car4.jpg ## Launch client and send input data
In our example above, you can try various input files:
python GoogLeNet_OnnxClient.py -f input/car4.jpg
python GoogLeNet_OnnxClient.py -f input/BM4.jpeg
python GoogLeNet_OnnxClient.py -f input/espresso.jpeg
python GoogLeNet_OnnxClient.py -f input/cat.jpg
python GoogLeNet_OnnxClient.py -f input/pesan3.jpg
Emotion Ferplus Model example
python emotion_ferplus_model_OnnxClient.py -f input/angryMan.png
python emotion_ferplus_model_OnnxClient.py -f input/sadness.png
python emotion_ferplus_model_OnnxClient.py -f input/happy.jpg
python emotion_ferplus_model_OnnxClient.py -f input/joker.jpg
That’s all :-)
Acumos ONNX Client Release Notes
v1.0.5, 30 June 2021
Minor documentation change ACUMOS-4337
v1.0.4, 08 June 2021
Fix failure with new onnx version 1.9.0 ACUMOS-4337
v1.0.3, 27 April 2021
Adding deploy management ACUMOS-4308
Adding license file management ACUMOS-4319
Avoid the use of the configuration file when the model bundle is dumped ACUMOS-4317
Fix typo “Exemples” in folder ACUMOS-4318
Fix typo “and/” in index file ACUMOS-4320
v1.0.0, 22 January 2021
Creation of onnx4acumos ACUMOS-3101
Acumos ONNX Client Developer Guide
Testing
We use a combination of tox, pytest, and flake8 to test acumos_onnx_client. Code which is not PEP8 compliant (aside from E501) will be considered a failing test. You can use tools like autopep8 to “clean” your code as follows:
$ pip install autopep8
$ cd acumos-onnx-client
$ autopep8 -r --in-place --ignore E501 acumos_onnx_client/ testing/ examples/
Run tox directly:
$ cd acumos-onnx-client
$ export WORKSPACE=$(pwd) # env var normally provided by Jenkins
$ tox
You can also specify certain tox environments to test:
$ tox -e py36 # only test against Python 3.6
$ tox -e flake8 # only lint code
A set of integration tests is also available in acumos-package/testing/integration_tests. To run them, use acumos-package/testing/tox-integration.ini as the tox config (-c flag); on-boarding tests will be run with Python 3.6 to 3.9. You will need to set your user credentials and platform configuration in tox-integration.ini.
$ tox -c acumos-package/testing/integration_tests
Packaging
The RST files in the docs/ directory are used to publish HTML pages to ReadTheDocs.io and to build the package long description in setup.py. The symlink from the subdirectory acumos-package to the docs/ directory is required for the Python packaging tools. Those tools build a source distribution from files in the package root, the directory acumos-package. The MANIFEST.in file directs the tools to pull files from directory docs/, and the symlink makes it possible because the tools only look within the package root.
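For illustration, a MANIFEST.in directive of the following shape pulls the docs into the source distribution (the actual file in the repository may differ):

recursive-include docs *.rst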
Project details
Download files
Source Distribution: onnx4acumos-1.0.5.tar.gz
Built Distribution: onnx4acumos-1.0.5-py2.py3-none-any.whl
File details
Details for the file onnx4acumos-1.0.5.tar.gz
File metadata
- Download URL: onnx4acumos-1.0.5.tar.gz
- Upload date:
- Size: 27.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | e1e986f98ae9dfcf613f45ef1186f7107c6e93752fc9df0f8045a9f04b99ab50
MD5 | ef96db0c06ffbc61261c61571c7dbdf9
BLAKE2b-256 | fe031b906d6ca81ebb447bff6d7a5c791969c6dca30756481c55cb64de497115
File details
Details for the file onnx4acumos-1.0.5-py2.py3-none-any.whl
File metadata
- Download URL: onnx4acumos-1.0.5-py2.py3-none-any.whl
- Upload date:
- Size: 19.5 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | d3b4fdd2fa339358fa392b9ef78554db034f0c0a55a843f54e1db1535aa211e8
MD5 | ebe31e3b84a343022ce9d677f609ee1d
BLAKE2b-256 | 0954f916164e87973082700bc218f33bf942b0606a9f829787703e2fd7d4c0f7