
ai_power base stone

Introduction

The base stone of AI_power: it maintains all inference code for AI_Power models.

Wrapper

  • Supplies inference wrappers for different model formats, including ONNX, TensorRT, and Torch JIT;
  • Supports the different ONNX Runtime Execution Providers (EPs): cpu/gpu/trt/trt16/int8;
  • Provides high-level inference wrappers for converted MMLab models, including MMPose and MMDet;
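The provider shorthands above (cpu/gpu/trt/trt16/int8) correspond to ONNX Runtime Execution Provider configurations. Below is a hypothetical sketch of how such shorthands could map to an onnxruntime `providers` list; the function name `resolve_providers` and this mapping are assumptions for illustration, not apstone's actual internals.

```python
# Hypothetical mapping from the cpu/gpu/trt/trt16/int8 shorthands to
# ONNX Runtime Execution Provider configurations (illustration only;
# apstone's real implementation may differ).
def resolve_providers(provider: str):
    """Map a shorthand provider name to an onnxruntime `providers` list."""
    if provider == 'cpu':
        return ['CPUExecutionProvider']
    if provider == 'gpu':
        return ['CUDAExecutionProvider', 'CPUExecutionProvider']
    if provider in ('trt', 'trt16', 'int8'):
        # TensorRT EP, optionally with FP16 or INT8 precision enabled.
        trt_options = {
            'trt_fp16_enable': provider in ('trt16', 'int8'),
            'trt_int8_enable': provider == 'int8',
        }
        return [('TensorrtExecutionProvider', trt_options),
                'CUDAExecutionProvider', 'CPUExecutionProvider']
    raise ValueError(f'unknown provider: {provider}')
```

The resulting list can be passed as the `providers` argument of `onnxruntime.InferenceSession`, with later entries acting as fallbacks.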

Model Convert

  • torch2jit, torch2onnx, etc.
  • detectron2 to ONNX
  • ModelScope to ONNX
  • onnx2simple2trt (ONNX to simplified ONNX to TensorRT)
  • tf2pb2onnx (TensorFlow to frozen pb to ONNX)

Model Tools

  • Torch model editing
  • ONNX model shape inspection and speed tests (across different EPs)
  • Common scripts from onnxruntime

Usage

ONNX model speed test

from apstone import ONNXModel

onnx_p = 'pretrain_models/sr_lib/realesr-general-x4v3-dynamic.onnx'
input_dynamic_shape = (1, 3, 96, 72)  # or None if the model's input shape is static
# available providers: cpu, gpu, trt, trt16, int8
ONNXModel(onnx_p, provider='cpu', debug=True, input_dynamic_shape=input_dynamic_shape).speed_test()
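In the same spirit as speed_test(), a generic latency harness can be sketched with the standard library alone: warm-up runs first, then an average over timed iterations. The helper name `measure_latency` is an assumption for illustration; apstone's actual implementation and reported metrics may differ.

```python
import time

# Generic latency-measurement sketch (illustration only, not apstone's code).
def measure_latency(run, warmup=3, iters=10):
    """Call `run()` repeatedly and return the average latency in milliseconds."""
    for _ in range(warmup):          # warm-up runs are excluded from timing
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0
```

With an onnxruntime session, it would be used as, e.g., `measure_latency(lambda: session.run(None, {'input': x}))`, once per Execution Provider to compare them.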

Install

pip install apstone

Envs

Execution Provider    Needs
cpu                   pip install onnxruntime
gpu                   pip install onnxruntime-gpu
trt/trt16/int8        onnxruntime-gpu compiled with the TensorRT EP
TensorRT              pip install tensorrt pycuda
Torch JIT             install pytorch

Source distribution: apstone-0.0.8.tar.gz (41.3 kB)
