
A lightweight HTTP client library for communicating with NVIDIA Triton Inference Server (with Pyodide support in the browser)

Project description

Triton HTTP Client for Pyodide

A Python HTTP client library and utilities for communicating with Triton Inference Server (based on tritonclient from NVIDIA), with Pyodide support in the browser.

This is a simplified implementation of NVIDIA's Triton client. It works both in the browser with Pyodide and in native Python. Only the HTTP client is implemented; most of the API remains similar, but all calls are async, and additional utility functions are provided.

Installation

To use it in native CPython, you can install the package by running:

pip install pyotritonclient

For a Pyodide-based Python environment, for example JupyterLite or the Pyodide console, you can install the client by running the following Python code:

import micropip
await micropip.install("pyotritonclient")

Usage

Basic example

To execute a model, we provide utility functions that make it much easier:

import numpy as np
from pyotritonclient import execute

# create fake input tensors
input0 = np.zeros([2, 349, 467], dtype='float32')
# run inference
results = await execute(
    inputs=[input0, {"diameter": 30}],
    server_url='https://ai.imjoy.io/triton',
    model_name='cellpose-python'
)

The example above assumes you are running the code in a Jupyter notebook or another environment that supports top-level await. If you are trying the example in a normal Python script, wrap the code in an async function and run it with asyncio as follows:

import asyncio
import numpy as np
from pyotritonclient import execute

async def run():
    results = await execute(
        inputs=[np.zeros([2, 349, 467], dtype='float32'), {"diameter": 30}],
        server_url='https://ai.imjoy.io/triton',
        model_name='cellpose-python'
    )
    print(results)

asyncio.run(run())

You can also access the lower-level API; see the test example.

You can also find the official client examples, which demonstrate how to use the package to issue requests to a Triton inference server. However, please note that you will need to change the HTTP client code to the async style: for example, instead of calling client.infer(...), you need to call await client.infer(...).
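To illustrate this sync-to-async call-site change, here is a minimal sketch using a hypothetical stand-in class (FakeClient is not part of pyotritonclient; it only mimics the shape of an async infer method):

```python
import asyncio

# FakeClient is a hypothetical stand-in for the HTTP client class;
# it only illustrates the sync-to-async call-site change.
class FakeClient:
    async def infer(self, model_name, inputs):
        await asyncio.sleep(0)  # stand-in for the HTTP round trip
        return {"model": model_name, "outputs": inputs}

async def main():
    client = FakeClient()
    # official tritonclient (sync): result = client.infer("model", [1, 2])
    # pyotritonclient (async):      result = await client.infer("model", [1, 2])
    return await client.infer("model", [1, 2])

result = asyncio.run(main())
```

The rest of the request-building code (inputs, outputs, shapes) stays the same; only the call sites gain await.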

The HTTP client code is forked from the triton client git repo at commit b3005f9db154247a4c792633e54f25f35ccadff0.

Using the sequence executor with stateful models

To simplify working with stateful models, we also provide the SequenceExcutor, which makes it easier to run models in a sequence.

from pyotritonclient import SequenceExcutor


seq = SequenceExcutor(
  server_url='https://ai.imjoy.io/triton',
  model_name='cellpose-train',
  sequence_id=100
)
for (image, labels, info) in train_samples:
  inputs = [
    image.astype('float32'),
    labels.astype('float32'),
    {"steps": 1, "resume": True}
  ]
  result = await seq.step(inputs)

result = await seq.end(inputs)

Note that the example above calls seq.end() with the last inputs again to end the sequence. If you want to use different inputs for the final execution, pass them explicitly: result = await seq.end(inputs).
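The step/end flow can be sketched with a hypothetical stand-in executor (FakeSequenceExecutor is not part of pyotritonclient): each sample goes through step(), and end() closes the sequence, here re-sending the last inputs as in the example above.

```python
import asyncio

# FakeSequenceExecutor is a hypothetical stand-in that only records calls;
# the real SequenceExcutor sends each request to the server with the same
# sequence_id so Triton routes them to the same model instance.
class FakeSequenceExecutor:
    def __init__(self):
        self.calls = []

    async def step(self, inputs):
        self.calls.append(("step", inputs))
        return {"step": len(self.calls)}

    async def end(self, inputs):
        self.calls.append(("end", inputs))
        return {"done": True}

async def main():
    seq = FakeSequenceExecutor()
    samples = [[1], [2], [3]]
    for s in samples:
        inputs = s
        await seq.step(inputs)
    # close the sequence, re-sending the last inputs
    final = await seq.end(inputs)
    return seq.calls, final

calls, final = asyncio.run(main())
```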

For a small batch of data, you can also run it like this:

from pyotritonclient import SequenceExcutor

seq = SequenceExcutor(
  server_url='https://ai.imjoy.io/triton',
  model_name='cellpose-train',
  sequence_id=100
)

# a list of inputs
inputs_batch = [[
  image.astype('float32'),
  labels.astype('float32'),
  {"steps": 1, "resume": True}
] for (image, labels, _) in train_samples]

def on_step(i, result):
  """Function called on every step"""
  print(i)

results = await seq(inputs_batch, on_step=on_step)

Server setup

Since we access the server from the browser environment, which typically has more security restrictions, it is important that the server is configured to allow browser access.

Please make sure the following aspects are configured:

  • The server must provide HTTPS endpoints instead of HTTP
  • The server should send the following headers:
    • Access-Control-Allow-Headers: Inference-Header-Content-Length,Accept-Encoding,Content-Encoding,Access-Control-Allow-Headers
    • Access-Control-Expose-Headers: Inference-Header-Content-Length,Range,Origin,Content-Type
    • Access-Control-Allow-Methods: GET,HEAD,OPTIONS,PUT,POST
  • Access-Control-Allow-Origin: * (optional; set this, or a specific allowed origin, depending on whether you want to support cross-origin browser access)
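As a sketch of one way to satisfy these requirements (assuming an nginx reverse proxy with TLS in front of Triton; the hostname and upstream port are placeholders to adapt to your deployment):

```nginx
server {
    listen 443 ssl;
    server_name triton.example.com;  # hypothetical hostname

    location / {
        # CORS headers required for browser-based clients
        add_header Access-Control-Allow-Headers "Inference-Header-Content-Length,Accept-Encoding,Content-Encoding,Access-Control-Allow-Headers" always;
        add_header Access-Control-Expose-Headers "Inference-Header-Content-Length,Range,Origin,Content-Type" always;
        add_header Access-Control-Allow-Methods "GET,HEAD,OPTIONS,PUT,POST" always;
        add_header Access-Control-Allow-Origin "*" always;

        proxy_pass http://localhost:8000;  # Triton's default HTTP port
    }
}
```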

