tritony - Tiny configuration for Triton Inference Server
Key Features
- Simple configuration: only `$host:$port` and `$model_name` are required.
- Generates asynchronous requests with `asyncio.Queue`.
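A minimal usage sketch of the client, assuming a Triton server reachable at `0.0.0.0:8001` serving a model named `sample` (both are placeholders, and the exact `InferenceClient` signature may differ between tritony versions):

```python
import numpy as np

from tritony import InferenceClient

# Connect to a running Triton server over gRPC.
# "sample" and the address below are assumptions for illustration.
client = InferenceClient.create_with("sample", "0.0.0.0:8001", input_dims=1, protocol="grpc")

# tritony splits the input into batches and dispatches the
# requests asynchronously via asyncio.Queue under the hood.
result = client(np.random.rand(100).astype(np.float32))
```

Note that this requires a live Triton server with the corresponding model loaded; without one, the call will fail to connect.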
Requirements
$ pip install tritonclient[all]
Install
$ pip install tritony
Test
With Triton
docker run --rm \
    -p 8000-8002:8000-8002 \
    -v ${PWD}:/models \
    nvcr.io/nvidia/tritonserver:22.01-pyt-python-py3 \
    tritonserver --model-repository=/models
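The `/models` mount above must be a Triton model repository. A minimal layout for a hypothetical Python-backend model named `sample` might look like:

```
models/
└── sample/
    ├── config.pbtxt
    └── 1/
        └── model.py
```

with a `config.pbtxt` along these lines (names, dims, and types are illustrative only):

```
name: "sample"
backend: "python"
max_batch_size: 8
input [ { name: "INPUT0", data_type: TYPE_FP32, dims: [ -1 ] } ]
output [ { name: "OUTPUT0", data_type: TYPE_FP32, dims: [ -1 ] } ]
```

See the Triton model repository documentation for the full configuration schema.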
pytest -s tests/test_tritony.py
Example with image_client.py
- Follow steps in the official triton server documentation
# Download Images from https://github.com/triton-inference-server/server.git
python ./example/image_client.py --image_folder "./server/qa/images"