Triton Inference Server Model
A simple package to run inference with Triton Inference Server easily.

Installation

```shell
pip install trism
# Or install directly from GitHub:
pip install git+https://github.com/hieupth/trism
```
How to use
```python
import numpy as np
from trism import TritonModel

# Create the Triton model client.
model = TritonModel(
    model="my_model",      # Model name.
    version=0,             # Model version (0 = latest).
    url="localhost:8001",  # Triton server URL.
    grpc=True              # Use gRPC (True) or HTTP (False).
)

# View input/output metadata.
for inp in model.inputs:
    print(f"name: {inp.name}, shape: {inp.shape}, datatype: {inp.dtype}\n")
for out in model.outputs:
    print(f"name: {out.name}, shape: {out.shape}, datatype: {out.dtype}\n")

# Run inference.
outputs = model.run(data=[np.array(...)])
```
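Before calling `run()`, it can help to check that your NumPy arrays match the dtype and shape reported by `model.inputs`. The sketch below assumes a hypothetical model expecting a single float32 tensor with a dynamic batch dimension; the shape and dtype are illustrative, not part of the package:

```python
import numpy as np

# Hypothetical input spec: float32 tensor of shape (batch, 3, 224, 224),
# where -1 marks a dynamic batch dimension as Triton reports it.
expected_shape = (-1, 3, 224, 224)
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Validate dtype and shape against the (assumed) metadata before inference.
assert batch.dtype == np.float32
assert len(batch.shape) == len(expected_shape)
assert all(d == e or e == -1 for d, e in zip(batch.shape, expected_shape))

# With a running Triton server, you would then call:
# outputs = model.run(data=[batch])
```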
License
GNU AGPL v3.0.
Copyright © 2024 Hieu Pham. All rights reserved.