
Run inference with Triton Inference Server easily.

Project description

Triton Inference Server Model

A simple package to run inference with Triton Inference Server easily.

pip install trism
# Or
pip install git+https://github.com/hieupth/trism
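
Before wiring up the model client, it can help to confirm the server is reachable. A minimal sketch, assuming Triton is serving gRPC on its default port 8001 and that NVIDIA's tritonclient package is available (trism presumably builds on it; install it with pip install tritonclient[grpc] if not):

import tritonclient.grpc as grpcclient

# Connect to the gRPC endpoint (Triton serves gRPC on 8001 and HTTP on 8000 by default).
client = grpcclient.InferenceServerClient(url="localhost:8001")
# is_server_live() returns True once the server is up and accepting requests.
print(client.is_server_live())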

How to use

import numpy as np
from trism import TritonModel

# Create the Triton model client.
model = TritonModel(
  model="my_model",     # Model name.
  version=0,            # Model version.
  url="localhost:8001", # Triton Server URL.
  grpc=True             # Use gRPC (True) or HTTP (False).
)
# View metadata.
for inp in model.inputs:
  print(f"name: {inp.name}, shape: {inp.shape}, datatype: {inp.dtype}\n")
for out in model.outputs:
  print(f"name: {out.name}, shape: {out.shape}, datatype: {out.dtype}\n")
# Inference.
outputs = model.run(data=[np.array(...)])
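
Putting the pieces together, here is a minimal end-to-end sketch. The model name, input shape, and datatype below are hypothetical placeholders; read the real ones from model.inputs for your deployment. As in the snippet above, run() is shown taking a list of arrays, presumably one per declared input in declaration order:

import numpy as np
from trism import TritonModel

model = TritonModel(model="my_model", version=0, url="localhost:8001", grpc=True)

# Hypothetical input: a single FP32 tensor of shape [1, 3, 224, 224].
# Replace with the names, shapes, and datatypes reported by model.inputs.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Pass one array per declared input.
outputs = model.run(data=[batch])
print(outputs)

If the call succeeds, outputs holds the server's response for each declared output; print it (or inspect model.outputs) to see how the package structures the result.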

License

GNU AGPL v3.0.
Copyright © 2024 Hieu Pham. All rights reserved.

Download files

Download the file for your platform.

Source Distribution

trism-0.0.1.post1.tar.gz (16.0 kB)

Built Distribution

trism-0.0.1.post1-py3-none-any.whl (16.4 kB)
