Generated OpenAPI client library for the open-inference-protocol
Open Inference Protocol OpenAPI Client
open-inference-openapi is a generated client library based on the OpenAPI protocol definition tracked in the open-inference/open-inference-protocol/ repository.
Installation
This package requires Python 3.8 or greater.
Install with your favorite tool from pypi.org/project/open-inference-openapi/
$ pip install open-inference-openapi
$ poetry add open-inference-openapi
A gRPC-based python client (open-inference-grpc) also exists for the Open Inference Protocol, and can be installed alongside this OpenAPI client, as both are distributed as namespace packages.
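Because both distributions install into the shared open_inference namespace package, the two clients can be imported side by side. A minimal sketch; the gRPC module path and class name below are assumptions based on this package's layout, so check the open-inference-grpc documentation for the exact import:

# Both packages contribute to the shared `open_inference` namespace,
# so installing one does not shadow the other.
from open_inference.openapi.client import OpenInferenceClient              # this package
from open_inference.grpc.client import OpenInferenceClient as GrpcClient  # assumed path; see open-inference-grpc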
Example
from open_inference.openapi.client import OpenInferenceClient, InferenceRequest

client = OpenInferenceClient(base_url='http://localhost:5002')

# Check that the server is ready, and that it has the iris model loaded
client.check_server_readiness()
client.read_model_metadata('mlflow-model')

# Make an inference request with two examples
pred = client.model_infer(
    "mlflow-model",
    request=InferenceRequest(
        inputs=[
            {
                "name": "input",
                "shape": [2, 4],
                "datatype": "FP64",
                "data": [
                    [5.0, 3.3, 1.4, 0.2],
                    [7.0, 3.2, 4.7, 1.4],
                ],
            }
        ]
    ),
)

print(repr(pred))
# InferenceResponse(
#     model_name="mlflow-model",
#     model_version=None,
#     id="580c30e3-f835-418f-bb17-a3074d42ad21",
#     parameters={"content_type": "np", "headers": None},
#     outputs=[
#         ResponseOutput(
#             name="output-1",
#             shape=[2, 1],
#             datatype="INT64",
#             parameters={"content_type": "np", "headers": None},
#             data=TensorData(__root__=[0.0, 1.0]),
#         )
#     ],
# )
Async versions of the same APIs are also available. Import AsyncOpenInferenceClient instead, then await any requests made.
from open_inference.openapi.client import AsyncOpenInferenceClient
client = AsyncOpenInferenceClient(base_url="http://localhost:5002")
await client.check_server_readiness()
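In a standalone script (outside a running event loop), wrap the calls in an asyncio entry point. A minimal sketch using only the calls shown above:

import asyncio

from open_inference.openapi.client import AsyncOpenInferenceClient


async def main() -> None:
    # Same API surface as the synchronous client; each call is awaited.
    client = AsyncOpenInferenceClient(base_url="http://localhost:5002")
    await client.check_server_readiness()
    metadata = await client.read_model_metadata("mlflow-model")
    print(metadata)


asyncio.run(main())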
Dependencies
The open-inference-openapi Python package relies on:

- pydantic - Message formatting, structure, and validation.
- httpx - Implementation of the underlying HTTP transport.
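Because messages are pydantic models, malformed requests fail on the client before any HTTP traffic is sent. A small sketch of that behavior, reusing the field names from the example above (the exact error text depends on the generated model definitions):

from pydantic import ValidationError

from open_inference.openapi.client import InferenceRequest

try:
    # "shape" is deliberately malformed: a string instead of a list of ints.
    InferenceRequest(
        inputs=[
            {
                "name": "input",
                "shape": "not-a-shape",
                "datatype": "FP64",
                "data": [1.0],
            }
        ]
    )
except ValidationError as exc:
    # pydantic reports which field failed and why, with no request sent.
    print(exc)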
Contribute
This client is largely generated automatically by fern, with a small amount of build post-processing in build.py.

Run python build.py to build this package. It will:

- Download fern/openapi/open_inference_rest.yaml from the open-inference/open-inference-protocol/ repository if it is not found locally
- Run fern generate to create the python client (the fern CLI must be installed: npm install --global fern-api)
- Postprocess the generated code to correctly implement the recursive TensorData model (see the sketch after this list)
- Prepend the Apache 2.0 License preamble
- Format with black
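The TensorData step exists because recursive schemas (a tensor holds either scalar values or nested tensors) are awkward for code generators. A hypothetical pydantic v1 sketch of such a recursive model, shown only to illustrate the shape of the fix, not the generated code itself:

from typing import List, Union

from pydantic import BaseModel


class TensorData(BaseModel):
    # A tensor payload is either a flat list of scalars or a nested list
    # of tensors; pydantic v1 expresses this with a recursive root type.
    # Scalars are shown as float for brevity.
    __root__: List[Union[float, "TensorData"]]


# Resolve the forward reference so nested lists validate recursively,
# e.g. TensorData(__root__=[[5.0, 3.3], [7.0, 3.2]]).
TensorData.update_forward_refs()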
If you want to contribute to the open-inference-protocol itself, please create an issue or PR in the open-inference/open-inference-protocol repository.
License
By contributing to the Open Inference Protocol Python client repository, you agree that your contributions will be licensed under its Apache 2.0 License.
Hashes for open_inference_openapi-2.0.0a1.tar.gz (source distribution)

| Algorithm | Hash digest |
|---|---|
| SHA256 | 09f78ebc0f5490c23493a1cda927aa57c16bbcd302d623b633e75a37d153409b |
| MD5 | 57d9bf8b1b5448ff3f181ab083932e26 |
| BLAKE2b-256 | 1b8281efaf0739b7e09bc8cf16673587587865968c3fa78073c03b5894373f6b |
Hashes for open_inference_openapi-2.0.0a1-py3-none-any.whl (built distribution)

| Algorithm | Hash digest |
|---|---|
| SHA256 | beecf96090f191c93b80d6cb06754f342a0f4347e1ddc3b7430e62f89bd1dd14 |
| MD5 | 8bc28b4bd782b3f7eb5d0987f28b414e |
| BLAKE2b-256 | eb707bfd34a672c1ed6189a7cb1d6f0454df3e1e75a8d40577a422be3dbfb0c7 |