Deepomatic RPC python client

Project description

Table of contents

Deepomatic Remote Procedure Call

Deepomatic Remote Procedure Call.

This remote procedure call library was built to help you interact with our on-premises inference service. You might also want to use our command-line interface, deepomatic-cli.

Installation

Online

pip install deepomatic-rpc

Offline

On a machine with internet access, download the package and its dependencies with the command below:

mkdir deepomatic-rpc
# --platform any forces downloading packages compatible with any OS
pip download --platform any --only-binary=:all: -d ./deepomatic-rpc deepomatic-rpc

Then save the deepomatic-rpc directory on the storage device of your choice.

Now retrieve this directory on the offline machine and install the package:

pip install --no-index --find-links ./deepomatic-rpc ./deepomatic-rpc/deepomatic_rpc-*-py2.py3-none-any.whl

Usage

Getting started

Instantiate the client and queues

from deepomatic.rpc.client import Client

# Replace placeholder variables with yours
command_queue_name = 'my_command_queue'
recognition_version_id = 123
amqp_url = 'amqp://myuser:mypassword@localhost:5672/myvhost'

# Instantiate client
client = Client(amqp_url)

# Do the following for each stream

# Declare lasting command queue
command_queue = client.new_queue(command_queue_name)

# Declare a response queue and a consumer to get responses.
# The consumer is linked to the response_queue.
# If the queue_name parameter is provided, a durable queue is declared;
# otherwise a unique temporary queue is created.
response_queue, consumer = client.new_consuming_queue()

# Don't forget to clean up when you are done sending requests!

Send recognition request

from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc.helpers.v07_proto import create_images_input_mix, create_recognition_command_mix

# Create a recognition command mix
command_mix = create_recognition_command_mix(recognition_version_id, max_predictions=100, show_discarded=False)

# Create one image input
image_input = v07_ImageInput(source='https://static.wamiz.fr/images/animaux/chats/large/bengal.jpg'.encode())

# Wrap it inside a generic input mix
input_mix = create_images_input_mix([image_input])

# Send the request
correlation_id = client.command(command_queue_name, response_queue.name, command_mix, input_mix)

# Wait for the response: `timeout=float('inf')` or `timeout=-1` waits forever, `timeout=None` is non-blocking
response = consumer.get(correlation_id, timeout=5)

# get_labelled_output() is a shortcut that returns the predictions matching the command mix you used,
# and raises a ServerError if the worker reported an error. It covers most cases; if it does not fit
# your needs, see the Response class. You can also handle results and errors yourself with
# `response.to_result_buffer()`.
labels = response.get_labelled_output()
predicted = labels.predicted[0]  # Predicted is ordered by score
print("Predicted label {} with score {}".format(predicted.label_name, predicted.score))
# If show_discarded was True, you can read `labels.discarded` to see which labels had a low confidence.
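
The response handling above can be pictured with plain data structures. The stubs below are hypothetical stand-ins, not the real deepomatic.rpc objects; they only mirror the fields used above (`label_name`, `score`, `predicted`, `discarded`), with `predicted` assumed ordered by descending score as documented:

```python
from collections import namedtuple

# Hypothetical stand-ins for the objects returned by get_labelled_output();
# the real library provides these, the stubs only mirror the fields used above.
Prediction = namedtuple('Prediction', ['label_name', 'score'])
Labels = namedtuple('Labels', ['predicted', 'discarded'])

def summarize(labels, min_score=0.0):
    """Return (label_name, score) pairs above min_score, best first.

    `labels.predicted` is assumed to be ordered by descending score.
    """
    return [(p.label_name, p.score) for p in labels.predicted if p.score >= min_score]

labels = Labels(
    predicted=[Prediction('bengal', 0.97), Prediction('tabby', 0.62)],
    discarded=[Prediction('dog', 0.03)],
)

best = summarize(labels)[0]
print("Predicted label {} with score {}".format(*best))  # Predicted label bengal with score 0.97
```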

Stream and cleanup

When you are done with a stream, you should clean up your consuming queues.

  • If your program stops right after, the consumer gets cancelled and the queue is automatically removed after 2 hours of inactivity (only if it is a unique temporary queue).
  • If your program is a long-running job, the broker may remove the queue and cancel the consumer after 2 hours of inactivity, and the client might redeclare both after a broker error.

Calling client.remove_consuming_queue() therefore removes the queue and makes sure the consumer is cancelled and not redeclared later:

client.remove_consuming_queue(response_queue, consumer)

You can also remove a queue that has no consumer:

client.remove_queue(queue)

Instead of calling new_consuming_queue() with no queue_name parameter and then remove_consuming_queue(), you can use the context-manager version:

with client.tmp_consuming_queue() as (response_queue, consumer):
    # this creates a temporary queue alive for the rest of this scope
    # do your inference requests
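
tmp_consuming_queue() follows the standard Python context-manager pattern: create the resource on entry and guarantee cleanup on exit, even if the body raises. A generic sketch of that pattern (not the library's code), with dummy callbacks standing in for queue declaration and removal:

```python
from contextlib import contextmanager

@contextmanager
def tmp_resource(create, destroy):
    """Generic sketch of the pattern behind tmp_consuming_queue():
    create a resource, yield it, and always clean it up afterwards."""
    resource = create()
    try:
        yield resource
    finally:
        destroy(resource)

# Dummy create/destroy callbacks standing in for
# new_consuming_queue() / remove_consuming_queue():
events = []
with tmp_resource(lambda: events.append('declared') or 'queue',
                  lambda r: events.append('removed')):
    events.append('consuming')
print(events)  # ['declared', 'consuming', 'removed']
```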

If you don't want to deal with the response queue and consumer yourself, we provide a high-level class, RPCStream. By default it saves all correlation IDs so that you can call get_next_response() to get responses in the same order you pushed the requests:

from deepomatic.rpc.helpers.proto import create_v07_images_command

serialized_buffer = create_v07_images_command([image_input], command_mix)

with client.new_stream(command_queue_name) as stream:
    # The stream internally saves the correlation_id so that it can retrieve responses in order.
    # Call get_next_response() as many times as send_binary(), otherwise the internal
    # list of correlation IDs will keep growing.
    stream.send_binary(serialized_buffer)
    response = stream.get_next_response(timeout=1)
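
The ordering guarantee can be pictured as a FIFO buffer of correlation IDs: each send_binary() appends an ID, and each get_next_response() pops the oldest one and waits for that specific response. A minimal, hypothetical stand-in (not the library implementation; responses come from an in-memory dict instead of a broker):

```python
from collections import deque
import itertools

class FifoStream:
    """Illustrative sketch of RPCStream's keep_response_order behaviour."""
    def __init__(self, responses):
        self._responses = responses   # correlation_id -> response
        self._pending = deque()       # FIFO of sent correlation IDs
        self._ids = itertools.count()

    def send_binary(self, payload):
        correlation_id = next(self._ids)
        self._pending.append(correlation_id)
        return correlation_id

    def get_next_response(self):
        # Pop the oldest correlation ID, so responses come back in send order
        correlation_id = self._pending.popleft()
        return self._responses[correlation_id]

stream = FifoStream(responses={0: 'first', 1: 'second'})
stream.send_binary(b'request-0')
stream.send_binary(b'request-1')
print(stream.get_next_response())  # first
print(stream.get_next_response())  # second
```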

You might also want to handle response order yourself; in that case, create the stream like this:

# with keep_response_order=False, the stream will not buffer correlation_ids
with client.new_stream(command_queue_name, keep_response_order=False) as stream:
    correlation_id = stream.send_binary(serialized_buffer)
    # directly access the stream's consumer to retrieve a specific response
    response = stream.consumer.get(correlation_id, timeout=1)

IMPORTANT: If you don't use the with statement, you must call stream.close() at the end to clean up the consumer and response queue.
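
Without the with statement, the usual way to guarantee that cleanup is a try/finally block, so close() runs even if a request raises. Sketched here with a dummy object, since a real stream needs a broker:

```python
class DummyStream:
    """Stand-in for an RPCStream-like object with a close() method."""
    def __init__(self):
        self.closed = False

    def send_binary(self, payload):
        if self.closed:
            raise RuntimeError("stream already closed")

    def close(self):
        self.closed = True

stream = DummyStream()
try:
    stream.send_binary(b'payload')
finally:
    # Always clean up the consumer and response queue, even on error
    stream.close()
print(stream.closed)  # True
```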

Advanced

Shortcuts

  • You can skip create_images_input_mix and send the image_input list directly via client.v07_images_command, which calls create_images_input_mix internally:
correlation_id = client.v07_images_command(command_queue_name, response_queue.name, [image_input], command_mix)
  • Create a workflow command mix. The recognition_version_id is deduced, but the command queue name must match the recognition in the workflows.json. Note that it does not allow specifying show_discarded or max_predictions:
from deepomatic.rpc.helpers.v07_proto import create_workflow_command_mix
command_mix = create_workflow_command_mix()
  • Create an inference command mix; the response will be a raw tensor:
from deepomatic.rpc.helpers.v07_proto import create_inference_command_mix
output_tensors = ['prod']
command_mix = create_inference_command_mix(output_tensors)
  • Wait on multiple correlation IDs at once:
from deepomatic.rpc.response import wait_responses
# Wait for responses, `timeout=float('inf')` or `timeout=-1` for infinite wait
responses, pending = wait_responses(consumer, correlation_ids, timeout=10)

print(responses)
# prints [(0, response), (1, response), (2, response)]
# 0, 1, 2 are the positions in the correlation_ids list, in case you want to retrieve the original correlation_id
# the list is sorted by position to keep the same order as correlation_ids
# if the timeout was not reached, len(responses) == len(correlation_ids)

print(pending)
# should be empty if the timeout was not reached
# otherwise, a sorted list of the positions of correlation IDs that did not get a response
# e.g. if it prints [3, 5], then correlation_ids[3] and correlation_ids[5] did not get a response in time
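
The documented (responses, pending) shape can be mimicked over a dict of already-arrived responses. This is an illustrative stand-in, not the library's wait_responses, and it assumes `arrived` maps correlation IDs to the responses received before the timeout:

```python
def wait_responses_sketch(arrived, correlation_ids):
    """Illustrative stand-in for wait_responses().

    Returns (responses, pending) where responses is a position-sorted list
    of (position, response) pairs and pending is the sorted list of
    positions that got no response.
    """
    responses = []
    pending = []
    for position, correlation_id in enumerate(correlation_ids):
        if correlation_id in arrived:
            responses.append((position, arrived[correlation_id]))
        else:
            pending.append(position)
    return responses, pending

correlation_ids = ['id-a', 'id-b', 'id-c']
responses, pending = wait_responses_sketch({'id-a': 'r0', 'id-c': 'r2'}, correlation_ids)
print(responses)  # [(0, 'r0'), (2, 'r2')]
print(pending)    # [1]
```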

Image input examples

  • Create an image input with a bounding box:
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc import BBox
# Coordinates between 0 and 1
bbox = BBox(xmin=0.3, xmax=0.8, ymin=0.1, ymax=0.9)
image_input = v07_ImageInput(source='https://static.wamiz.fr/images/animaux/chats/large/bengal.jpg'.encode(),
                             bbox=bbox)
  • Create an image input with a polygon selection:
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc import Point
# Coordinates between 0 and 1; at least 3 points are needed
polygon = [Point(x=0.1, y=0.1), Point(x=0.9, y=0.1), Point(x=0.5, y=0.9)]
image_input = v07_ImageInput(source='https://static.wamiz.fr/images/animaux/chats/large/bengal.jpg'.encode(),
                             polygon=polygon)
  • Create an image input from a file on the disk:
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc.helpers.proto import binary_source_from_img_file
binary_content = binary_source_from_img_file(filename)  # Also works if you pass a file object
image_input = v07_ImageInput(source=binary_content)
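
The coordinate constraints mentioned above (bbox and polygon coordinates in [0, 1], at least three polygon points) can be checked before building inputs. A small hypothetical helper, not part of the library, working on plain (x, y) tuples:

```python
def validate_polygon(points):
    """Check the documented constraints for a polygon selection:
    at least 3 points, every coordinate within [0, 1].
    `points` is a list of (x, y) tuples."""
    if len(points) < 3:
        raise ValueError("a polygon needs at least 3 points")
    for x, y in points:
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            raise ValueError("coordinates must be between 0 and 1")
    return points

print(validate_polygon([(0.1, 0.1), (0.9, 0.1), (0.5, 0.9)]))
```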

Bugs

Please send bug reports to support@deepomatic.com.

