a simple tool to turn your function into a background service

# as_a_service

A simple package that transforms a batch function {inputs->results} into a service that
- groups inputs into batches - you specify max batch size and time waiting
- processes them - and returns results back to whoever was asking

### Usage:

Here's how it feels:

```python
@as_batched_service(batch_size=3, max_delay=0.1)
def square(batch_xs):
    print("processing...", batch_xs)
    return [x_i ** 2 for x_i in batch_xs]

# submit many queries
futures = square.submit_many(range(10))
print([f.result() for f in futures])
```
This will print:

```
processing... [0, 1, 2]
processing... [3, 4, 5]
processing... [6, 7, 8]
processing... [9]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
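
For intuition, the grouping mechanism can be sketched with the standard library alone. This is a simplified toy, not the package's actual implementation; `MiniBatcher` and every name in it are made up for illustration:

```python
import queue
import threading
import time
from concurrent.futures import Future

class MiniBatcher:
    """Toy batcher: groups submitted inputs into batches of up to
    batch_size, waiting at most max_delay seconds for stragglers."""

    def __init__(self, batch_func, batch_size, max_delay):
        self.batch_func = batch_func
        self.batch_size = batch_size
        self.max_delay = max_delay
        self.queue = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, x):
        # hand the input to the worker; the Future is resolved later
        future = Future()
        self.queue.put((x, future))
        return future

    def _worker(self):
        while True:
            # block until at least one input arrives
            batch = [self.queue.get()]
            # keep collecting until the batch is full or max_delay expires
            deadline = time.monotonic() + self.max_delay
            while len(batch) < self.batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.queue.get(timeout=remaining))
                except queue.Empty:
                    break
            # run the batch function once, then distribute the results
            results = self.batch_func([x for x, _ in batch])
            for (_, future), result in zip(batch, results):
                future.set_result(result)

batcher = MiniBatcher(lambda xs: [x ** 2 for x in xs], batch_size=3, max_delay=0.1)
futures = [batcher.submit(x) for x in range(10)]
print([f.result() for f in futures])  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```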

You can also use it as a drop-in replacement for a function that processes one input at a time:
* `square(2.0)` will return `4.0`, as if it were a normal function
* Under the hood, it submits a request and waits for the result
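
That blocking behavior is easy to picture: submit one request, then wait on the returned future. A generic sketch of the pattern (the wrapper and all names below are illustrative, not the package's code):

```python
from concurrent.futures import ThreadPoolExecutor

class BlockingWrapper:
    """Toy illustration: turn an asynchronous submit(x) -> Future API
    back into an ordinary blocking function call."""

    def __init__(self, submit):
        self.submit = submit  # any callable that returns a Future

    def __call__(self, x):
        # submit one request, then block until its result is ready
        return self.submit(x).result()

# stand-in for a batched service: a thread pool computing squares
pool = ThreadPoolExecutor(max_workers=1)
square_one = BlockingWrapper(lambda x: pool.submit(lambda: x ** 2))
print(square_one(2.0))  # 4.0
```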

This package contains three objects:
- `BatchedService(batch_process_func, batch_size, max_delay)` - the main object
- `@as_batched_service(batch_size, max_delay)` - the same thing as a decorator
- `@as_service(max_delay)` - a decorator for a function without batches (single input/output)

Use `help(BatchedService)` and the "Why should I care?" section below for more details.

### Install:
* ```pip install as_a_service```
* No dependencies beyond the standard library
* Works with both Python 2 and Python 3 (use `pip3 install` for the latter)

### Why should I care?

This primitive is useful in a number of scenarios, for example:
1) You are building a web-based demo around your neural network. You want your network to process
a stream of user queries, but handling one query at a time is slow; batch-parallel processing is much faster.

```python
@as_batched_service(batch_size=32, max_delay=1.0)
def service_predict(input_images_list):
    predictions_list = my_network_predict_batch(input_images_list)
    return predictions_list

def handle_user_query(query):
    input_image = get_image(query)
    return service_predict(input_image)
```

2) You are experimenting with a reinforcement learning agent. The agent itself is a neural network
that predicts actions. You want to play 100 parallel game sessions to train on.
Playing one session at a time is slow; if only we could run multiple sessions on one GPU...

```python
my_network = make_keras_network_on_gpu()
service = BatchedService(my_network.predict, batch_size=32, max_delay=1.0)

threads = [
    GamePlayingThread(predict_action=lambda x: service(x)) for i in range(100)
]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
```

And many other scenarios where you want to use a single resource
(GPU / device / DB) concurrently and utilize batch-parallelism.
