# as_a_service

A simple tool that turns your function into a background service: it takes a batch function `{inputs -> results}` and wraps it in a service that
- groups incoming inputs into batches (you specify the maximum batch size and the maximum time to wait),
- processes each batch, and returns each result back to whoever asked for it.
### Usage:
__[notebook version](https://github.com/justheuristic/as_a_service/blob/master/example.ipynb)__
Here's how it feels:
```python
from as_a_service import as_batched_service

@as_batched_service(batch_size=3, max_delay=0.1)
def square(batch_xs):
    print("processing...", batch_xs)
    return [x_i ** 2 for x_i in batch_xs]

# submit many queries; each call returns a future
futures = square.submit_many(range(10))
print([f.result() for f in futures])
```
This will print:
```
processing... [0, 1, 2]
processing... [3, 4, 5]
processing... [6, 7, 8]
processing... [9]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
You can also use it as a drop-in replacement for a function that processes one input at a time:
* `square(2.0)` returns `4.0`, as if it were a normal function
* Under the hood, it submits a request to the service and waits for the result (see the sketch below)
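Because each direct call blocks only the calling thread, calls made concurrently from several threads can be grouped into one batch. Here is a minimal sketch reusing the `square` service from above (whether the calls actually share a batch depends on `max_delay` timing):
```python
import threading

results = []

def worker(x):
    results.append(square(x))  # blocks until the batch containing x is processed

threads = [threading.Thread(target=worker, args=(x,)) for x in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4] -- likely processed as a single batch
```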
This package contains three objects:
- `BatchedService(batch_process_func, batch_size, max_delay)` - the main object
- `@as_batched_service(batch_size, max_delay)` - the same thing as a decorator
- `@as_service(max_delay)` - a decorator for a function without batches (single input, single output)

See `help(BatchedService)` and the "Why should I care?" section below for more details.
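For instance, `as_service` wraps a plain single-input function the same way (a minimal sketch, assuming the decorator is imported from the package top level and takes only `max_delay`, as listed above):
```python
from as_a_service import as_service

@as_service(max_delay=0.1)
def negate(x):
    # handles one input at a time, but runs in the service's background thread
    return -x

print(negate(5))  # -5
```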
### Install:
* `pip install as_a_service`
* No dependencies apart from the standard library
* Works with both Python 2 and Python 3 (use `pip3 install` for the latter)
### Why should I care?
This primitive is useful in a number of scenarios, for example:
1) You are building a web demo around your neural network. You want the network to process
a stream of user queries, but handling one query at a time is slow; batch-parallel processing is much faster.
```python
@as_batched_service(batch_size=32, max_delay=1.0)
def service_predict(input_images_list):
    predictions_list = my_network_predict_batch(input_images_list)
    return predictions_list

@my_web_framework.run_as_a_thread_for_every_query
def handle_user_query(query):
    input_image = get_image(query)
    return service_predict(input_image)
```
2) You are experimenting with a reinforcement learning agent: a neural network
that predicts actions. You want to play 100 game sessions in parallel to train on,
but playing one session at a time is slow. Batching lets all the sessions share a single GPU:
```python
my_network = make_keras_network_on_gpu()
service = BatchedService(my_network.predict, batch_size=32, max_delay=1.0)

# each thread plays its own session, querying the shared service for actions
threads = [
    GamePlayingThread(predict_action=lambda x: service(x)) for _ in range(100)
]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
service.stop()
```
The same applies to many other scenarios where concurrent callers share a single resource
(a GPU, a device, a database) and you want to exploit batch parallelism.
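As one concrete illustration of the database case, many request handlers can share a single connection by batching their lookups. This is a hypothetical sketch; `fetch_users_by_ids` and the table layout are illustrative, not part of this package:
```python
import sqlite3
from as_a_service import as_batched_service

# a shared connection, queried only from the service's worker thread
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob"), (3, "eve")])

@as_batched_service(batch_size=64, max_delay=0.05)
def fetch_users_by_ids(ids):
    # one round-trip for the whole batch instead of one query per caller
    ids = list(ids)
    placeholders = ",".join("?" * len(ids))
    rows = dict(conn.execute(
        "SELECT id, name FROM users WHERE id IN (%s)" % placeholders, ids
    ).fetchall())
    return [rows.get(i) for i in ids]  # keep results in input order

futures = fetch_users_by_ids.submit_many([3, 1, 2])
print([f.result() for f in futures])  # ['eve', 'ada', 'bob']
```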