This package provides a simple API for moving work out of the Tornado process / event loop.
Currently implemented methods are:
execute the code in another server's HTTP hook (a Django implementation is included);
execute the code in a separate thread (using a thread pool);
dummy immediate execution.
API example:
    from django.contrib.auth.models import User

    from slacker import adisp
    from slacker import Slacker
    from slacker.workers import DjangoWorker

    AsyncUser = Slacker(User, DjangoWorker())

    @adisp.process
    def process_data():
        # all of the Django ORM is supported; the query will be executed
        # on the remote end, so this will not block the IOLoop
        users = yield AsyncUser.objects.filter(is_staff=True)[:5]
        print(users)
(PEP 342 syntax and the adisp library are optional; callback-style code is also supported.)
Installation
pip install tornado-slacker
Slackers and workers
In order to execute some code in a non-blocking manner:
Create a Slacker (configured with the desired worker) for some Python object:
    from slacker import Slacker
    from slacker.workers import ThreadWorker

    class Foo(object):
        # ...
        pass

    worker = ThreadWorker()
    AsyncFoo = Slacker(Foo, worker)
Build a query (you can access attributes, make calls, and take slices):
    query = AsyncFoo('foo').do_blocking_operation(param1, param2)[0]
Execute the query:
    def callback(result):
        # ...
        pass

    query.proceed(callback)
or, using pep-342 style:
    from slacker import adisp

    @adisp.process
    def handler():
        result = yield query
        # ...
Slackers
Slackers are special objects that collect operations (attribute access, calls, slicing) without actually executing them:
    >>> from slacker import Slacker
    >>> class Foo():
    ...     pass
    ...
    >>> FooSlacker = Slacker(Foo)
    >>> FooSlacker.hello.world()
    __main__.Foo: [('hello',), ('world', (), {})]
    >>> FooSlacker(name='me').hello.world(1, y=3)[:3]
    __main__.Foo: [(None, (), {'name': 'me'}), ('hello',), ('world', (1,), {'y': 3}), (slice(None, 3, None), None)]
Arguments to callables must be picklable. Slackers also provide a method to apply the collected operations to a base object.
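The idea behind this operation recording can be sketched with a minimal stand-in (this is an illustration only, not slacker's actual implementation; `Recorder` and `replay` are hypothetical names):

```python
# Illustrative sketch of a proxy that records attribute access, calls
# and slicing instead of executing them, then replays the recorded
# chain against a real object later.

class Recorder(object):
    def __init__(self, ops=None):
        self.ops = ops or []

    def __getattr__(self, name):
        return Recorder(self.ops + [('getattr', name)])

    def __call__(self, *args, **kwargs):
        return Recorder(self.ops + [('call', args, kwargs)])

    def __getitem__(self, item):
        return Recorder(self.ops + [('getitem', item)])

def replay(ops, obj):
    # apply the recorded operations to a concrete base object
    for op in ops:
        if op[0] == 'getattr':
            obj = getattr(obj, op[1])
        elif op[0] == 'call':
            obj = obj(*op[1], **op[2])
        else:
            obj = obj[op[1]]
    return obj

query = Recorder().upper()[:3]           # nothing is executed yet
print(replay(query.ops, 'hello world'))  # -> HEL
```

Because the recorded operation list is plain data, it can be pickled and shipped to another thread or server, which is what makes the worker mechanisms below possible.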
Any picklable object (including top-level functions and classes) can be wrapped into Slacker, e.g.:
    from slacker import adisp
    from slacker import Slacker
    from slacker.workers import ThreadWorker

    def task(param1, param2):
        # do something blocking and io-bound
        return results

    async_task = Slacker(task, ThreadWorker())

    # PEP 342 style
    @adisp.process
    def process_data():
        results = yield async_task('foo', 'bar')
        print(results)

    # callback style
    def process_data2():
        async_task('foo', 'bar').proceed(on_result)

    def on_result(results):
        print(results)
Python modules can also be Slackers:
    import shutil

    from slacker import Slacker
    from slacker.workers import ThreadWorker

    shutil_async = Slacker(shutil, ThreadWorker())
    op = shutil_async.copy('file1.txt', 'file2.txt')
    op.proceed()
Workers
Workers are classes that decide how and where the work should be done:
slacker.workers.DummyWorker executes code in-place (this is blocking);
slacker.workers.ThreadWorker executes code in a thread from a thread pool;
slacker.workers.HttpWorker pickles the slacker and makes an async HTTP request with this data to a given server hook, which is expected to execute the code and return pickled results;
slacker.workers.DjangoWorker is an HttpWorker with default values for use with the bundled Django remote-server hook implementation (slacker.django_backend).
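The thread-based approach can be illustrated with a self-contained sketch (`TinyThreadWorker` is a hypothetical name, not slacker's API; a real worker would also hand the callback back to the IOLoop thread rather than calling it from the pool thread):

```python
# Conceptual sketch of the ThreadWorker idea: run a blocking callable
# on a pool thread and deliver the result to a callback, so the
# calling loop is never blocked while the work runs.

from concurrent.futures import ThreadPoolExecutor

class TinyThreadWorker(object):
    def __init__(self, max_workers=4):
        self.pool = ThreadPoolExecutor(max_workers=max_workers)

    def proceed(self, func, args, callback):
        # schedule the blocking call; invoke the callback when it finishes
        future = self.pool.submit(func, *args)
        future.add_done_callback(lambda f: callback(f.result()))

def blocking_task(x, y):
    return x + y

results = []
worker = TinyThreadWorker()
worker.proceed(blocking_task, (2, 3), results.append)
worker.pool.shutdown(wait=True)  # wait so the example is deterministic
print(results)  # -> [5]
```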
In order to enable the Django hook, include 'slacker.django_backend.urls' in urls.py and add a SLACKER_SERVER option with the server address to settings.py.
SLACKER_SERVER defaults to '127.0.0.1:8000', so this works with the development server out of the box.
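The Django side might then be wired up along these lines (a sketch only; the `^slacker/` URL prefix is a hypothetical choice, and the `include()` syntax varies across Django versions):

```python
# urls.py (adjust to your Django version's URL-configuration style)
from django.conf.urls import include, url

urlpatterns = [
    url(r'^slacker/', include('slacker.django_backend.urls')),
]

# settings.py
SLACKER_SERVER = '127.0.0.1:8000'  # host:port of the server running the hook
```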
Parallel execution
Parallel task execution is supported via the adisp library:
    def _task1(param1, param2):
        # do something blocking
        return results

    def _task2():
        # do something blocking
        return results

    # a worker can be reused
    worker = ThreadWorker()
    task1 = Slacker(_task1, worker)
    task2 = Slacker(_task2, worker)

    @adisp.process
    def process_data():
        # this will execute task1 and task2 in parallel
        # and return the results after all data is ready
        res1, res2 = yield task1('foo', 'bar'), task2()
        print(res1, res2)
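The effect of yielding several tasks at once can be illustrated without slacker, using only the standard library (a conceptual sketch, not what slacker/adisp do internally):

```python
# Conceptual illustration of the parallel case above: start both
# blocking tasks at once on a thread pool, then gather the results.

from concurrent.futures import ThreadPoolExecutor
import time

def _task1(param1, param2):
    time.sleep(0.1)          # stand-in for blocking work
    return param1 + param2

def _task2():
    time.sleep(0.1)
    return 'done'

pool = ThreadPoolExecutor(max_workers=2)
f1 = pool.submit(_task1, 'foo', 'bar')   # both tasks start immediately
f2 = pool.submit(_task2)
res1, res2 = f1.result(), f2.result()    # gather once both are ready
print(res1, res2)  # -> foobar done
```

Because the two sleeps overlap, total wall time is roughly 0.1 s rather than 0.2 s.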
Contributing
If you have any suggestions, bug reports or annoyances please report them to the issue tracker:
Source code:
Both hg and git pull requests are welcome!
Credits
Inspiration:
Third-party software:
License
The license is MIT.
Bundled adisp library uses Simplified BSD License.
slacker.serialization is under BSD License.