kerground
Background worker based on pickle, sqlite and multiprocessing.
Quickstart
Install
```
pip install kerground
```
Mark your worker files by naming them with the `_worker.py` suffix, e.g. `my_worker.py`. Kerground will look in `*_worker.py` files and treat each function as an event. Function names across `*_worker.py` files must be unique.
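For example, a worker file could look like the sketch below. The function names and bodies are illustrative, not part of kerground; the point is that each top-level function becomes an event that can be triggered by name.

```python
# my_worker.py
# Each top-level function in a *_worker.py file becomes an event
# that can be triggered by name, e.g. ker.send('long_task').

import time

def long_task():
    # Simulate a slow job; the return value becomes the task's response.
    time.sleep(0.5)
    return "long task done"

def add_numbers(a, b):
    # Events can take positional arguments: ker.send('add_numbers', 2, 3)
    return a + b
```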
Import Kerground, instantiate it, and start sending events:
```python
# A Flask app is assumed here, matching the route decorator in the snippet.
from flask import Flask
from kerground import Kerground

app = Flask(__name__)
ker = Kerground()

@app.route('/some-task')
def long_wait():
    # 'long_task' is a function name from *_worker.py files
    id = ker.send('long_task')
    return {'id': id}
```
Your APIs and workers must be in the same package/directory:
```
root
├── api
│   ├── __init__.py
│   └── my_api.py
└── worker
    ├── __init__.py
    └── my_worker.py
```
You are free to use any folder structure.
Open 2 cmd/terminal windows in the example directory:
- in one, start your API:
  `python3 api/my_api.py`
- in the other, start the worker:
  `kerground`
API
`ker.send('func_name', *func_args, timeout=None)`

Send an event to the kerground worker. `send` returns the id of the task sent to the worker.
You have hot reload on your workers by default (as long as you don't change function names)!
The `timeout` parameter will warn you if the function takes longer than expected.
`ker.status(id)`

Check the status of a task with `status`. Kerground has the following statuses:
- pending - the event was added to the kerground queue
- running - the event is running
- finished - the event was executed successfully
- failed - the event failed to execute
You can also check the statuses of your tasks at any time without specifying the ids:
`ker.pending()`
`ker.running()`
`ker.finished()`
`ker.failed()`

Or check all statuses with:

`ker.stats()`
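A common pattern is to poll a task until it settles. The helper below is a sketch, not part of kerground; it only assumes the documented `status(id)` and `get_response(id)` methods and the documented status values, and is demonstrated with a small stub instead of a real worker.

```python
import time

def wait_for_task(ker, task_id, poll_interval=0.1, max_wait=30.0):
    """Poll a task until it is 'finished' or 'failed', then return (status, response)."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        status = ker.status(task_id)
        if status in ("finished", "failed"):
            return status, ker.get_response(task_id)
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} still {ker.status(task_id)!r} after {max_wait}s")

# Stub standing in for a Kerground instance, for illustration only:
# it reports 'running' twice, then 'finished'.
class FakeKer:
    def __init__(self):
        self._polls = 0
    def status(self, task_id):
        self._polls += 1
        return "finished" if self._polls >= 3 else "running"
    def get_response(self, task_id):
        return "long task done"

status, response = wait_for_task(FakeKer(), "abc123", poll_interval=0.01)
print(status, response)  # finished long task done
```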
`ker.get_response(id)`

Get the response from an event (will be `None` if the event hasn't run yet).

You can see the functions collected from `*_worker.py` files with:

`ker.events`
Why
Under the hood kerground uses pickle for serialization of input/output data, and a combination of `inspect` methods and the built-in `getattr` function for dynamically calling the "events" (functions) from `*_worker.py` files.
It's resource friendly (it doesn't hold the queue in RAM), easy to use (import kerground, mark your worker files with the `_worker.py` suffix and you are set), has hot reload for workers (no need to restart workers each time you make a change), and works on multiple cores (uses multiprocessing).
Submit any questions/issues you have! Feel free to fork it and improve it!