# kerground

Stupid simple background worker based on python.
## Quickstart

Install:

```shell
pip install kerground
```
Mark your worker files by naming them with the `_worker.py` suffix:

```
my_worker.py
```

Kerground will look in `*_worker.py` files and will consider each function an event. Function names from `*_worker.py` files must be unique.
Import `Kerground`, instantiate it and start sending events:

```python
# my_api.py
from kerground import Kerground

ker = Kerground()

@app.route('/some-task')
def long_wait():
    id = ker.send('long_task')
    return {'id': id}
```
The event `long_task` is a function name from your `*_worker.py` files:

```python
# my_worker.py
import time

def long_task():
    # heavy workload, more than a few seconds job
    time.sleep(2)
```
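Since every function in a `*_worker.py` file becomes an event, one worker file can hold several of them. A sketch (the `add` event here is a hypothetical example, not part of kerground):

```python
# my_worker.py
import time

def long_task():
    # heavy workload, more than a few seconds job
    time.sleep(2)

def add(x, y):
    # hypothetical event taking positional arguments,
    # triggered with: ker.send('add', 2, 3)
    return x + y
```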
Your APIs and workers must be in the same package/directory:

```
root
├── api
│   ├── __init__.py
│   └── my_api.py
└── worker
    ├── __init__.py
    └── my_worker.py
```
You are free to use any folder structure.

Open 2 cmd/terminal windows in the example directory:

- in one, start your api:

  ```shell
  python3 api/my_api.py
  ```

- in the other one, type:

  ```shell
  kerground
  ```
## Multiple Kerground instances and workers

Optionally, when instantiating `Kerground` (or starting the worker process from the console), you can pass the optional argument `workers_path`. In this case, Kerground will look in the `*_worker.py` files of each folder passed and will consider each function an event. Function names from `*_worker.py` files must be unique.

For each path given to the `workers_path` parameter, Kerground will create a separate folder with a separate db in the tempfile system directory, one per Kerground instance. This allows you to define multiple queues and workers on separate dbs and manage the processing of those workers independently.
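A rough sketch of that per-path temp-directory idea (the directory naming below is an assumption for illustration, not kerground's exact scheme):

```python
import os
import tempfile

def storage_dir(workers_path):
    # One sub-folder per workers_path under the system temp dir,
    # so each instance gets its own tasks.db and pickled results.
    name = os.path.basename(os.path.abspath(workers_path))
    path = os.path.join(tempfile.gettempdir(), name)
    os.makedirs(path, exist_ok=True)
    return path
```

With `workers_path="../main_worker"` and `workers_path="../secondary_worker"`, this yields two independent folders, matching the temp-directory listing shown below.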
```python
# my_api.py from example_multiple_workers
from kerground import Kerground

pri_ker = Kerground(workers_path="../main_worker")
sec_ker = Kerground(workers_path="../secondary_worker")

@app.route('/main-worker/add-long-task')
def f1():
    id = pri_ker.send('long_task')
    # you will receive an id which you can use however you want
    # here we send it to the frontend to ask later if the task is done
    return {'id': id}

@app.route('/secondary-worker/add-very-long-task')
def f2():
    id = sec_ker.send('long_task')
    # you will receive an id which you can use however you want
    # here we send it to the frontend to ask later if the task is done
    return {'id': id}
```
Remember, when you start the workers you also need to pass the `--workers-path` parameter:

```shell
# kerground/example_multiple_workers
python3 kerground.py --workers-path='./main_worker'
python3 kerground.py --workers-path='./secondary_worker'
```
```shell
# tmp folder
>> ls -lah
total 0
drwxr-xr-x    4 simone  staff   128B Jul 16 09:28 .
drwx------@ 194 simone  staff   6.1K Jul 16 09:28 ..
drwxr-xr-x    3 simone  staff    96B Jul 16 09:28 main_worker
drwxr-xr-x    3 simone  staff    96B Jul 16 09:28 secondary_worker

# inside main_worker folder
>> ls
tasks.db
a473dcda-d6e0-4fe1-9944-d708148ef1fb.pickle

# inside secondary_worker folder
>> ls
tasks.db
d03237c6-83a9-45c4-a91b-46c6fb2b090c.pickle
```
## API

### `ker.send('func_name', *func_args, timeout=None, purge=True)`

Send an event to the kerground worker. The `send` function returns the id of the task sent to the worker. You have hot reload on your workers by default (as long as you don't change function names)!

- `timeout`: a warning is shown in the kerground logs if the function takes longer than expected;
- `purge`: if `True`, the event is deleted as soon as the function is executed; if `False`, the event is deleted after a `ker.get_response(id)` call.
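The `purge` behaviour can be modelled with a toy in-memory store (an illustrative stand-in, not kerground's actual storage, which lives in a db under the temp directory):

```python
class MiniQueue:
    """Toy model of the purge semantics described above."""

    def __init__(self):
        self.results = {}

    def finish(self, task_id, result, purge=True):
        if purge:
            # purge=True: the event is dropped as soon as it is executed
            return
        # purge=False: keep the result until get_response() collects it
        self.results[task_id] = result

    def get_response(self, task_id):
        # returns None if the event hasn't run yet (or was purged)
        return self.results.pop(task_id, None)
```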
### `ker.status(id)`

Check the status of a task with `status`. Kerground has the following statuses:

- pending - event is added to the kerground queue
- running - event is running
- finished - event was executed successfully
- failed - event failed to be executed

You can also check the statuses of your tasks at any time without specifying the ids:

```python
ker.pending()
ker.running()
ker.finished()
ker.failed()
```

Or check all statuses with:

```python
ker.stats()
```
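Since `send` returns immediately, a caller typically polls until the task reaches a terminal status. A minimal polling helper one could write on top of `ker.status` (a sketch; `wait_for` and its `poll`/`max_wait` knobs are not part of kerground's API):

```python
import time

def wait_for(ker, task_id, poll=0.1, max_wait=5.0):
    """Poll ker.status(task_id) until the task is finished or failed."""
    deadline = time.monotonic() + max_wait
    while True:
        status = ker.status(task_id)
        if status in ('finished', 'failed'):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f'task {task_id} still {status!r} after {max_wait}s')
        time.sleep(poll)
```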
### `ker.get_response(id)`

Get the response from the event (it will be `None` if the event hasn't run yet).

You can see the functions collected from `*_worker.py` files with:

```python
ker.events
```
## Why

Under the hood kerground uses pickle for serialization of input/output data, and a combination of `inspect` methods and the built-in `getattr` function for dynamically calling the "events" (functions) from `*_worker.py` files.

It's resource friendly (it doesn't use RAM to hold the queue), easy to use (import kerground, mark your worker files with the `_worker.py` suffix and you are set), has hot reload for workers (no need to restart workers each time you make a change), and works on multiple cores (uses multiprocessing).
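A minimal sketch of that dispatch idea, using a stand-in module built at runtime (real kerground imports your `*_worker.py` files instead):

```python
import inspect
import pickle
import types

# Build a stand-in worker module (in real use this is my_worker.py).
worker = types.ModuleType('my_worker')
exec("def long_task():\n    return 'done'", worker.__dict__)

# 1. inspect: discover which functions the module exposes (the "events")
events = [name for name, _ in inspect.getmembers(worker, inspect.isfunction)]

# 2. getattr: resolve an event name to the actual function at call time
result = getattr(worker, 'long_task')()

# 3. pickle: serialize the result the way task output can be stored on disk
payload = pickle.dumps(result)
```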
Submit any questions/issues you have! Feel free to fork it and improve it!