A Python library for executing tasks in parallel with threads and queues
Project description
executor
Fast task execution with Python and fewer memory operations
Why do we need executor?
Python's threading module is a solid building block: it helps developers fork a thread to run a background task, and Python's Queue provides a mechanism for passing data between threads. So what's the problem?
- First, the threading module forks threads, but Python does not commit to a start time. You know your thread can run, but you don't know when it will. That's fine under light traffic, but under heavy server load you will hit problems: creating too many threads puts high pressure on memory and causes slowness. A waste of time.
- Second, creating and releasing threads over and over increases the system's memory and CPU usage. And when developers fail to handle exceptions and release threads, it puts even more pressure on the application. A waste of resources.
How to solve these problems?
Here is my solution.
- We create an exact or dynamic number of threads. A Job is the unit that carries data to a Worker for processing. Workers never need to be released; you create them once, or reset them only when you update the configuration.
- A Job has two important fields, func and args, which are invoked as func(*args); all results are returned through the optional callback.
- Your app doesn't need to create and release threads continuously.
- Easy to access and use while coding.
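The pattern described above — a fixed pool of worker threads consuming job units from a shared queue — can be sketched with Python's standard threading and queue modules. This is a conceptual illustration only; the Job and worker names here are hypothetical and not the library's actual API:

```python
import threading
import queue
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Job:
    func: Callable                        # the unit of work
    args: tuple = ()
    callback: Optional[Callable] = None   # optional result handler

def worker(q: queue.Queue) -> None:
    # each worker loops forever, pulling jobs; None is a stop signal
    while True:
        job = q.get()
        if job is None:
            q.task_done()
            break
        result = job.func(*job.args)
        if job.callback:
            job.callback(result)
        q.task_done()

q: queue.Queue = queue.Queue(maxsize=500)
for _ in range(3):  # create the workers exactly once
    threading.Thread(target=worker, args=(q,), daemon=True).start()

results = []
q.put(Job(func=lambda a, b: a + b, args=(1, 2), callback=results.append))
q.join()  # block until all queued jobs are processed
```

Because the workers are created once and reused, no thread is forked or released per task; jobs only pass through the queue.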
Disadvantages?
- If you use a callback, remember to wrap it in try/except to prevent thread leaks.
- If the queue is full, you must wait for an available slot. Set max_queue_size=0 to avoid this.
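In plain Python terms (a sketch of the pattern, not the library's internals), an unguarded exception raised inside a job or its callback would kill the worker thread, so guard it:

```python
import threading
import queue

q = queue.Queue()
errors = []  # collect failures instead of losing the worker

def worker():
    while True:
        func, callback = q.get()
        try:
            callback(func())
        except Exception as exc:
            # without this, the worker thread dies silently
            # and the pool shrinks with every failing job
            errors.append(exc)
        finally:
            q.task_done()

threading.Thread(target=worker, daemon=True).start()
q.put((lambda: 1 / 0, print))  # the job raises ZeroDivisionError
q.join()  # still returns, because the exception was caught
```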
Installation
Now it's time to install the library and try it out:
pip install thread-executor
Usage: Interface list
type ISafeQueue interface {
    Info() SafeQueueInfo              // engine info
    Close() error                     // close everything
    RescaleUp(numWorker uint)         // increase the number of workers
    RescaleDown(numWorker uint) error // reduce the number of workers
    Run()                             // start the engine
    Send(jobs ...*Job) error          // push jobs to the hub
    Wait()                            // keep the thread blocked
    Done() 	                          // stop waiting immediately
}
Initialization
engine := CreateSafeQueue(&SafeQueueConfig{
    NumberWorkers: 3,
    Capacity:      500,
    WaitGroup:     &sync.WaitGroup{},
})
defer engine.Close() // flush the engine
// go engine.Wait() // fork to another thread
engine.Wait() // block the current thread
Send a Simple Job
// simple job
j := &Job{
    Exectutor: func(in ...interface{}) {
        // anything
    },
    Params: []interface{}{1, "abc"},
}
engine.Send(j)
// send multiple jobs
jobs := []*Job{
    {
        Exectutor: func(in ...interface{}) {
            // anything
        },
        Params: []interface{}{1, "abc"},
    },
    {
        Exectutor: func(in ...interface{}) {
            // anything
        },
        Params: []interface{}{2, "abc"},
    },
}
engine.Send(jobs...)
Send a Complicated Job
// wait for the job to complete
j := &Job{
    Exectutor: func(in ...interface{}) {
        // anything
    },
    Params: []interface{}{1, "abc"},
    Wg:     &sync.WaitGroup{},
}
engine.Send(j)
// wait until the job has run successfully
j.Wait()
// handle the result asynchronously with a callback
// you can synchronize it by combining it with a WaitGroup
j := &Job{
    Exectutor: func(in ...interface{}) {
        // anything
    },
    CallBack: func(out interface{}, err error) {
        // try something here
    },
    Params: []interface{}{1, "abc"},
}
engine.Send(j)
Send Jobs with Groups
// prepare a group of jobs
group1 := make([]*Job, 0)
for i := 0; i < 10; i++ {
    group1 = append(group1, &Job{
        Exectutor: func(in ...interface{}) {
            // anything
        },
        Params: []interface{}{1, "abc"},
        Wg:     &sync.WaitGroup{},
    })
}
// wait for the jobs to complete
engine.SendWithGroup(group1...)
engine.Wait()
SafeQueue scale up/down
engine.ScaleUp(5)
engine.ScaleDown(2)
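In the same stdlib terms, scaling a pool down is commonly implemented by sending one stop sentinel per worker to retire. This is a conceptual sketch under that assumption, not the library's actual implementation:

```python
import threading
import queue

q = queue.Queue()
retired = []  # demo bookkeeping: names of workers that shut down

def worker():
    while True:
        item = q.get()
        if item is None:  # poison pill: this worker retires
            retired.append(threading.current_thread().name)
            q.task_done()
            break
        q.task_done()

def scale_up(n):
    for _ in range(n):
        threading.Thread(target=worker, daemon=True).start()

def scale_down(n):
    for _ in range(n):
        q.put(None)  # one sentinel retires exactly one worker
    q.join()         # returns once the sentinels have been consumed

scale_up(5)    # pool of 5 workers
scale_down(2)  # retire 2 of them; 3 keep serving the queue
```

Each worker exits after consuming a single sentinel, so the pool shrinks by exactly the number of sentinels sent, and the surviving workers keep processing jobs.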
Hashes for thread_executor-0.1.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 49cffc3f15c72c64851f6dc5376cf4c03ce2cefcbf3ee8e12c857d447132b3e4
MD5 | 6f3c2ae529e3465d16b00628ba92857d
BLAKE2b-256 | b687274ae9abd9555056fcd4eb1d5f98b9fa539189a5f96e78a2f1c9a8260b5b