Submitting CPU-bound tasks to processes and IO-bound tasks to threads
Write a classic sequential program. Then convert it into a parallel one.
It runs faster.
What if it doesn’t?
Don’t use it.
for image in images:
    create_thumbnail(image)
from fork import fork

for image in images:
    fork(create_thumbnail, image)
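For comparison, the same parallel loop can be sketched with nothing but the standard library’s concurrent.futures, which is roughly the kind of machinery fork hides behind a single call. The create_thumbnail function and the images list below are hypothetical stand-ins for the example above:

```python
from concurrent.futures import ThreadPoolExecutor

def create_thumbnail(image):
    # hypothetical stand-in: pretend to scale the image down
    return "thumb-" + image

images = ["a.png", "b.png", "c.png"]

# submit every call to a thread pool; map preserves the input order
with ThreadPoolExecutor() as pool:
    thumbnails = list(pool.map(create_thumbnail, images))

print(thumbnails)  # ['thumb-a.png', 'thumb-b.png', 'thumb-c.png']
```

The difference: fork keeps the sequential call shape (`fork(create_thumbnail, image)`), while the stdlib version makes the pool explicit.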
What about return values?
result = fork(my_func, *args, **kwargs)
It’s a proxy object that behaves almost exactly like the real return value of my_func, except that it is evaluated lazily. You can even add/multiply/etc. such proxy results without blocking, which comes in quite handy in loops.
Use fork.await to force evaluation and get the real and non-lazy value back.
Original (sequential) tracebacks are preserved, which should make debugging easier. However, don’t try to catch exceptions from forked calls; it is usually better to let the program exit and inspect them. Use fork.await to force evaluation and thereby raise any pending exceptions.
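To illustrate the proxy idea, here is a minimal, hypothetical sketch built on concurrent.futures. fork’s actual ResultProxy is more elaborate (it preserves sequential tracebacks and supports many more operators); this only shows the two behaviours described above, lazy arithmetic and forced evaluation:

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor()

class LazyProxy:
    """Hypothetical stand-in for fork's ResultProxy."""
    def __init__(self, future):
        self._future = future

    def __add__(self, other):
        # stay lazy: wrap the combination in a new deferred computation
        return LazyProxy(_pool.submit(lambda: self.force() + _force(other)))

    def force(self):
        # blocks; re-raises an exception from the background task, if any
        return self._future.result()

def _force(value):
    return value.force() if isinstance(value, LazyProxy) else value

def my_func(n):
    return n * n

proxy = LazyProxy(_pool.submit(my_func, 4)) + 5   # no blocking yet
print(proxy.force())                              # forces evaluation: 21
```

In fork the forcing step is spelled fork.await instead of a method call, but the blocking-only-on-demand behaviour is the same idea.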
Speaking of threads …
and processes? fork will take care of that for you.
You can assist fork by decorating your functions (undecorated functions default to fork.cpu_bound):
@io_bound
def call_remote_webservice():
    # implementation

@cpu_bound
def heavy_computation(n):
    # implementation
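As a rough illustration of what such a decorator can do, here is a hypothetical io_bound built on a stdlib thread pool; fork’s real decorators work differently, and the function body below is a made-up placeholder:

```python
import functools
from concurrent.futures import ThreadPoolExecutor

_threads = ThreadPoolExecutor()

def io_bound(func):
    """Hypothetical decorator: run the function in a background thread."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return _threads.submit(func, *args, **kwargs)  # returns a Future
    return wrapper

@io_bound
def call_remote_webservice(url):
    # hypothetical stand-in for a blocking network call
    return "response from " + url

future = call_remote_webservice("http://example.org")
print(future.result())  # blocks until the background thread finishes
```

A cpu_bound counterpart would route to a process pool instead, since threads don’t help CPU-bound Python code under the GIL.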
Advanced Feature: Force Specific Type of Execution
If you really need more control over the type of execution, use fork.process or fork.thread. They work just like fork.fork but enforce the corresponding type of background execution.
import pkg_resources

for worker_function in pkg_resources.iter_entry_points(group='worker'):
    process(worker_function)
Advanced Feature: Multiple Execution At Once
You can shorten your programs by using fork.map. It works like fork.fork but submits a function multiple times, once for each item of an iterable.
results = fork.map(create_thumbnail, images)
fork.map_process and fork.map_thread work analogously and force a specific type of execution. Use them only if really necessary. Otherwise, just use fork.map; fork takes care of that for you in this case, too.
In order to wait for the completion of a set of result proxies, use fork.await_all. If you want to unblock as soon as the first result proxy is ready, call fork.await_any.
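The stdlib counterpart of this pair is concurrent.futures.wait, which can sketch both waiting modes; slow_square is a hypothetical example function:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def slow_square(n):
    time.sleep(0.01 * n)  # hypothetical work of varying duration
    return n * n

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(slow_square, n) for n in (3, 1, 2)]

    # analogue of fork.await_any: unblock once the first result is ready
    done, pending = wait(futures, return_when=FIRST_COMPLETED)

    # analogue of fork.await_all: block until every result is ready
    wait(futures)
    results = sorted(f.result() for f in futures)

print(results)  # [1, 4, 9]
```

fork exposes the same two behaviours directly on its result proxies, without an explicit pool.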
There are also blocking variants available: fork.block_map, fork.block_map_process and fork.block_map_thread; in case you need some syntactic sugar:
fork.await_all(fork.map(create_thumbnail, images)) # equals fork.block_map(create_thumbnail, images)
Pros:
- easy to give it a try / easy way from sequential to parallel and back
- results evaluate lazily
- sequential tracebacks are preserved
- it’s thread-safe / cascading forks possible
- compatible with Python 2 and 3
Cons:
- weird calling syntax (no syntax support)
- type(result) == ResultProxy
- not working with lambdas due to PickleError
- needs a fix:
- not working with coroutines (asyncio) yet (working on it)
- cannot fix efficiently:
- exception handling (force evaluation when entering and leaving try blocks)
- ideas are welcome :-)
| Filename (size) | File type | Python version |
|---|---|---|
| xfork-0.34-py2-none-any.whl (5.1 kB) | Wheel | py2 |
| xfork-0.34-py3-none-any.whl (5.1 kB) | Wheel | py3 |
| xfork-0.34.tar.gz (6.2 kB) | Source | None |