CPU parallelism for Trio
Project description
Do you have CPU-bound work that just keeps slowing down your Trio event loop no matter what you try? Do you need to get all those cores humming at once? This is the library for you!
The aim of trio-parallel is to use the lightest-weight, lowest-overhead, lowest-latency method to achieve CPU parallelism of arbitrary Python code with a dead-simple API.
Resources
License | Documentation | Chat | Forum | Issues | Repository | Tests | Coverage | Style | Distribution
Example
```python
import functools
import multiprocessing

import trio
import trio_parallel


def loop(n):
    # Arbitrary CPU-bound work
    for _ in range(n):
        pass
    print("Loops completed:", n)


async def amain():
    t0 = trio.current_time()
    async with trio.open_nursery() as nursery:
        # Do CPU-bound work in parallel
        for i in [6, 7, 8] * 4:
            nursery.start_soon(trio_parallel.run_sync, loop, 10 ** i)
        # Event loop remains responsive
        t1 = trio.current_time()
        await trio.sleep(0)
        print("Scheduling latency:", trio.current_time() - t1)
        # This job could take far too long, make it cancellable!
        nursery.start_soon(
            functools.partial(
                trio_parallel.run_sync, loop, 10 ** 20, cancellable=True
            )
        )
        await trio.sleep(2)
        # Only explicitly cancellable jobs are killed on cancel
        nursery.cancel_scope.cancel()
    print("Total runtime:", trio.current_time() - t0)


if __name__ == "__main__":
    multiprocessing.freeze_support()
    trio.run(amain)
```
Additional examples and the full API are available in the documentation.
Features
- Bypasses the GIL for CPU-bound work
- Minimal API complexity
  - looks and feels like Trio threads
- Minimal internal complexity
  - no reliance on multiprocessing.Pool, ProcessPoolExecutor, or any background threads
- Cross-platform
- print just works
- Seamless interoperation with
- Automatic LIFO caching of subprocesses
- Cancel seriously misbehaving code via SIGKILL/TerminateProcess
- Convert segfaults and other scary things to catchable errors
FAQ
How does trio-parallel run Python code in parallel?
Currently, this project is based on multiprocessing subprocesses and has all the usual multiprocessing caveats (freeze_support, pickleable objects only, executing the __main__ module). The case for basing these workers on multiprocessing is that it keeps a lot of complexity outside of the project while offering a set of quirks that users are likely already familiar with.
The pickling limitations can be partially alleviated by installing cloudpickle.
Can I have my workers talk to each other?
This is currently possible through the use of multiprocessing.Manager, but we do not and will not officially support it.
This package focuses on providing a flat hierarchy of worker subprocesses to run synchronous, CPU-bound functions. If you are looking to create a nested hierarchy of processes communicating asynchronously with each other, while preserving the power, safety, and convenience of structured concurrency, look into tractor. Or, if you are looking for a more customized solution, try using trio.run_process to spawn additional Trio runs and have them talk to each other over sockets.
Can I let my workers outlive the main Trio process?
No. Trio’s structured concurrency strictly bounds job runs to within a given trio.run call, while cached idle workers are shut down (and killed, if necessary) by our atexit handler, so this use case is not supported.
How should I map a function over a collection of arguments?
This is fully possible but we leave the implementation of that up to you. Think of us as a loky for your joblib, but natively async and Trionic. We take care of the worker handling so that you can focus on the best concurrency for your application. That said, some example parallelism patterns can be found in the documentation.
Also, look into aiometer.
Contributing
If you notice any bugs, need any help, or want to contribute any code, GitHub issues and pull requests are very welcome! Please read the code of conduct.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file trio-parallel-1.2.0.tar.gz.
File metadata
- Download URL: trio-parallel-1.2.0.tar.gz
- Upload date:
- Size: 52.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.10.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 856b2bed2cd0bc6ffd786fdbc4d9b124231b076f6b96a99ed79b6d8d6edde84d |
| MD5 | 15744fd45a7243687b725bf4fe9f139b |
| BLAKE2b-256 | 55150ce1f2e01a60a41f563b735f528ef0b00b07e1e48380d7a8b147b7e82956 |
File details
Details for the file trio_parallel-1.2.0-py3-none-any.whl.
File metadata
- Download URL: trio_parallel-1.2.0-py3-none-any.whl
- Upload date:
- Size: 36.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.10.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 638ab6ff324b53c91333b4015e44cedc9f553b5974fd21b9ce643ff253b01946 |
| MD5 | 04dec970756205c06c1e7cdeb5ee24b5 |
| BLAKE2b-256 | 3834794e3ba10917a4b911a87d24a790fef28a7caf7da73bb7d0e2d411821b57 |