Concurrently
A library that makes it easy to write concurrently executed blocks of code.
Quick example:

```python
import asyncio

from concurrently import concurrently


async def amain(loop):
    """
    How to fetch some web pages with concurrently.
    """
    urls = [  # define page URLs
        'http://test/page_1',
        'http://test/page_2',
        'http://test/page_3',
        'http://test/page_4',
    ]
    results = {}

    # immediately run the wrapped function concurrently
    # in 2 "threads" (asyncio coroutines)
    @concurrently(2)
    async def fetch_urls():
        for url in urls:
            page = await fetch_page(url)  # some function that downloads a page
            results[url] = page

    # wait until all concurrent threads are finished
    await fetch_urls()
    print(results)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(amain(loop))
```
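The fan-out pattern the example relies on can be sketched without the library, to show what the decorator does under the hood. This is a minimal illustration, not concurrently's implementation: `fetch_page` is a hypothetical stub standing in for real network I/O, and the worker count of 2 mirrors the `@concurrently(2)` argument.

```python
import asyncio


async def fetch_page(url):
    # hypothetical stub standing in for a real page download
    await asyncio.sleep(0)
    return f"<html>{url}</html>"


async def fetch_all(urls, n_workers=2):
    results = {}
    it = iter(urls)  # shared iterator: each URL is claimed exactly once

    async def worker():
        # each worker pulls the next unclaimed URL until the iterator is drained
        for url in it:
            results[url] = await fetch_page(url)

    # run n_workers copies of the worker coroutine concurrently
    await asyncio.gather(*(worker() for _ in range(n_workers)))
    return results


urls = ['http://test/page_1', 'http://test/page_2',
        'http://test/page_3', 'http://test/page_4']
print(asyncio.run(fetch_all(urls)))
```

Sharing a plain iterator between coroutines is safe here because `next()` is only called between `await` points, so the single-threaded event loop never interleaves two calls to it.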
Concurrently supports several different concurrency engines.
Engines
AsyncIOEngine
The default engine: runs code concurrently as asyncio coroutines:
```python
from concurrently import concurrently, AsyncIOEngine

...

@concurrently(2, engine=AsyncIOEngine, loop=loop)  # loop is optional
async def fetch_urls():
    ...

await fetch_urls()
```
AsyncIOExecutorEngine
Concurrently run code with an asyncio executor:

```python
from concurrent.futures import ThreadPoolExecutor

from concurrently import concurrently, AsyncIOExecutorEngine

...

my_pool = ThreadPoolExecutor()

@concurrently(2, engine=AsyncIOExecutorEngine, loop=loop, executor=my_pool)
def fetch_urls():  # not async def
    ...

await fetch_urls()
```
If executor is None or not set, the default asyncio executor is used.
Note: ProcessPoolExecutor is not supported.
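For context, "the default asyncio executor" is what asyncio itself falls back to when `run_in_executor` is given `None`. The snippet below is a plain-asyncio illustration of that mechanism, not part of concurrently's API; `blocking_task` is a hypothetical stand-in for a synchronous function.

```python
import asyncio


def blocking_task(x):
    # hypothetical stand-in for a blocking call, e.g. a synchronous download
    return x * 2


async def main():
    loop = asyncio.get_running_loop()
    # Passing None as the executor selects asyncio's default
    # ThreadPoolExecutor, i.e. "the default asyncio executor".
    futures = [loop.run_in_executor(None, blocking_task, i) for i in range(4)]
    return await asyncio.gather(*futures)


print(asyncio.run(main()))  # [0, 2, 4, 6]
```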
ThreadEngine
Concurrently run code in system threads:
```python
from concurrently import concurrently, ThreadEngine

...

@concurrently(2, engine=ThreadEngine)
def fetch_urls():  # not async def
    ...

fetch_urls()  # not await
```
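What this engine does can be sketched with plain `threading` primitives: n system threads drain a shared, thread-safe queue, and joining the threads plays the role of the non-`await` call to `fetch_urls()`. This is an illustrative sketch, with `fetch_page` again a hypothetical stub.

```python
import queue
import threading


def fetch_page(url):
    # hypothetical stub standing in for a real page download
    return f"<html>{url}</html>"


def fetch_all(urls, n_threads=2):
    results = {}
    lock = threading.Lock()
    q = queue.Queue()
    for url in urls:
        q.put(url)

    def worker():
        while True:
            try:
                url = q.get_nowait()  # claim the next unprocessed URL
            except queue.Empty:
                return  # queue drained: this worker is done
            page = fetch_page(url)
            with lock:  # guard the shared dict across threads
                results[url] = page

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # block until all workers finish, like fetch_urls()
    return results


print(fetch_all(['http://test/page_1', 'http://test/page_2']))
```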
ThreadPoolEngine
Concurrently run code in system threads using a concurrent.futures.ThreadPoolExecutor:

```python
from concurrent.futures import ThreadPoolExecutor

from concurrently import concurrently, ThreadPoolEngine

...

pool = ThreadPoolExecutor()

@concurrently(2, engine=ThreadPoolEngine, pool=pool)
def fetch_urls():
    ...

fetch_urls()
```
pool is an optional argument for specifying a custom pool; otherwise the default pool is used.
Note: with this engine, stop() does not work correctly.
ProcessEngine
Concurrently run code in system processes:

```python
from concurrently import concurrently, ProcessEngine

...

@concurrently(2, engine=ProcessEngine)
def fetch_urls():
    ...

fetch_urls()
```
Source Distribution

Details for the file concurrently-0.5.0.tar.gz:

- Size: 9.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No

File hashes:

Algorithm | Hash digest
---|---
SHA256 | 201694fee964ae9cab1e820fd34bfe071b25a5042db9ebff5af9bb4ccf7decaa
MD5 | 77375c8fff1d2dcd98274c24d6be5aa3
BLAKE2b-256 | e225aed8a73383c21b8099a615243e3f9974c908045853a684f370ed6114382a