A library that makes it easy to write concurrently executed code blocks. Supports asyncio coroutines, threads and processes.

Project description

Concurrently

A library that makes it easy to write concurrently executed code blocks.

Quick example:

import asyncio
from concurrently import concurrently


async def amain(loop):
    """
    How to fetch some web pages with concurrently.
    """
    urls = [  # the page URLs to fetch
        'http://test/page_1',
        'http://test/page_2',
        'http://test/page_3',
        'http://test/page_4',
    ]
    results = {}

    # the decorator immediately starts the wrapped function
    # in 2 concurrent workers (asyncio coroutines)
    @concurrently(2)
    async def fetch_urls():
        while urls:  # both workers pull from the shared list, so each page is fetched once
            url = urls.pop()
            page = await fetch_page(url)  # some function that downloads a page
            results[url] = page

    # wait until all concurrent workers have finished
    await fetch_urls()
    print(results)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(amain(loop))
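
The quick example leaves fetch_page undefined. A minimal stand-in that only simulates network latency lets the snippet run as-is; a real implementation would use an HTTP client such as aiohttp:

import asyncio


async def fetch_page(url):
    """Hypothetical downloader: sleeps instead of doing real network I/O."""
    await asyncio.sleep(0.1)
    return '<html>content of {}</html>'.format(url)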

Concurrently supports several different concurrency engines.

Engines

AsyncIOEngine

The default engine; runs code concurrently as asyncio coroutines:

from concurrently import concurrently, AsyncIOEngine

...
@concurrently(2, engine=AsyncIOEngine, loop=loop)  # loop is optional
async def fetch_urls():
    ...

await fetch_urls()

AsyncIOThreadEngine

Runs code concurrently in system threads via an asyncio executor:

from concurrently import concurrently, AsyncIOThreadEngine

...
@concurrently(2, engine=AsyncIOThreadEngine)
def fetch_urls():  # not async def
    ...

await fetch_urls()
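
For example, a self-contained sketch that drives blocking code from a coroutine (the download helper and URL list are illustrative, not part of the library):

import asyncio
import time

from concurrently import concurrently, AsyncIOThreadEngine


def download(url):
    """Hypothetical blocking helper: sleeps instead of doing real I/O."""
    time.sleep(0.1)
    return 'content of {}'.format(url)


async def amain():
    urls = ['http://test/page_1', 'http://test/page_2', 'http://test/page_3']
    results = {}

    # plain (blocking) function, started immediately in 2 threads
    @concurrently(2, engine=AsyncIOThreadEngine)
    def fetch_urls():
        while True:
            try:
                url = urls.pop()  # both threads pull from the shared list
            except IndexError:
                break
            results[url] = download(url)

    await fetch_urls()  # still awaited: the threads are driven by the event loop
    print(results)


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(amain())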

ThreadEngine

Runs code concurrently in plain system threads:

from concurrently import concurrently, ThreadEngine

...
@concurrently(2, engine=ThreadEngine)
def fetch_urls():  # not async def
    ...

fetch_urls()  # not await
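
A complete sketch with no event loop at all (the task list and worker body are illustrative):

import time

from concurrently import concurrently, ThreadEngine

tasks = list(range(10))
processed = []

# starts 2 worker threads immediately
@concurrently(2, engine=ThreadEngine)
def work():
    while True:
        try:
            item = tasks.pop()  # both threads pull from the shared list
        except IndexError:
            break
        time.sleep(0.1)  # stand-in for blocking work
        processed.append(item)

work()  # blocks until both threads have finished
print(sorted(processed))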

ThreadPoolEngine

Runs code concurrently in system threads using concurrent.futures.ThreadPoolExecutor:

from concurrently import concurrently, ThreadPoolEngine

...
@concurrently(2, engine=ThreadPoolEngine)
def fetch_urls():
    ...

fetch_urls()

Note: with this engine, stop() does not work correctly.

ProcessEngine

Runs code concurrently in system processes:

from concurrently import concurrently, ProcessEngine

...
@concurrently(2, engine=ProcessEngine)
def fetch_urls():
    ...

fetch_urls()
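
Keep in mind that each worker runs in a separate process, so plain in-memory objects such as the results dict from the quick example are not shared between workers. A sketch that distributes work and collects results through multiprocessing queues (the queue setup is illustrative and assumes the default fork start method on Unix):

import multiprocessing
from queue import Empty

from concurrently import concurrently, ProcessEngine

task_queue = multiprocessing.Queue()
result_queue = multiprocessing.Queue()

for number in range(10):
    task_queue.put(number)


# starts 2 worker processes immediately
@concurrently(2, engine=ProcessEngine)
def crunch():
    while True:
        try:
            number = task_queue.get(timeout=1)
        except Empty:
            break
        result_queue.put((number, number * number))  # stand-in for CPU-bound work


crunch()  # blocks until both processes have exited

results = dict(result_queue.get() for _ in range(10))
print(results)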
