Memoize concurrent asyncio Python function calls
Memoize concurrent asyncio Python coroutine calls. This offers short-lived memoization: for any given set of arguments, the cache lasts only for the length of a single call.
```shell
pip install aiomemoizeconcurrent
```
For a coroutine whose arguments are hashable, you can create a memoized version by passing it to memoize_concurrent. Any concurrent calls to this memoized version that have the same arguments will result in only a single run of the original coroutine.
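To make this behaviour concrete, below is a minimal sketch of how this kind of concurrent memoization can be implemented. It is illustrative only, not the library's actual code, and assumes hashable positional arguments:

```python
import asyncio

def memoize_concurrent_sketch(coro_func):
    # Hypothetical minimal re-implementation for illustration only:
    # concurrent calls with equal, hashable arguments share a single
    # in-flight task.
    in_flight = {}

    async def wrapper(*args):
        try:
            task = in_flight[args]
        except KeyError:
            task = asyncio.ensure_future(coro_func(*args))
            in_flight[args] = task
            # Forget the task as soon as it finishes, so memoization
            # lasts only for the duration of a single call
            task.add_done_callback(lambda _, a=args: in_flight.pop(a, None))
        # shield so that cancelling one caller doesn't cancel the
        # shared task out from under the other callers
        return await asyncio.shield(task)

    return wrapper
```

The real library handles more edge cases (such as cancellation of all callers), but the core idea is the same: a dictionary from arguments to the currently in-flight task.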
For example, creating 3 concurrent invocations of a coroutine where 2 of them have identical arguments
```python
import asyncio
from aiomemoizeconcurrent import memoize_concurrent

async def main():
    memoized_coro = memoize_concurrent(coro)
    results = await asyncio.gather(*[
        memoized_coro('a'),
        memoized_coro('a'),
        memoized_coro('b'),
    ])
    print(results)
    await memoized_coro('a')

async def coro(value):
    print('Inside coro', value)
    await asyncio.sleep(1)
    return value

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
```
will run coro only twice for the three gathered calls, as shown by the output
```
Inside coro a
Inside coro b
['a', 'a', 'b']
```
This can be used to memoize a function making calls to an API, especially if:

- you expect many concurrent calls;
- identical concurrent calls are idempotent;
- enough of the concurrent calls are identical to justify such a caching layer.
It can also be used to avoid concurrency edge cases/race conditions when multiple tasks access shared resources. For example, multiple tasks may need to dynamically create shared UDP sockets. To ensure that this dynamic creation isn't performed by multiple tasks at the same time for the same address, the creating function can be wrapped with memoize_concurrent.
memoize_concurrent works with both coroutines and functions that return a future.