
Simple LRU cache for asyncio


Installation

pip install async-lru

Usage

This package is a port of Python's built-in functools.lru_cache for asyncio. To better handle async behaviour, it also ensures that multiple concurrent calls with the same arguments result in a single call to the wrapped coroutine, with every awaiting caller receiving the result of that call once it completes.

import asyncio

import aiohttp
from async_lru import alru_cache


@alru_cache(maxsize=32)
async def get_pep(num):
    resource = 'https://www.python.org/dev/peps/pep-%04d/' % num
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                return await s.read()
        except aiohttp.ClientError:
            return b'Not Found'  # bytes, matching the success path


async def main():
    for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
        pep = await get_pep(n)
        print(n, len(pep))

    print(get_pep.cache_info())
    # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

    # closing is optional, but highly recommended
    await get_pep.cache_close()


asyncio.run(main())

TTL (time-to-live in seconds; entries expire once the timeout elapses) is supported via the ttl parameter (off by default):

@alru_cache(ttl=5)
async def func(arg):
    return arg * 2

To prevent thundering herd issues when many cache entries expire simultaneously, you can add jitter to randomize the TTL for each entry:

@alru_cache(ttl=3600, jitter=1800)
async def func(arg):
    return arg * 2

With ttl=3600, jitter=1800, each cache entry will have a random TTL between 3600 and 5400 seconds, spreading out invalidations over time.

The library supports explicit invalidation of a specific function call via cache_invalidate():

@alru_cache(ttl=5)
async def func(arg1, arg2):
    return arg1 + arg2

func.cache_invalidate(1, arg2=2)

The method returns True if the corresponding set of arguments was cached, False otherwise.

To check whether a specific set of arguments is present in the cache without affecting hit/miss counters or LRU ordering, use cache_contains():

@alru_cache(maxsize=32)
async def func(arg1, arg2):
    return arg1 + arg2

await func(1, arg2=2)

func.cache_contains(1, arg2=2)  # True
func.cache_contains(3, arg2=4)  # False

The method returns True if the result for the given arguments is cached, False otherwise.

Limitations

Event Loop Affinity: alru_cache enforces that a cache instance is used with only one event loop. If you attempt to use a cached function from a different event loop than where it was first called, a RuntimeError will be raised:

RuntimeError: alru_cache is not safe to use across event loops: this cache
instance was first used with a different event loop.
Use separate cache instances per event loop.

For typical asyncio applications using a single event loop, this is automatic and requires no configuration. If your application uses multiple event loops, create separate cache instances per loop:

import threading

_local = threading.local()

def get_cached_fetcher():
    if not hasattr(_local, 'fetcher'):
        @alru_cache(maxsize=100)
        async def fetch_data(key):
            ...
        _local.fetcher = fetch_data
    return _local.fetcher

You can also reuse the logic of an already decorated function in a new loop by accessing __wrapped__:

@alru_cache(maxsize=32)
async def my_task(x):
    ...

# In Loop 1:
# my_task() uses the default global cache instance

# In Loop 2 (or a new thread):
# Create a fresh cache instance for the same logic
cached_task_loop2 = alru_cache(maxsize=32)(my_task.__wrapped__)
await cached_task_loop2(x)

Benchmarks

async-lru uses CodSpeed for performance regression testing.

To run the benchmarks locally:

pip install -r requirements-dev.txt
pytest --codspeed benchmark.py

The benchmark suite covers both bounded (with maxsize) and unbounded (no maxsize) cache configurations. Scenarios include:

  • Cache hit

  • Cache miss

  • Cache fill/eviction (cycling through more keys than maxsize)

  • Cache clear

  • TTL expiry

  • Cache invalidation

  • Cache info retrieval

  • Concurrent cache hits

  • Baseline (uncached async function)

On CI, benchmarks are run automatically via GitHub Actions on Python 3.13, and results are uploaded to CodSpeed (if a CODSPEED_TOKEN is configured). You can view performance history and detect regressions on the CodSpeed dashboard.

Thanks

The library was donated by Ocean S.A.

Thanks to the company for its contribution.
