
A faster fork of async-lru: Simple LRU cache for asyncio. Written in Python and compiled to C using mypyc.

Project description

Simple LRU cache for asyncio


Installation

pip install async-lru

Usage

This package is a port of Python's built-in functools.lru_cache for asyncio. To better handle async behaviour, it also ensures that multiple concurrent calls with the same arguments result in only one call to the wrapped function, with every awaiting caller receiving the result of that call once it completes.

import asyncio

import aiohttp
from faster_async_lru import alru_cache


@alru_cache(maxsize=32)
async def get_pep(num):
    resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                return await s.read()
        except aiohttp.ClientError:
            return 'Not Found'


async def main():
    for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
        pep = await get_pep(n)
        print(n, len(pep))

    print(get_pep.cache_info())
    # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

    # closing is optional, but highly recommended
    await get_pep.cache_close()


asyncio.run(main())
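The call-deduplication behaviour described above can be sketched with a minimal stdlib-only decorator. This is an illustration of the idea, not the library's actual implementation: concurrent calls with the same arguments share a single in-flight task.

```python
import asyncio

def dedup(fn):
    """Share one in-flight task per argument tuple (sketch, positional args only)."""
    inflight = {}

    async def wrapper(*args):
        if args not in inflight:
            # First caller creates the task; later callers await the same one.
            inflight[args] = asyncio.ensure_future(fn(*args))
        return await inflight[args]

    return wrapper

calls = 0

@dedup
async def slow_double(x):
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # stand-in for real I/O
    return x * 2

async def main():
    # Five concurrent awaits, but the wrapped function runs only once.
    return await asyncio.gather(*(slow_double(21) for _ in range(5)))

results = asyncio.run(main())
print(calls, results)  # 1 [42, 42, 42, 42, 42]
```

The key point is that the check-and-store happens before any `await`, so within a single event loop it cannot race: only the first caller creates the task.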

TTL (time-to-live, in seconds; entries expire on timeout) is supported via the ttl configuration parameter (off by default):

@alru_cache(ttl=5)
async def func(arg):
    return arg * 2
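A hedged sketch of what TTL expiry means in practice: a cached entry is served while it is younger than ttl seconds, and recomputed afterwards. Again, this is an illustration with a plain dict, not the library's code.

```python
import asyncio
import time

def ttl_cache(ttl):
    """Tiny TTL cache sketch: key -> (timestamp, value), positional args only."""
    def deco(fn):
        cache = {}

        async def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[0] < ttl:
                return hit[1]           # entry still fresh: serve from cache
            value = await fn(*args)
            cache[args] = (now, value)  # store with its creation time
            return value

        return wrapper
    return deco

calls = 0

@ttl_cache(ttl=0.05)
async def func(arg):
    global calls
    calls += 1
    return arg * 2

async def demo():
    a = await func(3)            # miss: computed
    b = await func(3)            # hit: served from cache
    await asyncio.sleep(0.06)    # let the entry age past its TTL
    c = await func(3)            # miss again: recomputed
    return a, b, c

results = asyncio.run(demo())
print(results, calls)  # (6, 6, 6) 2
```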

The library supports explicit invalidation of a specific function call via cache_invalidate():

@alru_cache(ttl=5)
async def func(arg1, arg2):
    return arg1 + arg2

func.cache_invalidate(1, arg2=2)

The method returns True if the corresponding argument set was already cached, False otherwise.
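The True/False contract can be sketched as follows. This is an illustration only (positional keys, no kwargs handling); the real decorator is faster_async_lru.alru_cache.

```python
import asyncio

def simple_cache(fn):
    """Sketch of a cache whose invalidate reports whether the key existed."""
    cache = {}

    async def wrapper(*args):
        if args not in cache:
            cache[args] = await fn(*args)
        return cache[args]

    def cache_invalidate(*args):
        # pop returns the sentinel (None) when the key is absent
        return cache.pop(args, None) is not None

    wrapper.cache_invalidate = cache_invalidate
    return wrapper

@simple_cache
async def add(arg1, arg2):
    return arg1 + arg2

asyncio.run(add(1, 2))
was_cached = add.cache_invalidate(1, 2)   # entry existed: True
not_cached = add.cache_invalidate(7, 8)   # never cached: False
print(was_cached, not_cached)  # True False
```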

Benchmarks

async-lru uses CodSpeed for performance regression testing.

To run the benchmarks locally:

pip install -r requirements-dev.txt
pytest --codspeed benchmark.py

The benchmark suite covers both bounded (with maxsize) and unbounded (no maxsize) cache configurations. Scenarios include:

  • Cache hit

  • Cache miss

  • Cache fill/eviction (cycling through more keys than maxsize)

  • Cache clear

  • TTL expiry

  • Cache invalidation

  • Cache info retrieval

  • Concurrent cache hits

  • Baseline (uncached async function)
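Two of the scenarios above, cache hit versus uncached baseline, can be illustrated with a rough self-contained timing comparison. The real suite relies on pytest-codspeed instrumentation rather than wall-clock timing; this sketch uses a plain dict cache to show why the two scenarios diverge.

```python
import asyncio
import time

async def slow(x):
    await asyncio.sleep(0.01)  # stand-in for real async work
    return x * 2

cache = {}

async def cached_slow(x):
    if x not in cache:
        cache[x] = await slow(x)
    return cache[x]

async def bench():
    t0 = time.perf_counter()
    for _ in range(20):
        await slow(1)            # baseline: pays the sleep every iteration
    baseline = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(20):
        await cached_slow(1)     # one miss, then nineteen hits
    cached = time.perf_counter() - t0
    return baseline, cached

baseline, cached = asyncio.run(bench())
print(cached < baseline)  # True: hits skip the awaited work entirely
```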

On CI, benchmarks are run automatically via GitHub Actions on Python 3.13, and results are uploaded to CodSpeed (if a CODSPEED_TOKEN is configured). You can view performance history and detect regressions on the CodSpeed dashboard.

Thanks

The library was donated by Ocean S.A.

Thanks to the company for its contribution.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

faster-async-lru-2.0.5.tar.gz (12.2 kB, source)

File details

Details for the file faster-async-lru-2.0.5.tar.gz.

File metadata

  • Download URL: faster-async-lru-2.0.5.tar.gz
  • Upload date:
  • Size: 12.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.12.1.2 requests/2.28.1 setuptools/75.3.2 requests-toolbelt/1.0.0 tqdm/4.64.1 CPython/3.8.10

File hashes

Hashes for faster-async-lru-2.0.5.tar.gz

  • SHA256: 951c718c2735066ba53b634b3b2d4c76d451189b662629e243b24dce688b0661

  • MD5: b037dba8e5d00e4474a8f302e4a16bf0

  • BLAKE2b-256: 9a22f09a5343674ee1f3c382eb43167d9d62e8e60f5effce9619ef492dbda0d9

