hypercache

Explicit, persistent caching for expensive Python functions and methods.

What it does

  • Caches expensive calls (API calls, embeddings, LLM generations)
  • Works with sync and async methods
  • Persists across restarts (disk, extensible to Redis)
  • Normalizes non-hashable inputs (dicts, Pydantic models, dataclasses, bytes)
  • Supports TTL, stale windows, and background refresh

Why not functools.lru_cache or cachetools

                        lru_cache   cachetools   hypercache
Async support           No          No           Yes
Persistent storage      No          No           Yes
Non-hashable inputs     No          No           Yes (normalized)
Instance state in key   N/A         Manual       Yes (config=)
TTL / stale / refresh   No          TTL only     Yes
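
For example, functools.lru_cache hashes its arguments, so it fails outright on the dict and model inputs that are common in API payloads (a minimal standard-library illustration):

from functools import lru_cache

@lru_cache(maxsize=128)
def score(params: dict) -> float:
    ...

score({"model": "gpt-4o"})   # TypeError: unhashable type: 'dict'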

Install

pip install hypercache

Basic usage

from datetime import timedelta
from hypercache import CachePolicy, CacheService, MemoryStore, cached


def _embedder_config(self) -> dict:
    return {"model": self.model, "dimensions": self.dimensions}


class Embedder:
    def __init__(self, model: str = "text-embedding-3-large", dimensions: int = 1536):
        self._cache = CacheService(MemoryStore(max_entries=512))
        self.model = model
        self.dimensions = dimensions

    @cached(
        version="embed:v1",
        policy=CachePolicy(
            ttl=timedelta(hours=6),
            stale=timedelta(minutes=30),
            refresh_in_background=True,
        ),
        config=_embedder_config,
    )
    async def embed(self, text: str) -> dict:
        # call_embedding_api stands in for your real embedding client
        return await call_embedding_api(text)

  • Inputs are auto-captured from the function signature. No duplicate parameter lists.
  • config= explicitly declares which instance state affects the cache key. No hidden method lookups.
  • version= lets you invalidate all cached values when the implementation changes.
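
A sketch of calling the decorated method from an async context (call_embedding_api is assumed to exist, as above):

embedder = Embedder()
vec = await embedder.embed("hello")   # computed via call_embedding_api, then cached
vec = await embedder.embed("hello")   # served from the cache until the 6-hour TTL lapses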

Sharing config across methods

Define the config function once and reference it from multiple decorators:

def _llm_config(self) -> dict:
    return {"model": self.model, "temperature": self.temperature}


class LLM:
    def __init__(self, model: str, temperature: float):
        self._cache = CacheService(MemoryStore())
        self.model = model
        self.temperature = temperature

    @cached(version="generate:v1", policy=CachePolicy(), config=_llm_config)
    async def generate(self, prompt: str) -> dict:
        ...

    @cached(version="structured:v1", policy=CachePolicy(), config=_llm_config)
    async def generate_structured(self, prompt: str, schema: dict) -> dict:
        ...
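
Because _llm_config feeds the cache key, changing the declared instance state yields a different key; a sketch of the effect:

llm = LLM(model="gpt-4o", temperature=0.0)
await llm.generate("Summarize this")   # cached under temperature=0.0
llm.temperature = 0.7
await llm.generate("Summarize this")   # new config value, so recomputed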

Excluding inputs from the key

Use exclude= to drop arguments that shouldn't affect caching:

@cached(
    version="embed:v1",
    policy=CachePolicy(),
    config=_embedder_config,
    exclude=frozenset({"request_id", "trace_id"}),
)
async def embed(self, text: str, request_id: str | None = None, trace_id: str | None = None):
    ...
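
Calls that differ only in excluded arguments share a single entry; for example:

await embedder.embed("hello", request_id="req-1")   # computes and caches
await embedder.embed("hello", request_id="req-2")   # cache hit: same key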

Persistent cache

Swap the store — everything else stays the same:

from pathlib import Path
from hypercache import DiskCacheStore

cache = CacheService(DiskCacheStore(Path("./cache")))
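
As in the earlier examples, the service can live on the instance, so making Embedder persistent is a one-line change (a sketch, assuming the same constructor shape as above):

class Embedder:
    def __init__(self, model: str = "text-embedding-3-large", dimensions: int = 1536):
        # DiskCacheStore keeps entries across process restarts
        self._cache = CacheService(DiskCacheStore(Path("./cache")))
        self.model = model
        self.dimensions = dimensions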

Direct usage (no decorator)

result = cache.run(
    instance=embedder,
    operation="embed",
    version="embed:v1",
    inputs={"text": "hello"},
    config={"model": embedder.model},
    policy=CachePolicy(),
    compute=lambda: embedder.embed_uncached("hello"),
)

Invalidation

Embedder.embed.key_for(embedder, "hello")      # inspect key
Embedder.embed.invalidate(embedder, "hello")   # delete one entry
Embedder.embed.clear(embedder)                 # delete all entries for this method
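
And per the version= note above, bumping the version string invalidates every previously cached value for the method at once:

@cached(version="embed:v2", policy=CachePolicy(), config=_embedder_config)  # was embed:v1
async def embed(self, text: str) -> dict:
    ...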

Design principles

  • No magic: no hidden method lookups, no Protocols that silently match by name
  • Explicit: config= in the decorator, not a convention on the class
  • DRY: inputs auto-captured from signature, no duplicate parameter lists
  • IDE-friendly: named functions, not lambdas; errors surface at import time

Download files

Download the file for your platform.

Source Distribution

hypercache-0.2.1.tar.gz (9.7 kB)

Uploaded Source

Built Distribution

hypercache-0.2.1-py3-none-any.whl (12.3 kB)

Uploaded Python 3

File details

Details for the file hypercache-0.2.1.tar.gz.

File metadata

  • Download URL: hypercache-0.2.1.tar.gz
  • Upload date:
  • Size: 9.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for hypercache-0.2.1.tar.gz
Algorithm     Hash digest
SHA256        bf0c13a60556291648bee856f009776a341aa663c4ce0cf4e11da219eec77b5a
MD5           7bed96526c684d0a034095dd2b808e69
BLAKE2b-256   185af01e8a22e9bf388c1c6e6f3fd5aa864fbc79fbe6625e49f61a37c23a0dbb

File details

Details for the file hypercache-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: hypercache-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 12.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for hypercache-0.2.1-py3-none-any.whl
Algorithm     Hash digest
SHA256        c20686aabcd876db485c8c5292c839fba2e4813ea9705c8a7a67fa9211231a38
MD5           f5cfe5edd9060db38103350a08bcc728
BLAKE2b-256   d6d06b0613117712c120e4870166ade847a175d5f31f40bb42af65d126afcbdf
