
⚡ CacheStack

Modern · Async-First · Multi-Backend Python Caching

MIT License · Python 3.11+ · BlackDuck Clean · Zero Required Dependencies

CacheStack is a modern Python caching library with a unified API across Memory, File, Redis, PostgreSQL, and Memcached backends. Built async-first, designed for production, and clean for enterprise security scans (BlackDuck, FOSSA, Snyk).


✨ Features

  • Unified API — same interface across all backends
  • Async-native — built for asyncio from the ground up
  • Tiered caching — L1 memory → L2 file/Redis → L3 Postgres with automatic backfill
  • Stampede protection — per-key async locks prevent thundering herd
  • Flexible decorators — @cached and @invalidate for sync and async functions
  • Pluggable serializers — Pickle (default), JSON, Msgpack — supported across all backends
  • File cache — zero-dependency persistent cache, BlackDuck safe, atomic writes, Windows-safe
  • BlackDuck-friendly — MIT license, clean SPDX metadata, no ambiguous dependencies
  • Observability — per-backend and per-layer hit/miss/error stats
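The stampede-protection idea — a per-key lock so that only one concurrent caller computes a missing value while the rest wait — can be sketched in plain asyncio. This is a conceptual illustration of the technique, not CacheStack's internal API; all names below are made up for the example:

```python
import asyncio

_locks: dict[str, asyncio.Lock] = {}
_cache: dict[str, str] = {}
calls = 0  # how many times the slow loader actually ran

async def load(key: str) -> str:
    """Read through the cache; at most one coroutine computes per key."""
    if key in _cache:
        return _cache[key]
    lock = _locks.setdefault(key, asyncio.Lock())
    async with lock:
        if key in _cache:  # double-check after acquiring the lock
            return _cache[key]
        global calls
        calls += 1
        await asyncio.sleep(0.01)  # simulate a slow database query
        _cache[key] = f"value-for-{key}"
        return _cache[key]

async def main() -> list[str]:
    # 50 concurrent readers of one missing key trigger a single load
    return await asyncio.gather(*(load("user:1") for _ in range(50)))

results = asyncio.run(main())
```

Without the lock, all 50 readers would miss simultaneously and hammer the backend — the thundering herd the bullet above refers to.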

📦 Installation

Core (no dependencies)

pip install cachestack

With backends

pip install cachestack[memory]      # In-memory (LRU/LFU/TTL via cachetools)
pip install cachestack[redis]       # Redis backend
pip install cachestack[postgres]    # PostgreSQL via asyncpg
pip install cachestack[memcached]   # Memcached backend
pip install cachestack[msgpack]     # Msgpack serializer
pip install cachestack[all]         # Everything

Note: FileBackend requires no extra install — it uses Python stdlib only.


🚀 Quick Start

Memory Cache

from cachestack import MemoryBackend, MemoryConfig

cache = MemoryBackend(MemoryConfig(policy="lru", maxsize=1024))

await cache.set("user:1", {"name": "Sarthak"}, ttl=300)
user = await cache.get("user:1")

File Cache

from cachestack import FileBackend, FileConfig

# Zero dependencies — pure Python stdlib, BlackDuck safe
# Supports any Python object via Pickle (default serializer)
cache = FileBackend(FileConfig(
    directory="./cache",
    ttl=3600,
    namespace="myapp",
))

# Cache any Python object — datetime, sets, custom classes all work
import datetime
await cache.set("ts", datetime.datetime.now())
await cache.set("report", report_data, ttl=86400)

# Proactively clean up expired files to reclaim disk space
deleted = await cache.purge_expired()

# Or enable automatic background purging every hour during writes
cache = FileBackend(FileConfig(
    directory="./cache",
    auto_purge_interval=3600,  # purge expired files every 3600s passively
))

Redis Cache

from cachestack import RedisBackend, RedisConfig

cache = RedisBackend(RedisConfig(
    dsn="redis://localhost:6379/0",
    ttl=600,
    namespace="myapp",
))

await cache.set("session:abc", {"user_id": 1})
session = await cache.get("session:abc")

PostgreSQL Cache

from cachestack import PostgresBackend, PostgresConfig

cache = PostgresBackend(PostgresConfig(
    dsn="postgresql://user:password@localhost/mydb",
    ttl=3600,
    table="cache",
))

# Table is auto-created on first use
await cache.set("report:q3", report_data, ttl=86400)
report = await cache.get("report:q3")

# Proactively purge expired entries
deleted = await cache.purge_expired()

Memcached Cache

from cachestack import MemcachedBackend, MemcachedConfig

cache = MemcachedBackend(MemcachedConfig(host="localhost", port=11211))

await cache.set("key", "value", ttl=60)
value = await cache.get("key")

🗂️ Tiered Caching

CacheStack's most powerful feature. Stack backends from fastest to slowest — reads check L1 first and automatically backfill faster layers on a miss.

from cachestack import (
    TieredCache, WriteStrategy,
    MemoryBackend, MemoryConfig,
    FileBackend, FileConfig,
    RedisBackend, RedisConfig,
    PostgresBackend, PostgresConfig,
)

# 2-layer: Memory + File (zero infrastructure required)
cache = TieredCache([
    MemoryBackend(MemoryConfig(maxsize=512)),                  # L1: RAM
    FileBackend(FileConfig(directory="./cache", ttl=3600)),   # L2: Disk
])

# 3-layer: Memory + Redis + Postgres
cache = TieredCache(
    backends=[
        MemoryBackend(MemoryConfig(maxsize=512)),            # L1: RAM (fastest)
        RedisBackend(RedisConfig(dsn="redis://localhost")),  # L2: Redis
        PostgresBackend(PostgresConfig(dsn="postgresql://localhost/db")),  # L3: Postgres
    ],
    write_strategy=WriteStrategy.WRITE_THROUGH,  # or WRITE_BACK
)

await cache.set("key", "value", ttl=300)
value = await cache.get("key")  # Checks L1 → L2 → L3, backfills on miss

Write Strategies

Strategy        Behaviour                           Best For
WRITE_THROUGH   Writes to all layers immediately    Read-heavy, consistency matters
WRITE_BACK      Writes to L1 only                   Write-heavy, eventual consistency
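The difference between the two strategies can be shown with plain dicts standing in for the layers — a conceptual sketch, not CacheStack internals:

```python
# Two dicts stand in for a fast L1 memory layer and a slower L2 layer.
l1: dict[str, str] = {}
l2: dict[str, str] = {}

def set_write_through(key: str, value: str) -> None:
    # Every layer is updated immediately: a read from any layer is consistent.
    l1[key] = value
    l2[key] = value

def set_write_back(key: str, value: str) -> None:
    # Only the fastest layer is updated; slower layers catch up later
    # (e.g. on eviction or a periodic flush), trading consistency for speed.
    l1[key] = value

set_write_through("a", "1")
set_write_back("b", "2")
assert "a" in l1 and "a" in l2      # write-through: present everywhere
assert "b" in l1 and "b" not in l2  # write-back: L2 is stale until flushed
```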

🎨 Decorators

@cached — Cache function return values

Works on both sync and async functions.

from cachestack import cached, MemoryBackend

cache = MemoryBackend()

# Basic usage
@cached(cache=cache, ttl=60)
async def get_user(user_id: int):
    return await db.fetch(user_id)

# Custom key builder + condition
@cached(
    cache=cache,
    ttl=300,
    key_builder=lambda fn, args, kw: f"user:{args[0]}",
    condition=lambda v: v is not None,  # only cache non-None results
)
async def get_profile(user_id: int):
    ...

# Works on sync functions too
@cached(cache=cache, ttl=60)
def compute_expensive(x: int):
    return x ** 3

@invalidate — Bust the cache on writes

from cachestack import invalidate

key_fn = lambda fn, args, kw: f"user:{args[0]}"

@cached(cache=cache, ttl=300, key_builder=key_fn)
async def get_user(user_id: int):
    ...

@invalidate(cache=cache, key_builder=key_fn)
async def update_user(user_id: int, data: dict):
    await db.update(user_id, data)  # Cache auto-invalidated after this

🔧 Serializers

All backends support pluggable serializers. FileBackend defaults to PickleSerializer so any Python object can be cached safely. Network backends (Redis, Memcached) also default to Pickle.

Serializer                   Best For                      Type Support                                           Speed
PickleSerializer (default)   Any Python object             All picklable types (datetime, set, custom classes)   Fast
JsonSerializer               Human-readable files / APIs   str, int, list, dict, bool                             Medium
MsgpackSerializer            High-throughput systems       Most primitive types                                   Fastest

from cachestack import FileBackend, FileConfig, RedisBackend, RedisConfig
from cachestack import JsonSerializer, MsgpackSerializer

# File cache with JSON — human-readable files on disk
cache = FileBackend(FileConfig(directory="./cache", serializer=JsonSerializer()))

# Redis with Msgpack — compact binary, fast for high throughput
cache = RedisBackend(RedisConfig(serializer=MsgpackSerializer()))  # needs cachestack[msgpack]

📊 Observability

Every backend exposes a .stats() method.

stats = await cache.stats()
# Memory backend:
# {
#   "backend": "memory",
#   "policy": "lru",
#   "hits": 142,
#   "misses": 8,
#   "errors": 0,
#   "size": 58,
#   "maxsize": 1024
# }

# File backend:
# {
#   "backend": "file",
#   "directory": "/abs/path/to/cache",
#   "namespace": "myapp",
#   "serializer": "PickleSerializer",
#   "hits": 98,
#   "misses": 12,
#   "errors": 0,
#   "files_on_disk": 43,
#   "writes": 110
# }

# TieredCache returns per-layer breakdown:
stats = await tiered.stats()
# {
#   "backend": "tiered",
#   "layers": 3,
#   "hits": 142,
#   "misses": 8,
#   "layer_stats": [
#     {"layer": 0, "backend": "memory", "hits": 130, "misses": 12},
#     {"layer": 1, "backend": "file",   "hits": 10,  "misses": 2},
#     {"layer": 2, "backend": "redis",  "hits": 2,   "misses": 0},
#   ]
# }
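Given the stats shape shown above, deriving a hit rate takes one line. The helper below is hypothetical (not part of CacheStack's API) and assumes only the hits/misses keys from the example output:

```python
def hit_rate(stats: dict) -> float:
    """Fraction of lookups served from cache; 0.0 when there is no traffic."""
    total = stats["hits"] + stats["misses"]
    return stats["hits"] / total if total else 0.0

# Using the memory-backend example output from above: 142 hits, 8 misses
rate = hit_rate({"backend": "memory", "hits": 142, "misses": 8})
```

The same helper works on each entry of a TieredCache `layer_stats` list, which makes it easy to spot a layer that is doing no useful work.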

🔍 Backend Comparison

Backend      Async   Persistent   Cross-Process   BlackDuck Safe   Install Extra
Memory       ✓       ✗            ✗               ✓                [memory]
File         ✓       ✓            ✓               ✓                none (stdlib only)
Redis        ✓       ✓            ✓               ✓                [redis]
PostgreSQL   ✓       ✓            ✓               ✓                [postgres]
Memcached    ✓       ✗            ✓               ✓                [memcached]

🛡️ File Cache — Production Notes

FileBackend was built to address four specific production concerns:

1. Any Python object (not just JSON). The default PickleSerializer supports datetime, set, custom classes — anything picklable. Switch to JsonSerializer only if you need human-readable files.

# ✅ All of these work out of the box
await cache.set("ts", datetime.datetime.now())
await cache.set("tags", {"python", "caching", "async"})
await cache.set("obj", my_custom_object)

2. Disk bloat prevention. Files accumulate until their TTL expires or you explicitly purge. Two options:

# Option A: call manually (e.g. via a cron job or scheduler)
deleted = await cache.purge_expired()

# Option B: passive auto-purge every N seconds during writes
cache = FileBackend(FileConfig(directory="./cache", auto_purge_interval=3600))

3. Windows concurrent write safety. All writes use an atomic temp file → os.replace() sequence to avoid PermissionError under concurrent access on Windows.

4. Windows path safety. The cache directory is resolved to an absolute path at startup — no silent failures from deeply nested relative paths.
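The atomic-write pattern from point 3 looks roughly like this in stdlib Python — a generic sketch of the technique, not FileBackend's actual code:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write to a sibling temp file, then atomically swap it into place.

    os.replace() is atomic on both POSIX and Windows, so readers never see
    a half-written file and concurrent writers avoid PermissionError.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)  # temp file on same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, path)  # the atomic step
    except BaseException:
        os.unlink(tmp_path)  # don't leak the temp file on failure
        raise

target = os.path.join(tempfile.gettempdir(), "cachestack-demo.bin")
atomic_write(target, b"payload")
```

The temp file must live in the same directory (same filesystem) as the target, otherwise os.replace() degrades to a non-atomic copy-and-delete.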


🚨 Error Handling

CacheStack wraps all backend errors in a clean exception hierarchy.

from cachestack import CacheError, BackendUnavailableError

try:
    value = await cache.get("key")
except BackendUnavailableError:
    return default_value  # Redis/Postgres is down
except CacheError as e:
    logger.warning(f"Cache error: {e}")

Use silent=True on any backend config to suppress errors and return None instead of raising:

# Never raises — returns None on any error
cache = FileBackend(FileConfig(directory="./cache", silent=True))
cache = RedisBackend(RedisConfig(dsn="redis://localhost", silent=True))

🔌 Writing a Custom Backend

Implement BaseCache and CacheStack will treat your backend like any built-in one — including full TieredCache support.

from cachestack.base import BaseCache
from typing import Any, Dict, Optional

class MyCustomBackend(BaseCache):
    async def get(self, key: str) -> Optional[Any]: ...
    async def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None: ...
    async def delete(self, key: str) -> None: ...
    async def exists(self, key: str) -> bool: ...
    async def clear(self) -> None: ...
    async def stats(self) -> Dict[str, Any]: ...

# Drop it into TieredCache like any other backend
cache = TieredCache([MemoryBackend(), MyCustomBackend()])
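A concrete minimal implementation of that interface — a TTL-aware dict backend — might look like the sketch below. The BaseCache import is omitted so the example runs standalone; DictBackend and its internals are illustrative, only the method signatures come from the skeleton above:

```python
import asyncio
import time
from typing import Any, Dict, Optional

class DictBackend:
    """Toy in-process backend implementing the BaseCache method set."""

    def __init__(self) -> None:
        self._data: Dict[str, tuple] = {}  # key -> (value, expires_at or None)
        self._hits = 0
        self._misses = 0

    async def get(self, key: str) -> Optional[Any]:
        entry = self._data.get(key)
        if entry is None:
            self._misses += 1
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazily drop expired entries on read
            self._misses += 1
            return None
        self._hits += 1
        return value

    async def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires_at)

    async def delete(self, key: str) -> None:
        self._data.pop(key, None)

    async def exists(self, key: str) -> bool:
        return await self.get(key) is not None  # note: counts as a hit/miss

    async def clear(self) -> None:
        self._data.clear()

    async def stats(self) -> Dict[str, Any]:
        return {"backend": "dict", "hits": self._hits,
                "misses": self._misses, "size": len(self._data)}

async def _demo() -> tuple:
    cache = DictBackend()
    await cache.set("k", "v", ttl=60)
    return await cache.get("k"), await cache.stats()

value, stats = asyncio.run(_demo())
```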

🧪 Running Tests

pip install pytest pytest-asyncio cachetools
pytest tests/ -v

# Expected: 36 passed
# (24 core tests + 12 file backend tests)

📤 Publishing to PyPI

pip install build twine

# Build the distribution
python -m build

# Upload to PyPI
twine upload dist/*

🤝 Contributing

Contributions are welcome!

  1. Fork the repo on GitHub
  2. Create a branch: git checkout -b feature/my-feature
  3. Add tests for your change
  4. Run pytest tests/ -v and ensure all 36 tests pass
  5. Open a Pull Request with a clear description

📄 License

MIT — see LICENSE.




Download files


Source Distribution

cachestack-0.1.0.tar.gz (20.5 kB)


Built Distribution


cachestack-0.1.0-py3-none-any.whl (24.0 kB)


File details

Details for the file cachestack-0.1.0.tar.gz.

File metadata

  • Download URL: cachestack-0.1.0.tar.gz
  • Upload date:
  • Size: 20.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for cachestack-0.1.0.tar.gz
Algorithm Hash digest
SHA256 a9533a4e689a47f0ee2ff19eb0161ac02823874569124ecd5ec25b791d9503b3
MD5 4476965ab738305a43a6b4548aef8a6f
BLAKE2b-256 d1284afd2ff255044288dacae6deb0a0e1979d82630ebd8a4970d8532400c5d8


File details

Details for the file cachestack-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: cachestack-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 24.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for cachestack-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 6b28c9ba6b5fd4ceb0d2deb14f6c00d7e28e47a8e0e6aacfdc9636efaefa6570
MD5 8334084c13af26ad7aa5dd5773f84e42
BLAKE2b-256 fe328c5761676dc1d15d7bc0e96b8627ad6ef74180ad3c1444521c8895848774

