FastAPI Intelligent Cache
A production-ready, intelligent caching library for FastAPI applications with automatic invalidation, Redis support, and hierarchical cache management.
Features
- Zero-Config Caching: Simple @cache_config decorator
- Intelligent Invalidation: Automatic hierarchical cache clearing on writes
- Non-Blocking: Redis failures don't crash your app
- Admin Dashboard: Built-in cache management API
- Deterministic Keys: Query parameter order doesn't matter
- High Performance: <5ms cache hit latency
- Flexible Backends: Redis, in-memory, or custom backends
- Metrics: Built-in hit/miss rate tracking
- Production-Ready: Comprehensive error handling and logging
Installation
pip install fastapi-intelligent-cache
For Redis support:
pip install fastapi-intelligent-cache[redis]
Quick Start
from fastapi import FastAPI
from fastapi_intelligent_cache import CacheManager, cache_config
from fastapi_intelligent_cache.backends import RedisBackend
app = FastAPI()
# Initialize cache
cache_manager = CacheManager(
backend=RedisBackend(url="redis://localhost:6379"),
default_ttl=60,
include_admin_routes=True,
max_response_size=10 * 1024 * 1024, # 10MB limit (default)
)
cache_manager.init_app(app)
# Cache a GET endpoint
@app.get("/items")
@cache_config(ttl_seconds=3600) # Cache for 1 hour
async def list_items(page: int = 1, limit: int = 10):
# Expensive operation here
return {"items": [...]}
# Automatically invalidates related caches
@app.post("/items")
async def create_item(item: dict):
# After creation, GET /items cache is automatically cleared
return {"item": item}
How It Works
1. Declarative Caching
Use the @cache_config decorator to cache any GET endpoint:
@app.get("/spaces/{space_id}")
@cache_config(ttl_seconds=86400) # 24 hours
async def get_space(space_id: str):
return {"space": fetch_space(space_id)}
2. Automatic Invalidation
Write operations (POST, PUT, PATCH, DELETE) automatically clear related caches:
# This endpoint is cached
@app.get("/spaces")
@cache_config(ttl_seconds=3600)
async def list_spaces():
return {"spaces": [...]}
# This automatically clears the list cache above
@app.post("/spaces")
async def create_space(space: dict):
return {"space": space}
3. Hierarchical Invalidation
Nested resources are intelligently invalidated:
# PATCH /spaces/123/items/456
# Automatically clears:
# - GET:spaces:123:items:456* (current resource)
# - GET:spaces:123:items:* (parent list)
# - GET:spaces:123:* (grandparent)
# - GET:spaces:* (base collection)
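The patterns above can be derived mechanically from the write-request path. A minimal sketch of that derivation (illustrative only, not the library's internal code):

```python
def invalidation_patterns(path: str) -> list[str]:
    """Derive hierarchical cache-key patterns for a write to `path`,
    from most specific (the resource itself) to the base collection."""
    segments = [s for s in path.strip("/").split("/") if s]
    # The resource itself: wildcard appended directly, no trailing colon.
    patterns = ["GET:" + ":".join(segments) + "*"]
    # Walk up the hierarchy, dropping one segment at a time.
    for end in range(len(segments) - 1, 0, -1):
        patterns.append("GET:" + ":".join(segments[:end]) + ":*")
    return patterns

invalidation_patterns("/spaces/123/items/456")
# → ['GET:spaces:123:items:456*', 'GET:spaces:123:items:*',
#    'GET:spaces:123:*', 'GET:spaces:*']
```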
4. Cache Key Generation
Keys are deterministic and order-independent:
# These generate the SAME cache key:
GET /items?page=1&limit=10&sort=name
GET /items?sort=name&page=1&limit=10
# Key format: METHOD:path:sorted_params
# Result: GET:items:limit=10:page=1:sort=name
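A sketch of how such a key might be assembled (the library's exact internals may differ):

```python
from urllib.parse import parse_qsl

def cache_key(method: str, path: str, query: str) -> str:
    """Build a deterministic cache key: METHOD:path:sorted_params."""
    parts = [method.upper(), path.strip("/").replace("/", ":")]
    # Sorting the query pairs makes the key order-independent.
    parts.extend(f"{k}={v}" for k, v in sorted(parse_qsl(query)))
    return ":".join(parts)

cache_key("GET", "/items", "page=1&limit=10&sort=name")
cache_key("GET", "/items", "sort=name&page=1&limit=10")
# → both: "GET:items:limit=10:page=1:sort=name"
```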
Configuration
Backend Options
Redis Backend (Recommended for Production)
from fastapi_intelligent_cache.backends import RedisBackend
backend = RedisBackend(
url="redis://localhost:6379",
db=0,
password=None,
max_connections=50,
socket_timeout=5,
key_prefix="myapp", # Namespace for keys
)
Memory Backend (Development/Testing)
from fastapi_intelligent_cache.backends import MemoryBackend
backend = MemoryBackend()
Advanced Configuration
from fastapi_intelligent_cache import CacheManager
cache_manager = CacheManager(
backend=backend,
default_ttl=60, # Default cache duration
enabled=True, # Global enable/disable
include_admin_routes=True, # Mount admin API
max_response_size=10 * 1024 * 1024, # 10MB limit (default)
)
[!IMPORTANT] Memory Safety: If a response exceeds max_response_size, the library automatically aborts caching and streams the response to the client to prevent memory exhaustion. Such large responses are never stored in the cache.
Admin API
When include_admin_routes=True, you get these endpoints:
GET /api/cache/keys # List cache keys (paginated)
GET /api/cache/keys?pattern=... # List keys matching pattern (paginated)
DELETE /api/cache/keys/{key} # Delete specific key
POST /api/cache/clear # Clear all cache
POST /api/cache/clear?pattern=... # Clear by pattern
GET /api/cache/stats # Get hit/miss statistics
GET /api/cache/health # Health check
GET /api/cache/keys returns:
{
"keys": [{"key": "GET:items:page=1", "ttl": 42}],
"cursor": 0 // use cursor for pagination; 0 means done
}
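Since a returned cursor of 0 signals the end, a client can drain the full listing in a loop. A hypothetical client sketch, with a stubbed fetch function standing in for the HTTP call to GET /api/cache/keys (the cursor request parameter is assumed here):

```python
def fetch_all_keys(fetch_page) -> list[dict]:
    """Drain a cursor-paginated key listing; cursor 0 means done."""
    keys: list[dict] = []
    cursor = None  # None = first request; the API's cursor 0 means "finished"
    while cursor != 0:
        page = fetch_page(cursor or 0)
        keys.extend(page["keys"])
        cursor = page["cursor"]
    return keys

# Stubbed responses standing in for real GET /api/cache/keys calls:
pages = {
    0: {"keys": [{"key": "GET:items:page=1", "ttl": 42}], "cursor": 7},
    7: {"keys": [{"key": "GET:items:page=2", "ttl": 40}], "cursor": 0},
}
all_keys = fetch_all_keys(lambda c: pages[c])  # two keys over two requests
```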
Example Usage
# List all keys
curl http://localhost:8000/api/cache/keys
# Clear all space-related caches
curl -X POST "http://localhost:8000/api/cache/clear?pattern=GET:spaces:*"
# View cache statistics
curl http://localhost:8000/api/cache/stats
# {"hits": 1234, "misses": 56, "hit_rate": 0.957}
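The reported hit_rate is simply hits / (hits + misses); the figures above check out:

```python
hits, misses = 1234, 56
hit_rate = round(hits / (hits + misses), 3)
# 1234 / 1290 ≈ 0.957, matching the stats payload above
```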
Advanced Usage
Manual Cache Control
from fastapi import Depends
from fastapi_intelligent_cache import get_cache_service, CacheService
@app.post("/admin/warm-cache")
async def warm_cache(cache: CacheService = Depends(get_cache_service)):
# Manually set cache
await cache.set("custom:key", {"data": "value"}, ttl=300)
return {"status": "cache warmed"}
@app.post("/admin/invalidate/{resource}")
async def invalidate_resource(
resource: str,
cache: CacheService = Depends(get_cache_service)
):
# Clear specific pattern
count = await cache.clear_pattern(f"GET:{resource}:*")
return {"cleared": count}
Custom Cache Keys (vary_by)
@app.get("/user/profile")
@cache_config(ttl_seconds=300, vary_by=["user_id"])
async def get_profile(request: Request):
# `request.state.user_id` must be set by your auth middleware
# The generated cache key will include `user_id=...` so different users
# get separate cached responses.
return {"profile": {...}}
Conditional Caching
# Bypass cache with header
curl -H "Cache-Control: no-cache" http://localhost:8000/items
Response Headers
Every response includes cache status:
X-Cache: HIT # Served from cache
X-Cache: MISS # Generated fresh
X-Cache: SKIP # Explicitly bypassed (no-cache header or size limit)
Caching Constraints (read this)
- Only GET requests are cached.
- Cache-Control: no-cache forces a bypass.
- Bodies larger than the configured max_response_size (default 10MB) are not cached.
- Streaming responses are not cached; bodies are buffered up to the size cap.
- Cached headers are replayed as-is; avoid caching responses that set cookies or per-user headers unless you strip them first.
See docs/CACHING_LIMITS.md and docs/SECURITY.md for deeper guidance.
Architecture
┌──────────────────────────────────────────┐
│  @cache_config Decorator (metadata)      │
└──────────────────────────────────────────┘
                    ↓
┌──────────────────────────────────────────┐
│  Middleware (read/write interception)    │
│  - CacheMiddleware                       │
│  - InvalidationMiddleware                │
└──────────────────────────────────────────┘
                    ↓
┌──────────────────────────────────────────┐
│  CacheService (business logic)           │
└──────────────────────────────────────────┘
                    ↓
┌──────────────────────────────────────────┐
│  Backend (Redis/Memory)                  │
└──────────────────────────────────────────┘
Performance
- Cache Hit Latency: <5ms (p99)
- Cache Miss Overhead: <10ms (p99)
- Invalidation: <50ms for 1000 keys (p99)
- Memory Overhead: <100MB for 10K cached responses
Testing
# Install dev dependencies
poetry install
# Run tests
pytest
# Run with coverage
pytest --cov=fastapi_intelligent_cache --cov-report=html
# Type checking
mypy src/
# Linting
ruff check src/
black --check src/
Examples
See the examples/ directory for complete working examples:
- basic_usage.py - Simple cache setup
- with_redis.py - Redis backend configuration
- with_auth.py - Secured admin routes
- vary_by_user.py - Per-user caching with vary_by
- custom_backend.py - Custom backend implementation
Best Practices
1. Choose Appropriate TTLs
# Frequently changing data
@cache_config(ttl_seconds=60) # 1 minute
# Stable data
@cache_config(ttl_seconds=3600) # 1 hour
# Nearly static data
@cache_config(ttl_seconds=86400) # 24 hours
2. Use Key Prefixes for Namespacing
backend = RedisBackend(
url="redis://localhost:6379",
key_prefix="myapp" # All keys: myapp:GET:...
)
3. Monitor Cache Performance
@app.get("/metrics")
async def metrics(cache: CacheService = Depends(get_cache_service)):
stats = cache.get_stats()
# Monitor hit_rate - aim for >80%
return stats
4. Handle Cache Failures Gracefully
The library automatically handles Redis failures - your app continues without caching. Monitor logs for connection errors.
5. Secure Admin Routes
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer
security = HTTPBearer()
async def verify_admin(token = Depends(security)):
if not is_valid_admin(token):
raise HTTPException(403)
return token
from fastapi import Depends
# Protect admin routes with a single dependency applied to all endpoints
cache_manager = CacheManager(
backend=backend,
include_admin_routes=True,
admin_auth_dependency=Depends(verify_admin), # or admin_dependencies=[Depends(verify_admin)]
)
# NOTE: By default, admin routes have no authentication. You *must* provide
# appropriate dependencies if these endpoints are exposed in production.
Troubleshooting
Cache Not Working
- Check if caching is enabled globally
- Verify the @cache_config decorator is present
- Ensure the request is a GET (POST/PUT/etc. are not cached)
- Check for Cache-Control: no-cache headers
Cache Not Invalidating
- Verify write operation returns 2xx status
- Check logs for invalidation patterns
- Ensure path structure follows REST conventions
Redis Connection Issues
- Check Redis URL and credentials
- Verify network connectivity
- Check Redis server logs
- App will continue without caching on Redis failure
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Development Setup
# Clone repository
git clone https://github.com/your-org/fastapi-intelligent-cache.git
cd fastapi-intelligent-cache
# Install with dev dependencies
poetry install
# Setup pre-commit hooks
pre-commit install
# Run tests
pytest
# Format code
black src/ tests/
ruff check --fix src/ tests/
License
MIT License - see LICENSE file for details.
Credits
Inspired by the caching implementation in the Cloud11 Platform.
Support
- Documentation: https://fastapi-intelligent-cache.readthedocs.io
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Changelog
See CHANGELOG.md for version history.
Made with ❤️ by the Cloud11 Team
File details
Details for the file fastapi_intelligent_cache-0.1.2.tar.gz.
File metadata
- Download URL: fastapi_intelligent_cache-0.1.2.tar.gz
- Upload date:
- Size: 22.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.11.14 Linux/6.11.0-1018-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dbf040f4d12642aae5c18dbda157141bf26a4d553b2abbf2936d0050fe2a4bd5 |
| MD5 | b090a35ce482b28c490a5a421ce06aa4 |
| BLAKE2b-256 | 33f4923cdf04d4a2ae063c4d2bd1b87c3c6ae624ad18ee5971e8e29d371feb3c |
File details
Details for the file fastapi_intelligent_cache-0.1.2-py3-none-any.whl.
File metadata
- Download URL: fastapi_intelligent_cache-0.1.2-py3-none-any.whl
- Upload date:
- Size: 25.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.11.14 Linux/6.11.0-1018-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 52534249f8fbcb802f8cf351330443c5f08e4751e9d282ffe9792ad21f6524d9 |
| MD5 | 0d456d7e94350b94c18505dbdfb6377e |
| BLAKE2b-256 | e15feda1242c059a110dd91f7ca93013a9f6c28627a1ea793814a1fdba7a7760 |