ai-cache
Lightweight automatic caching for LLM API responses
Save time, tokens, and API costs by caching LLM responses locally. One line to enable, zero code changes needed.
Features
- ✅ One-line activation - ai_cache.enable()
- ✅ Multi-provider support - OpenAI, Anthropic, Gemini
- ✅ Local SQLite storage - All data stays on your machine
- ✅ Zero dependencies - Only Python standard library
- ✅ Cache expiration - Optional TTL support
- ✅ Cache statistics - Monitor hits and savings
Installation
```shell
pip install ai-cache
```
Quick Start
```python
import ai_cache

ai_cache.enable()

# Use any LLM API as normal - responses are automatically cached
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Second identical call returns instantly from cache
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
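Per the "How It Works" section below, the library intercepts LLM API calls so that repeat requests are served locally. The interception idea can be sketched with a plain decorator and an in-memory dict; all names here are illustrative, not the library's internals:

```python
import functools

# Illustrative sketch: wrap an API function so identical calls are served
# from a local dict instead of hitting the network again.
_cache = {}

def cached(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # Build a stable key from the call's arguments
        key = repr((args, sorted(kwargs.items())))
        if key not in _cache:
            _cache[key] = fn(*args, **kwargs)  # first call: real request
        return _cache[key]  # repeat calls: served from the cache
    return wrapper

calls = []

@cached
def fake_api(prompt):
    # Stand-in for a real provider call; records each network "hit"
    calls.append(prompt)
    return f"response to {prompt}"

fake_api("Hello!")
fake_api("Hello!")
print(len(calls))  # → 1: the second call never reached the "API"
```

In practice a library like this would apply such a wrapper to the provider clients' request functions at enable() time, which is why no application code needs to change.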
API
```python
# Enable caching
ai_cache.enable()                       # Default: ~/.ai-cache/
ai_cache.enable(cache_dir="./cache")    # Custom directory
ai_cache.enable(ttl=3600)               # With 1-hour expiration

# Manage cache
stats = ai_cache.get_stats()            # Get hit/miss statistics
ai_cache.clear()                        # Clear all cached entries
ai_cache.invalidate(provider="openai")  # Clear a specific provider
ai_cache.invalidate(model="gpt-4")      # Clear a specific model
ai_cache.disable()                      # Disable caching
```
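The combination of local SQLite storage and an optional TTL (as in enable(ttl=3600)) can be illustrated with a minimal standalone cache. This is a sketch under assumed names and schema, not the library's actual implementation:

```python
import sqlite3
import time

class SqliteTTLCache:
    """Illustrative sketch of a local SQLite store with optional
    per-entry expiration (not ai-cache's actual schema)."""

    def __init__(self, path=":memory:", ttl=None):
        self.ttl = ttl  # seconds; None means entries never expire
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache ("
            "key TEXT PRIMARY KEY, value TEXT, created_at REAL)"
        )

    def get(self, key):
        row = self.db.execute(
            "SELECT value, created_at FROM cache WHERE key = ?", (key,)
        ).fetchone()
        if row is None:
            return None  # cache miss
        value, created_at = row
        if self.ttl is not None and time.time() - created_at > self.ttl:
            self.db.execute("DELETE FROM cache WHERE key = ?", (key,))
            return None  # entry expired: treat as a miss
        return value

    def set(self, key, value):
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
            (key, value, time.time()),
        )
        self.db.commit()

cache = SqliteTTLCache(ttl=3600)
cache.set("abc", '{"response": "Hello!"}')
print(cache.get("abc"))  # cached value comes back until the TTL elapses
```

Checking expiry lazily on read, as above, keeps writes cheap; a real store would also want a periodic sweep so expired rows don't accumulate.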
Supported Providers
- OpenAI (ChatGPT, GPT-4, etc.)
- Anthropic (Claude)
- Google Gemini
How It Works
- Call ai_cache.enable() to activate
- Library intercepts LLM API calls
- Requests are fingerprinted (SHA256 of model + prompt + params)
- Cached responses returned instantly, new requests cached automatically
- All data stored locally in SQLite
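The fingerprinting step above can be sketched as follows. The helper name and the exact serialization are assumptions; the idea is a SHA256 over a canonical encoding of model + prompt + params:

```python
import hashlib
import json

def make_fingerprint(model, messages, **params):
    """Hypothetical sketch: derive a stable cache key from a request.

    Serializes the request deterministically (sorted keys, fixed
    separators) and hashes it, mirroring the "SHA256 of model +
    prompt + params" step described above.
    """
    payload = json.dumps(
        {"model": model, "messages": messages, "params": params},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Identical requests produce identical keys...
key1 = make_fingerprint("gpt-4", [{"role": "user", "content": "Hello!"}])
key2 = make_fingerprint("gpt-4", [{"role": "user", "content": "Hello!"}])
assert key1 == key2

# ...while any change to the parameters produces a different key.
key3 = make_fingerprint(
    "gpt-4", [{"role": "user", "content": "Hello!"}], temperature=0.7
)
assert key1 != key3
```

Deterministic serialization matters here: without sorted keys, two logically identical requests could hash differently and needlessly miss the cache.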
License
MIT
Download files
Source Distribution
ai_cache-0.1.0.tar.gz (9.5 kB)
Built Distribution
ai_cache-0.1.0-py3-none-any.whl (10.9 kB)
File details
Details for the file ai_cache-0.1.0.tar.gz.
File metadata
- Download URL: ai_cache-0.1.0.tar.gz
- Upload date:
- Size: 9.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6382855a3a3dbfa2637af493877c444fcb826b1b4e0dc9e6c977f0c1fb67c4e4 |
| MD5 | f923eedc8dc4d5e1e4bf79f16620fcca |
| BLAKE2b-256 | 1cca115c8cc4a45b9820e582ef5c23be3ca0aba56924bbdca7d3863ad69a843d |
File details
Details for the file ai_cache-0.1.0-py3-none-any.whl.
File metadata
- Download URL: ai_cache-0.1.0-py3-none-any.whl
- Upload date:
- Size: 10.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e55a26b0412c4986a62ee29d4c9ee25109e48bffe580c3433f48bb4d99f761de |
| MD5 | b73a5065654c66ffc0396524a0d15736 |
| BLAKE2b-256 | f5f16b73f34c0ef3f5b970ce5b25f4a55ea358e467f7454c013e182ce6ff77c1 |