# Semantic Cache MCP
Reduce Claude Code token usage by 80%+ with intelligent file caching.
Semantic Cache MCP is a Model Context Protocol server that eliminates redundant token consumption when Claude reads files. Instead of sending full file contents on every request, it returns diffs for changed files, suppresses unchanged files entirely, and intelligently summarizes large files — all transparently through 12 purpose-built MCP tools.
## Features
- 80%+ Token Reduction — Unchanged files cost ~0 tokens; changed files return diffs only
- Three-State Read Model — First read (full + cache), unchanged (message only, 99% savings), modified (diff, 80–95% savings)
- Semantic Search — Hybrid BM25 + HNSW vector search via local ONNX embeddings (configurable model, default BAAI/bge-small-en-v1.5), no API keys, works offline
- Batch Embedding — `batch_read` pre-scans all new/changed files and embeds them in a single model call (N calls → 1)
- Content Hash Freshness — BLAKE3 hash detects when mtime changes but content is identical (`touch`, `git checkout`) — returns cached content instead of re-reading (see the sketch after this list)
- Grep — Regex/literal pattern search across cached files with line numbers and context
- Semantic Summarization — 50–80% token savings on large files, structure preserved
- DoS Protection — Write size, edit size, and match count limits enforced at every boundary
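
As an illustration of the content-hash freshness check, here is a minimal sketch using the `blake3` package (the `CacheEntry` and `freshness_check` names are hypothetical, not the project's API):

```python
from dataclasses import dataclass
from pathlib import Path

from blake3 import blake3

@dataclass
class CacheEntry:
    mtime: float
    content_hash: str

def freshness_check(path: Path, entry: CacheEntry) -> str:
    stat = path.stat()
    if stat.st_mtime == entry.mtime:
        return "unchanged"   # fast path: mtime matches, skip hashing entirely
    digest = blake3(path.read_bytes()).hexdigest()
    if digest == entry.content_hash:
        return "unchanged"   # mtime bumped (touch, git checkout) but bytes identical
    return "modified"        # content really changed -> compute a diff
```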
## Installation
Add to Claude Code settings (`~/.claude/settings.json`):

**Option 1 — uvx** (always runs the latest version):

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"]
    }
  }
}
```
**Option 2 — uv tool install** (recommended for multiple clients):

```bash
uv tool install semantic-cache-mcp
```

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp"
    }
  }
}
```
Restart Claude Code. Done.
**Why Option 2?** — `uvx` spawns an isolated process per invocation, each loading its own embedding model (~200 MB). If you run multiple Claude Code instances concurrently (e.g. across different projects), each one loads a separate copy, multiplying RAM usage. `uv tool install` puts the binary on your `PATH`, so all projects share one installed copy and the model is loaded once per process.
### GPU Acceleration (Optional)
For NVIDIA GPU acceleration, install with the `gpu` extra:

```bash
uv tool install "semantic-cache-mcp[gpu]"
# or with uvx: uvx "semantic-cache-mcp[gpu]"
```
Then set `EMBEDDING_DEVICE=cuda` in your MCP config `env` block (see the Environment Variables table below for accepted values). The server falls back to CPU automatically if CUDA is unavailable.
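For example, a minimal `env` block (mirroring the MCP Server Config example further down):

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp",
      "env": {
        "EMBEDDING_DEVICE": "cuda"
      }
    }
  }
}
```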
## Block Native File Tools (Recommended)
Disable the client's built-in file tools so all file I/O routes through semantic-cache.
**Claude Code** — add to `~/.claude/settings.json`:

```json
{
  "permissions": {
    "deny": ["Read", "Edit", "Write"]
  }
}
```
**OpenCode** — add to `~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "deny",
    "edit": "deny",
    "write": "deny"
  }
}
```
## CLAUDE.md Configuration
Add to `~/.claude/CLAUDE.md` to enforce semantic-cache globally:

```markdown
## Tools

- MUST use `semantic-cache` instead of native Read/Write/Edit (80%+ token savings)
- `read` / `batch_read` → file reading with diff-mode (set diff_mode=false after context compression)
- `write` → new files or full rewrites; `append=true` for large files
- `edit` / `batch_edit` → find/replace (full-file / scoped / line-replace)
- `search` / `similar` → semantic search (seed cache first with read/batch_read)
- `grep` → regex/literal pattern search across cached files
- `glob` → find files by pattern; `cached_only=true` to filter to cached files
- `diff` → compare two files with semantic similarity score
- `stats` / `clear` → cache metrics and reset
```
## Tools

### Core

| Tool | Description |
|---|---|
| `read` | Smart file reading with diff-mode. Three states: first read (full + cache), unchanged (99% savings), modified (diff, 80–95% savings). Use `offset`/`limit` for line ranges. |
| `write` | Write files with cache integration. `auto_format=true` runs a formatter. `append=true` enables chunked writes for large files. Returns a diff on overwrite. |
| `edit` | Find/replace using cached reads — three modes: full-file, scoped to a line range, or direct line replacement. `dry_run=true` previews. `replace_all=true` handles multiple matches. Returns a unified diff. |
| `batch_edit` | Up to 50 edits per call with partial success. Each entry can be find/replace, scoped, or line-range replacement. `auto_format=true` and `dry_run=true` supported. |
### Discovery

| Tool | Description |
|---|---|
| `search` | Semantic/embedding search across cached files by meaning — not keywords. Seed the cache first with `read` or `batch_read`. |
| `similar` | Finds cached files semantically similar to a given path. Start with `k=3–5`. Only searches cached files. |
| `glob` | Pattern matching with cache status per file. `cached_only=true` filters to already-cached files. Max 1000 matches, 5s timeout. |
| `batch_read` | Read 2+ files in one call. Supports glob expansion in paths, priority ordering, a token budget, and per-file diff suppression for unchanged files. Pre-scans and batch-embeds all new/changed files in a single model call. Set `diff_mode=false` after context compression. |
| `grep` | Regex or literal pattern search across cached files with line numbers and optional context lines. Like ripgrep for the cache. |
| `diff` | Compare two files. Returns a unified diff plus a semantic similarity score. Large diffs are auto-summarized to stay within the token budget. |
### Management

| Tool | Description |
|---|---|
| `stats` | Cache metrics, session usage (tokens saved, tool calls), and lifetime aggregates. |
| `clear` | Reset all cache entries. |
## Tool Reference
### read — Single file with diff-mode

```
read path="/src/app.py"
read path="/src/app.py" diff_mode=true     # default
read path="/src/app.py" diff_mode=false    # full content (use after context compression)
read path="/src/app.py" offset=120 limit=80   # lines 120–199 only
```
Three states:

| State | Response | Token cost |
|---|---|---|
| First read | Full content + cached | Normal |
| Unchanged | "File unchanged (1,234 tokens cached)" | ~5 tokens |
| Modified | Unified diff only | 5–20% of original |
Set `diff_mode=false` after context compression — Claude has lost its cached copy and needs full content.
### write — Create or overwrite files

```
write path="/src/new.py" content="..."
write path="/src/new.py" content="..." auto_format=true
write path="/src/large.py" content="...chunk1..." append=false   # first chunk
write path="/src/large.py" content="...chunk2..." append=true    # subsequent chunks
```
- Returns a diff on overwrite, confirms creation for new files
- `append=true` appends content rather than replacing — use for writing large files in chunks
- Cache is updated immediately after the write
### edit — Find/replace with three modes

```
# Mode A — find/replace: searches entire file
edit path="/src/app.py" old_string="def foo():" new_string="def foo(x: int):"
edit path="/src/app.py" old_string="..." new_string="..." replace_all=true auto_format=true

# Mode B — scoped find/replace: search only within line range (shorter old_string suffices)
edit path="/src/app.py" old_string="pass" new_string="return x" start_line=42 end_line=42

# Mode C — line replace: replace entire range, no old_string needed (maximum token savings)
edit path="/src/app.py" new_string="    return result\n" start_line=80 end_line=83
```
Mode selection:

| Mode | Parameters | Best for |
|---|---|---|
| Find/replace | `old_string` + `new_string` | Unique strings, no line numbers known |
| Scoped | `old_string` + `new_string` + `start_line`/`end_line` | Shorter context when `read` gave you line numbers |
| Line replace | `new_string` + `start_line`/`end_line` (no `old_string`) | Maximum token savings when line numbers are known |
- Uses cached content — no token cost for the read
- Returns a unified diff of the change
- Multiple matches in scope: fails with a hint to add context or use `replace_all=true` (the scoped mode is sketched after this list)
- Use `batch_edit` when applying 2+ independent changes to the same file
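
A minimal sketch of how a scoped edit (Mode B) can work, assuming the exactly-one-match rule above (`scoped_replace` is a hypothetical helper, not the project's API):

```python
def scoped_replace(text: str, old: str, new: str, start: int, end: int) -> str:
    """Replace `old` with `new`, but only within lines start..end (1-indexed, inclusive)."""
    lines = text.splitlines(keepends=True)
    scope = "".join(lines[start - 1 : end])
    if scope.count(old) != 1:
        # Mirrors the documented failure mode: ambiguous matches need more
        # context or replace_all=true.
        raise ValueError("expected exactly one match in scope")
    patched = scope.replace(old, new, 1)
    return "".join(lines[: start - 1]) + patched + "".join(lines[end:])
```

Because the search is confined to a small line range, a short `old_string` like `pass` is unambiguous even if it appears dozens of times elsewhere in the file — that is where the token savings come from.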
### batch_edit — Multiple edits in one call

```
# Mode A — find/replace: [old, new]
batch_edit path="/src/app.py" edits='[["old1","new1"],["old2","new2"]]'

# Mode B — scoped: [old, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[["pass","return x",42,42]]'

# Mode C — line replace: [null, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[[null,"    return result\n",80,83]]'

# Mixed modes in one call (object syntax also supported)
batch_edit path="/src/app.py" edits='[
  ["old1", "new1"],
  {"old": "pass", "new": "return x", "start_line": 42, "end_line": 42},
  {"old": null, "new": "    return result\n", "start_line": 80, "end_line": 83}
]' auto_format=true
```
- Up to 50 edits per call — each entry can use any mode independently
- Partial success: individual edit failures don't block others
- Single round-trip, single cache update
- Failures reported per-entry so you can retry only what failed
### search — Semantic search across cached files

```
search query="authentication middleware logic" k=5
search query="database connection pooling" k=3
```
- Embedding-based semantic search — finds meaning, not keywords
- Only searches files that have been previously cached via `read` or `batch_read`
- Seed the cache first, then search
### similar — Find semantically related files

```
similar path="/src/auth.py" k=3
similar path="/tests/test_auth.py" k=5
```
- Finds cached files most similar to the given file
- Useful for discovering related tests, implementations, or documentation
- Only considers cached files; start with `k=3–5`
### glob — Pattern matching with cache awareness

```
glob pattern="**/*.py" directory="./src"
glob pattern="**/*.py" directory="./src" cached_only=true
```
- Shows cache status (cached/uncached) for each matched file
- `cached_only=true` returns only files already in the cache — useful for scoping searches
- Max 1000 matches, 5-second timeout
### batch_read — Multiple files with token budget

```
batch_read paths="/src/a.py,/src/b.py" max_total_tokens=50000
batch_read paths='["/src/a.py","/src/b.py"]' diff_mode=true priority="/src/main.py"
batch_read paths="/src/*.py" max_total_tokens=30000 diff_mode=false
```
- Glob expansion: `src/*.py` is expanded inline (max 50 files per glob)
- Priority ordering: `priority` paths are read first, the remainder sorted smallest-first
- Token budget: stops reading new files once `max_total_tokens` is reached; skipped files include an `est_tokens` hint (see the sketch after this list)
- Unchanged suppression: unchanged files appear in `summary.unchanged` with no content (zero tokens)
- Batch embedding: pre-scans all new/changed files and embeds them in a single model call before reading — N model calls reduced to 1
- Context compression recovery: set `diff_mode=false` when Claude needs full content after losing context
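
A minimal sketch of the priority-plus-budget selection described above (hypothetical helper names; `est_tokens` here uses a rough bytes-per-token heuristic, which may differ from the server's estimator):

```python
from pathlib import Path

def est_tokens(path: Path) -> int:
    # Rough heuristic: ~4 bytes per token for source text.
    return max(1, path.stat().st_size // 4)

def select_files(
    paths: list[Path], priority: list[Path], budget: int
) -> tuple[list[Path], list[Path]]:
    """Priority paths first, remainder smallest-first, stop at the token budget."""
    rest = sorted((p for p in paths if p not in priority), key=est_tokens)
    ordered = [p for p in priority if p in paths] + rest
    chosen, skipped, spent = [], [], 0
    for p in ordered:
        cost = est_tokens(p)
        if spent + cost <= budget:
            chosen.append(p)
            spent += cost
        else:
            skipped.append(p)   # reported back with an est_tokens hint
    return chosen, skipped
```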
### diff — Compare two files

```
diff path1="/src/v1.py" path2="/src/v2.py"
```
- Returns a unified diff between the two files
- Includes a semantic similarity score (cosine distance of embeddings; see the sketch below)
- Large diffs auto-summarized to stay within token budget
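
The score can be thought of as plain cosine similarity over the two files' embedding vectors; a minimal sketch:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```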
## Configuration

### Environment Variables
| Variable | Default | Description |
|---|---|---|
| `LOG_LEVEL` | `INFO` | Logging verbosity (`DEBUG`, `INFO`, `WARNING`, `ERROR`) |
| `TOOL_OUTPUT_MODE` | `compact` | Response detail (`compact`, `normal`, `debug`) |
| `TOOL_MAX_RESPONSE_TOKENS` | `0` | Global response token cap (`0` = disabled) |
| `MAX_CONTENT_SIZE` | `100000` | Max bytes returned by read operations |
| `MAX_CACHE_ENTRIES` | `10000` | Max cache entries before LRU-K eviction |
| `EMBEDDING_DEVICE` | `cpu` | Embedding hardware: `cpu`, `cuda` (GPU), `auto` (detect) |
| `EMBEDDING_MODEL` | `BAAI/bge-small-en-v1.5` | FastEmbed model for search/similarity |
| `SEMANTIC_CACHE_DIR` | (platform-specific) | Override cache/database directory path |
See docs/env_variables.md for detailed descriptions, model selection guidance, and examples.
### Safety Limits
| Limit | Value | Protects Against |
|---|---|---|
| `MAX_WRITE_SIZE` | 10 MB | Memory exhaustion via large writes |
| `MAX_EDIT_SIZE` | 10 MB | Memory exhaustion via large file edits |
| `MAX_MATCHES` | 10,000 | CPU exhaustion via unbounded `replace_all` |
### MCP Server Config

```json
{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"],
      "env": {
        "LOG_LEVEL": "INFO",
        "TOOL_OUTPUT_MODE": "compact",
        "MAX_CONTENT_SIZE": "100000",
        "EMBEDDING_DEVICE": "cpu",
        "EMBEDDING_MODEL": "BAAI/bge-small-en-v1.5"
      }
    }
  }
}
```
**Embeddings:** Uses FastEmbed with `BAAI/bge-small-en-v1.5` by default (33M params, 384-dimensional, 512-token context). Runs entirely locally via ONNX Runtime — no API keys, no network calls during search. Set `EMBEDDING_MODEL` to use a different model, and `EMBEDDING_DEVICE` to control hardware: `cpu` (default), `cuda` (GPU), or `auto` (detect available).
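
For a feel of what runs at index time, here is a minimal FastEmbed usage sketch (the API shown is FastEmbed's own; the server's exact wiring is an assumption):

```python
from fastembed import TextEmbedding

# Downloads the ONNX model on first use, then runs fully offline.
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

docs = ["def authenticate(user): ...", "connection pool setup"]
vectors = list(model.embed(docs))     # one batched call -> 384-dim numpy arrays
print(len(vectors), vectors[0].shape)  # 2 (384,)
```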
**Cache location:** Platform-specific (`~/.cache/semantic-cache-mcp/` on Linux, `~/Library/Caches/semantic-cache-mcp/` on macOS, `%LOCALAPPDATA%\semantic-cache-mcp\` on Windows). Override with `SEMANTIC_CACHE_DIR`.
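
A sketch of that resolution order (illustrative only; `default_cache_dir` is a hypothetical helper, not the project's API):

```python
import os
import sys
from pathlib import Path

def default_cache_dir(app: str = "semantic-cache-mcp") -> Path:
    # Mirrors the platform defaults listed above; the real resolution may differ.
    if override := os.environ.get("SEMANTIC_CACHE_DIR"):
        return Path(override)
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Caches" / app
    if sys.platform == "win32":
        return Path(os.environ["LOCALAPPDATA"]) / app
    return Path(os.environ.get("XDG_CACHE_HOME", Path.home() / ".cache")) / app
```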
## How It Works

```
┌─────────────┐     ┌──────────────┐     ┌──────────────────┐
│   Claude    │────▶│  smart_read  │────▶│   Cache Lookup   │
│    Code     │     │              │     │ (VectorStorage)  │
└─────────────┘     └──────────────┘     └──────────────────┘
                                                  │
                                ┌─────────────────┼─────────────────┐
                                ▼                 ▼                 ▼
                          ┌──────────┐      ┌──────────┐    ┌──────────────┐
                          │Unchanged │      │ Changed  │    │ New / Large  │
                          │  ~0 tok  │      │   diff   │    │ summarize or │
                          │  (99%)   │      │ (80-95%) │    │ full content │
                          └──────────┘      └──────────┘    └──────────────┘
```
Read pipeline (in priority order; sketched in code below):

1. File unchanged — mtime matches cache entry → return "no changes" message (~5 tokens)
2. File changed — compute unified diff → return diff only (80–95% savings)
3. Semantically similar cached file — return diff from nearest neighbor (HNSW vector search)
4. Large file — semantic summarization preserving docstrings and key function signatures
5. New file — full content returned and embedded; `batch_read` pre-scans and embeds all new files in a single model call
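
A condensed, self-contained sketch of steps 1, 2, and 5 of that pipeline (the `Entry` and `smart_read` names are hypothetical; the nearest-neighbor and summarization branches are omitted for brevity):

```python
import difflib
from dataclasses import dataclass

@dataclass
class Entry:
    content: str
    tokens: int

def unified_diff(old: str, new: str) -> str:
    return "".join(difflib.unified_diff(
        old.splitlines(keepends=True), new.splitlines(keepends=True)))

def smart_read(path: str, cache: dict[str, Entry]) -> str:
    with open(path, encoding="utf-8") as f:
        content = f.read()
    entry = cache.get(path)
    if entry is not None:
        if entry.content == content:                 # step 1: unchanged
            return f"File unchanged ({entry.tokens:,} tokens cached)"
        diff = unified_diff(entry.content, content)  # step 2: changed -> diff only
        cache[path] = Entry(content, len(content) // 4)
        return diff
    # Steps 3-4 (nearest-neighbor diff, semantic summarization) would go here.
    cache[path] = Entry(content, len(content) // 4)  # step 5: new file -> full content
    return content
```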
## Performance
Measured on this project's 30 source files (~136K tokens). Benchmarks run on a standard dev machine (CPU embeddings).
### Token Savings
| Phase | Scenario | Savings |
|---|---|---|
| Cold read | First read, no cache | 0% (baseline) |
| Unchanged re-read | Same files, no modifications | 99.1% |
| Content hash | Touched files (mtime changed, content identical) | 99.1% |
| Small edits | ~5% of lines changed in 30% of files | 98.1% |
| Batch read | All files via `batch_read` | 99.1% |
| Search | 5 queries × k=5, previews vs full reads | 98.4% |
| Overall (cached) | Phases 2–6 combined | 98.8% |
### Operation Latency
| Operation | Time |
|---|---|
| Unchanged read (single file) | 2 ms |
| Unchanged re-read (29 files) | 25 ms |
| Batch read (29 files, diff mode) | 35 ms |
| Cold read (29 files, incl. embed) | 2,554 ms |
| Write (200-line file) | 47 ms |
| Edit (scoped find/replace) | 48 ms |
| Semantic search (k=5) | 4 ms |
| Semantic search (k=10) | 5 ms |
| Find similar (k=3) | 49 ms |
| Grep (literal) | 1 ms |
| Grep (regex) | 2 ms |
| Embedding model warmup | 206 ms |
| Single embedding (largest file) | 47 ms |
| Batch embedding (10 files) | 469 ms |
Run the benchmarks yourself:

```bash
uv run python benchmarks/benchmark_token_savings.py   # token savings
uv run python benchmarks/benchmark_performance.py     # operation latency
```
See docs/performance.md for full benchmarks and methodology.
## Documentation
| Guide | Description |
|---|---|
| Architecture | Component design, algorithms, data flow |
| Performance | Optimization techniques, benchmarks |
| Security | Threat model, input validation, size limits |
| Advanced Usage | Programmatic API, custom storage backends |
| Troubleshooting | Common issues, debug logging |
| Environment Variables | All configurable env vars with defaults and examples |
## Contributing

```bash
git clone https://github.com/CoderDayton/semantic-cache-mcp.git
cd semantic-cache-mcp
uv sync
uv run pytest
```
This project uses Python 3.12+, strict type hints throughout, Ruff for formatting and linting, and pytest for testing. See CONTRIBUTING.md for commit conventions, pre-commit hooks, and code standards.
## License
MIT License — use freely in personal and commercial projects.
## Credits
Built with FastMCP 3.0 and:
- FastEmbed — local ONNX embeddings (configurable, default BAAI/bge-small-en-v1.5)
- SimpleVecDB — HNSW vector storage with FTS5 keyword search
- Semantic summarization based on TCRA-LLM (arXiv:2310.15556)
- BLAKE3 cryptographic hashing for content freshness
- LRU-K frequency-aware cache eviction (sketched below)
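
For the curious, a textbook sketch of LRU-K eviction with K=2 (an illustration of the technique, not the project's implementation):

```python
import itertools
import math

class LRUK:
    """Evict the key whose K-th most recent access is oldest (classic LRU-K)."""

    def __init__(self, capacity: int, k: int = 2):
        self.capacity, self.k = capacity, k
        self.history: dict[str, list[int]] = {}   # key -> recent access ticks
        self.clock = itertools.count()

    def access(self, key: str) -> None:
        ticks = self.history.setdefault(key, [])
        ticks.append(next(self.clock))
        del ticks[:-self.k]                       # keep only the last K accesses
        if len(self.history) > self.capacity:
            self.evict()

    def evict(self) -> None:
        # Backward K-distance: keys with fewer than K accesses count as infinitely
        # old (evicted first, ties broken by insertion order in this sketch);
        # otherwise compare the K-th most recent access time.
        def kth(key: str) -> float:
            t = self.history[key]
            return t[-self.k] if len(t) >= self.k else -math.inf

        del self.history[min(self.history, key=kth)]
```

Unlike plain LRU, a file read once in a burst cannot displace files that are read repeatedly, which matches how a session's hot set of source files behaves.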