
Lightweight MCP server for semantic file caching with 80%+ token reduction

Project description


Semantic Cache MCP

Python 3.12+ · FastMCP 3.0 · MIT License


Reduce Claude Code token usage by 80%+ with intelligent file caching.

Semantic Cache MCP is a Model Context Protocol server that eliminates redundant token consumption when Claude reads files. Instead of sending full file contents on every request, it returns diffs for changed files, suppresses unchanged files entirely, and intelligently summarizes large files — all transparently through 12 purpose-built MCP tools.


Features

  • 80%+ Token Reduction — Unchanged files cost ~0 tokens; changed files return diffs only
  • Three-State Read Model — First read (full + cache), unchanged (message only, 99% savings), modified (diff, 80–95% savings)
  • Semantic Search — Hybrid BM25 + HNSW vector search via local ONNX embeddings (configurable model, default BAAI/bge-small-en-v1.5), no API keys, works offline; a rank-fusion sketch follows this list
  • Batch Embedding — batch_smart_read pre-scans all new/changed files and embeds them in a single model call (N calls → 1)
  • Content Hash Freshness — BLAKE3 hash detects when mtime changes but content is identical (touch, git checkout) — returns cached instead of re-reading
  • Grep — Regex/literal pattern search across cached files with line numbers and context
  • Semantic Summarization — 50–80% token savings on large files, structure preserved
  • DoS Protection — Write size, edit size, and match count limits enforced at every boundary
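
How hybrid search can merge the two signals is easiest to see in code. The server's exact fusion isn't documented here; the sketch below uses reciprocal rank fusion (RRF), a common way to combine BM25 and vector rankings, with invented example paths:

def reciprocal_rank_fusion(
    bm25_ranked: list[str],     # paths ordered best-first by BM25 keyword score
    vector_ranked: list[str],   # paths ordered best-first by HNSW vector search
    k: int = 60,                # conventional RRF damping constant
) -> list[str]:
    scores: dict[str, float] = {}
    for ranked in (bm25_ranked, vector_ranked):
        for rank, path in enumerate(ranked):
            scores[path] = scores.get(path, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# A file ranked well by both signals beats one ranked well by only one:
print(reciprocal_rank_fusion(
    ["auth.py", "db.py", "app.py"],          # keyword hits
    ["auth.py", "app.py", "cache.py"],       # semantic hits
))  # ['auth.py', 'app.py', 'db.py', 'cache.py']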

Installation

Add to Claude Code settings (~/.claude/settings.json):

Option 1 — uvx (always runs latest version):

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"]
    }
  }
}

Option 2 — uv tool install (recommended for multiple clients):

uv tool install semantic-cache-mcp
{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp"
    }
  }
}

Restart Claude Code. Done.

Why Option 2? uvx spawns an isolated process per invocation, each loading its own embedding model (~200 MB). If you run multiple Claude Code instances concurrently (e.g. across different projects), each one loads a separate copy, multiplying RAM usage. uv tool install puts the binary on your PATH, so all projects share one installed copy and the model is loaded once per process.

GPU Acceleration (Optional)

For NVIDIA GPU acceleration, install with the gpu extra:

uv tool install "semantic-cache-mcp[gpu]"
# or with uvx: uvx "semantic-cache-mcp[gpu]"

Then set EMBEDDING_DEVICE=gpu in your MCP config env block. Falls back to CPU automatically if CUDA is unavailable.

Custom Embedding Models

Any HuggingFace model with an ONNX export works — set EMBEDDING_MODEL in your env config:

"env": {
  "EMBEDDING_MODEL": "nomic-ai/nomic-embed-text-v1.5"
}

If the model isn't in fastembed's built-in list, it's automatically downloaded and registered from HuggingFace Hub on first startup (ONNX file integrity is verified via SHA256). See env_variables.md for model recommendations.
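
The verification step is conceptually simple; a minimal sketch of such a check (hypothetical helper, not the server's actual code; the expected digest would come from the registered model metadata):

import hashlib
from pathlib import Path

def verify_onnx_sha256(onnx_path: Path, expected_hex: str) -> None:
    """Reject a downloaded model whose SHA256 digest doesn't match the expected value."""
    digest = hashlib.sha256()
    with onnx_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected_hex:
        raise ValueError(f"ONNX checksum mismatch for {onnx_path}")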

Block Native File Tools (Recommended)

Disable the client's built-in file tools so all file I/O routes through semantic-cache.

Claude Code — add to ~/.claude/settings.json:

{
  "permissions": {
    "deny": ["Read", "Edit", "Write"]
  }
}

OpenCode — add to ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "deny",
    "edit": "deny",
    "write": "deny"
  }
}

CLAUDE.md Configuration

Add to ~/.claude/CLAUDE.md to enforce semantic-cache globally:

## Tools

- MUST use `semantic-cache` instead of native Read/Write/Edit (80%+ token savings)
  - `read` / `batch_read` → file reading with diff-mode (set diff_mode=false after context compression)
  - `write` → new files or full rewrites; `append=true` for large files
  - `edit` / `batch_edit` → find/replace (full-file / scoped / line-replace)
  - `search` / `similar` → semantic search (seed cache first with read/batch_read)
  - `grep` → regex/literal pattern search across cached files
  - `glob` → find files by pattern; `cached_only=true` to filter to cached files
  - `diff` → compare two files with semantic similarity score
  - `stats` / `clear` → cache metrics and reset

Tools

Core

Tool | Description
read | Smart file reading with diff-mode. Three states: first read (full + cache), unchanged (99% savings), modified (diff, 80–95% savings). Use offset/limit for line ranges.
write | Write files with cache integration. auto_format=true runs formatter. append=true enables chunked writes for large files. Returns diff on overwrite.
edit | Find/replace using cached reads — three modes: full-file, scoped to a line range, or direct line replacement. dry_run=true previews. replace_all=true handles multiple matches. Returns unified diff.
batch_edit | Up to 50 edits per call with partial success. Each entry can be find/replace, scoped, or line-range replacement. auto_format=true and dry_run=true supported.

Discovery

Tool | Description
search | Semantic/embedding search across cached files by meaning — not keywords. Seed cache first with read or batch_read.
similar | Finds semantically similar cached files to a given path. Start with k=3–5. Only searches cached files.
glob | Pattern matching with cache status per file. cached_only=true filters to already-cached files. Max 1000 matches, 5s timeout.
batch_read | Read 2+ files in one call. Supports glob expansion in paths, priority ordering, token budget, and per-file diff suppression for unchanged files. Pre-scans and batch-embeds all new/changed files in a single model call. Set diff_mode=false after context compression.
grep | Regex or literal pattern search across cached files with line numbers and optional context lines. Like ripgrep for the cache.
diff | Compare two files. Returns unified diff plus semantic similarity score. Large diffs are auto-summarized to stay within token budget.

Management

Tool | Description
stats | Cache metrics, session usage (tokens saved, tool calls), and lifetime aggregates.
clear | Reset all cache entries.

Tool Reference

read — Single file with diff-mode
read path="/src/app.py"
read path="/src/app.py" diff_mode=true         # default
read path="/src/app.py" diff_mode=false        # full content (use after context compression)
read path="/src/app.py" offset=120 limit=80    # lines 120–199 only

Three states:

State | Response | Token cost
First read | Full content + cached | Normal
Unchanged | "File unchanged (1,234 tokens cached)" | ~5 tokens
Modified | Unified diff only | 5–20% of original

Set diff_mode=false after context compression — Claude has lost its cached copy and needs full content.

write — Create or overwrite files
write path="/src/new.py" content="..."
write path="/src/new.py" content="..." auto_format=true
write path="/src/large.py" content="...chunk1..." append=false   # first chunk
write path="/src/large.py" content="...chunk2..." append=true    # subsequent chunks
  • Returns diff on overwrite, confirms creation on new files
  • append=true appends content rather than replacing — use for writing large files in chunks
  • Cache is updated immediately after write
edit — Find/replace with three modes
# Mode A — find/replace: searches entire file
edit path="/src/app.py" old_string="def foo():" new_string="def foo(x: int):"
edit path="/src/app.py" old_string="..." new_string="..." replace_all=true auto_format=true

# Mode B — scoped find/replace: search only within line range (shorter old_string suffices)
edit path="/src/app.py" old_string="pass" new_string="return x" start_line=42 end_line=42

# Mode C — line replace: replace entire range, no old_string needed (maximum token savings)
edit path="/src/app.py" new_string="    return result\n" start_line=80 end_line=83

Mode selection (a dispatch sketch follows the bullet list below):

Mode | Parameters | Best for
Find/replace | old_string + new_string | Unique strings, no line numbers known
Scoped | old_string + new_string + start_line/end_line | Shorter context when read gave you line numbers
Line replace | new_string + start_line/end_line (no old_string) | Maximum token savings when line numbers are known
  • Uses cached content — no token cost for the read
  • Returns unified diff of the change
  • Multiple matches in scope: fails with hint to add context or use replace_all=true
  • Use batch_edit when applying 2+ independent changes to the same file
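
A hypothetical dispatch sketch for the three modes (1-indexed, inclusive line ranges as in the examples above; names invented, not the server's code):

def apply_edit(
    lines: list[str],                 # file content as lines, newlines kept
    new_string: str,
    old_string: str | None = None,
    start_line: int | None = None,
    end_line: int | None = None,
) -> list[str]:
    new_lines = new_string.splitlines(keepends=True)
    if old_string is None and start_line is not None:
        # Mode C: line replace — the whole range is overwritten, no old_string
        return lines[:start_line - 1] + new_lines + lines[end_line:]
    if start_line is not None:
        # Mode B: find/replace scoped to the line range
        lo, hi = start_line - 1, end_line
        scoped = "".join(lines[lo:hi]).replace(old_string, new_string, 1)
        return lines[:lo] + scoped.splitlines(keepends=True) + lines[hi:]
    # Mode A: find/replace across the whole file
    return "".join(lines).replace(old_string, new_string, 1).splitlines(keepends=True)
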
batch_edit — Multiple edits in one call
# Mode A — find/replace: [old, new]
batch_edit path="/src/app.py" edits='[["old1","new1"],["old2","new2"]]'

# Mode B — scoped: [old, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[["pass","return x",42,42]]'

# Mode C — line replace: [null, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[[null,"    return result\n",80,83]]'

# Mixed modes in one call (object syntax also supported)
batch_edit path="/src/app.py" edits='[
  ["old1", "new1"],
  {"old": "pass", "new": "return x", "start_line": 42, "end_line": 42},
  {"old": null, "new": "    return result\n", "start_line": 80, "end_line": 83}
]' auto_format=true
  • Up to 50 edits per call — each entry can use any mode independently
  • Partial success: individual edit failures don't block others (see the sketch after this list)
  • Single round-trip, single cache update
  • Failures reported per-entry so you can retry only what failed
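
Partial success means each entry is attempted independently and failures are recorded rather than raised. A minimal sketch of the pattern (find/replace entries only; result shape invented):

def batch_edit(text: str, edits: list[tuple[str, str]]) -> tuple[str, list[dict]]:
    """Apply each (old, new) entry independently; report per-entry outcomes."""
    results: list[dict] = []
    for i, (old, new) in enumerate(edits):
        if old not in text:
            # One bad entry must not block the rest; record it and move on.
            results.append({"index": i, "ok": False, "error": "old string not found"})
            continue
        text = text.replace(old, new, 1)
        results.append({"index": i, "ok": True})
    return text, results
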
search — Semantic search across cached files
search query="authentication middleware logic" k=5
search query="database connection pooling" k=3
  • Embedding-based semantic search — finds meaning, not keywords
  • Only searches files that have been previously cached via read or batch_read
  • Seed the cache first, then search
similar — Find semantically related files
similar path="/src/auth.py" k=3
similar path="/tests/test_auth.py" k=5
  • Finds cached files most similar to the given file
  • Useful for discovering related tests, implementations, or documentation
  • Only considers cached files; start with k=3–5
glob — Pattern matching with cache awareness
glob pattern="**/*.py" directory="./src"
glob pattern="**/*.py" directory="./src" cached_only=true
  • Shows cache status (cached/uncached) for each matched file
  • cached_only=true returns only files already in cache — useful for scoping searches
  • Max 1000 matches, 5-second timeout
batch_read — Multiple files with token budget
batch_read paths="/src/a.py,/src/b.py" max_total_tokens=50000
batch_read paths='["/src/a.py","/src/b.py"]' diff_mode=true priority="/src/main.py"
batch_read paths="/src/*.py" max_total_tokens=30000 diff_mode=false
  • Glob expansion: src/*.py expanded inline (max 50 files per glob)
  • Priority ordering: priority paths read first, remainder sorted smallest-first
  • Token budget: stops reading new files once max_total_tokens is reached; skipped files include an est_tokens hint (see the budget sketch after this list)
  • Unchanged suppression: unchanged files appear in summary.unchanged with no content (zero tokens)
  • Batch embedding: pre-scans all new/changed files and embeds them in a single model call before reading — N model calls reduced to 1
  • Context compression recovery: set diff_mode=false when Claude needs full content after losing context
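
A minimal sketch of the budget behaviour (invented helper; assumes paths arrive already priority-then-size ordered, and estimates roughly 4 bytes per token):

from pathlib import Path

def plan_batch(paths: list[str], max_total_tokens: int) -> tuple[list[str], list[dict]]:
    """Greedily select files to read until the token budget would be exceeded."""
    selected: list[str] = []
    skipped: list[dict] = []
    used = 0
    for p in paths:
        est = Path(p).stat().st_size // 4                   # crude token estimate
        if used + est > max_total_tokens:
            skipped.append({"path": p, "est_tokens": est})  # hint for a follow-up call
            continue
        selected.append(p)
        used += est
    return selected, skipped
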
diff — Compare two files
diff path1="/src/v1.py" path2="/src/v2.py"
  • Returns unified diff between two files
  • Includes semantic similarity score (cosine distance of embeddings; see the sketch below)
  • Large diffs auto-summarized to stay within token budget
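
The score is derived from the two files' embedding vectors; a minimal sketch of a cosine score (the server may scale or invert it differently):

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """~1.0 for near-identical files, lower as meanings diverge."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))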

Configuration

Environment Variables

Variable | Default | Description
LOG_LEVEL | INFO | Logging verbosity (DEBUG, INFO, WARNING, ERROR)
TOOL_OUTPUT_MODE | compact | Response detail (compact, normal, debug)
TOOL_MAX_RESPONSE_TOKENS | 0 | Global response token cap (0 = disabled)
MAX_CONTENT_SIZE | 100000 | Max bytes returned by read operations
MAX_CACHE_ENTRIES | 10000 | Max cache entries before LRU-K eviction
EMBEDDING_DEVICE | cpu | Embedding hardware: cpu, cuda (GPU), auto (detect)
EMBEDDING_MODEL | BAAI/bge-small-en-v1.5 | FastEmbed model for search/similarity (options)
SEMANTIC_CACHE_DIR | (platform) | Override cache/database directory path

See docs/env_variables.md for detailed descriptions, model selection guidance, and examples.
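
On MAX_CACHE_ENTRIES, LRU-K evicts the entry whose K-th most recent access is oldest, so frequently re-read files survive one-shot reads. A hedged sketch of the policy with K=2 (hypothetical structures, not the server's implementation):

import time
from collections import defaultdict, deque

K = 2
access_history: dict[str, deque] = defaultdict(lambda: deque(maxlen=K))

def touch(path: str) -> None:
    access_history[path].append(time.monotonic())  # remember the last K access times

def pick_victim() -> str:
    """Entry whose K-th most recent access is oldest. Entries with fewer than
    K accesses sort first, so one-shot reads are evicted before hot files."""
    def kth_recent(path: str) -> float:
        h = access_history[path]
        return h[0] if len(h) == K else float("-inf")
    return min(access_history, key=kth_recent)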

Safety Limits

Limit | Value | Protects Against
MAX_WRITE_SIZE | 10 MB | Memory exhaustion via large writes
MAX_EDIT_SIZE | 10 MB | Memory exhaustion via large file edits
MAX_MATCHES | 10,000 | CPU exhaustion via unbounded replace_all
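
Conceptually these are hard gates checked before any work happens; a trivial sketch with the documented values (not the server's actual code):

MAX_WRITE_SIZE = 10 * 1024 * 1024   # 10 MB

def check_write(content: str) -> None:
    """Reject oversized writes before buffering or caching anything."""
    if len(content.encode("utf-8")) > MAX_WRITE_SIZE:
        raise ValueError("write exceeds MAX_WRITE_SIZE (10 MB)")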

MCP Server Config

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"],
      "env": {
        "LOG_LEVEL": "INFO",
        "TOOL_OUTPUT_MODE": "compact",
        "MAX_CONTENT_SIZE": "100000",
        "EMBEDDING_DEVICE": "cpu",
        "EMBEDDING_MODEL": "BAAI/bge-small-en-v1.5"
      }
    }
  }
}

Embeddings: Uses FastEmbed with BAAI/bge-small-en-v1.5 by default (33M params, 384-dimensional, 512 token context). Runs entirely locally via ONNX Runtime — no API keys, no network calls during search. Set EMBEDDING_MODEL to use a different model, and EMBEDDING_DEVICE to control hardware: cpu (default), cuda (GPU), or auto (detect available).
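
For orientation, standalone FastEmbed usage looks roughly like this (library-level sketch, not the server's internal code):

from fastembed import TextEmbedding

model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")  # fetches the ONNX model on first use
vectors = list(model.embed(["def authenticate(user): ...", "database connection pooling"]))
print(len(vectors), vectors[0].shape)  # 2 (384,): two 384-dimensional embeddings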

Cache location: Platform-specific (~/.cache/semantic-cache-mcp/ on Linux, ~/Library/Caches/semantic-cache-mcp/ on macOS, %LOCALAPPDATA%\semantic-cache-mcp\ on Windows). Override with SEMANTIC_CACHE_DIR.


How It Works

┌─────────────┐     ┌──────────────┐     ┌──────────────────┐
│  Claude     │────▶│  smart_read  │────▶│  Cache Lookup    │
│  Code       │     │              │     │  (VectorStorage) │
└─────────────┘     └──────────────┘     └──────────────────┘
                           │
         ┌─────────────────┼─────────────────┐
         ▼                 ▼                 ▼
   ┌──────────┐     ┌──────────┐     ┌──────────────┐
   │Unchanged │     │ Changed  │     │  New / Large │
   │  ~0 tok  │     │  diff    │     │ summarize or │
   │  (99%)   │     │ (80-95%) │     │ full content │
   └──────────┘     └──────────┘     └──────────────┘

Read pipeline (in priority order; a sketch of steps 1–2 follows the list):

  1. File unchanged — mtime matches cache entry → return "no changes" message (~5 tokens)
  2. File changed — compute unified diff → return diff only (80–95% savings)
  3. Semantically similar cached file — return diff from nearest neighbor (HNSW vector search)
  4. Large file — semantic summarization preserving docstrings and key function signatures
  5. New file — full content returned and embedded; batch_read pre-scans and embeds all new files in a single model call
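
Steps 1 and 2 carry most of the savings; together with the BLAKE3 content check they can be sketched as follows (hypothetical cache shape, assuming the blake3 package and difflib; not the server's actual code):

import difflib
from pathlib import Path
from blake3 import blake3

def smart_read(path: str, cache: dict) -> str:
    """Three-state read: unchanged message, unified diff, or full content."""
    p = Path(path)
    entry = cache.get(path)
    if entry and entry["mtime"] == p.stat().st_mtime:
        return f"File unchanged ({entry['tokens']} tokens cached)"   # ~5 tokens
    data = p.read_bytes()
    if entry and entry["hash"] == blake3(data).hexdigest():
        entry["mtime"] = p.stat().st_mtime      # touch / git checkout: content identical
        return f"File unchanged ({entry['tokens']} tokens cached)"
    text = data.decode("utf-8", errors="replace")
    if entry:                                   # changed: return the diff only
        result = "\n".join(difflib.unified_diff(
            entry["text"].splitlines(), text.splitlines(), lineterm=""))
    else:                                       # new: return full content
        result = text
    cache[path] = {"mtime": p.stat().st_mtime, "hash": blake3(data).hexdigest(),
                   "text": text, "tokens": len(text) // 4}
    return result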

Performance

Measured on this project's 30 source files (~136K tokens). Benchmarks run on a standard dev machine (CPU embeddings).

Token Savings

Phase | Scenario | Savings
Cold read | First read, no cache | 0% (baseline)
Unchanged re-read | Same files, no modifications | 99.1%
Content hash | Touch files (mtime changed, content identical) | 99.1%
Small edits | ~5% of lines changed in 30% of files | 98.1%
Batch read | All files via batch_read | 99.1%
Search | 5 queries × k=5, previews vs full reads | 98.4%
Overall (cached) | Phases 2–6 combined | 98.8%

Operation Latency

Operation | Time
Unchanged read (single file) | 2 ms
Unchanged re-read (29 files) | 25 ms
Batch read (29 files, diff mode) | 35 ms
Cold read (29 files, incl. embed) | 2,554 ms
Write (200-line file) | 47 ms
Edit (scoped find/replace) | 48 ms
Semantic search (k=5) | 4 ms
Semantic search (k=10) | 5 ms
Find similar (k=3) | 49 ms
Grep (literal) | 1 ms
Grep (regex) | 2 ms
Embedding model warmup | 206 ms
Single embedding (largest file) | 47 ms
Batch embedding (10 files) | 469 ms

Run benchmarks yourself:

uv run python benchmarks/benchmark_token_savings.py    # token savings
uv run python benchmarks/benchmark_performance.py      # operation latency

See docs/performance.md for full benchmarks and methodology.


Documentation

Guide | Description
Architecture | Component design, algorithms, data flow
Performance | Optimization techniques, benchmarks
Security | Threat model, input validation, size limits
Advanced Usage | Programmatic API, custom storage backends
Troubleshooting | Common issues, debug logging
Environment Variables | All configurable env vars with defaults and examples

Contributing

git clone https://github.com/CoderDayton/semantic-cache-mcp.git
cd semantic-cache-mcp
uv sync
uv run pytest

This project uses Python 3.12+, strict type hints throughout, Ruff for formatting and linting, and pytest for testing. See CONTRIBUTING.md for commit conventions, pre-commit hooks, and code standards.


License

MIT License — use freely in personal and commercial projects.


Credits

Built with FastMCP 3.0 and:

  • FastEmbed — local ONNX embeddings (configurable, default BAAI/bge-small-en-v1.5)
  • SimpleVecDB — HNSW vector storage with FTS5 keyword search
  • Semantic summarization based on TCRA-LLM (arXiv:2310.15556)
  • BLAKE3 cryptographic hashing for content freshness
  • LRU-K frequency-aware cache eviction



Download files


Source Distribution

semantic_cache_mcp-0.3.2.tar.gz (400.8 kB)

Uploaded Source

Built Distribution


semantic_cache_mcp-0.3.2-py3-none-any.whl (98.2 kB)

Uploaded Python 3

File details

Details for the file semantic_cache_mcp-0.3.2.tar.gz.

File metadata

  • Download URL: semantic_cache_mcp-0.3.2.tar.gz
  • Upload date:
  • Size: 400.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for semantic_cache_mcp-0.3.2.tar.gz
Algorithm | Hash digest
SHA256 | c5a64e0da34a01f565af2e515ef67c4d0a72eaa50657a8b47354f7a44bcacf1b
MD5 | 767ce8373f1496045a50424ace0a59fa
BLAKE2b-256 | 65e650efc3fcc0cb2b4e6df75fc920f9d5381331bdae2d25bcf441a4d7407819


Provenance

The following attestation bundles were made for semantic_cache_mcp-0.3.2.tar.gz:

Publisher: release.yml on CoderDayton/semantic-cache-mcp

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file semantic_cache_mcp-0.3.2-py3-none-any.whl.


File hashes

Hashes for semantic_cache_mcp-0.3.2-py3-none-any.whl
Algorithm | Hash digest
SHA256 | 35d1b399ec8aff6cb90677c8b5e442994b5bad7f8aeb85e09ee94db8449eb40d
MD5 | 3ed0e31f1592b99a1c2eafa947ecf867
BLAKE2b-256 | bbbfead66b2785ae6a44ddd61837389486bb35f36be09dd738b4d3a4ddee11c4


Provenance

The following attestation bundles were made for semantic_cache_mcp-0.3.2-py3-none-any.whl:

Publisher: release.yml on CoderDayton/semantic-cache-mcp

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
