
Semantic Cache MCP


Lightweight MCP server for semantic file caching with 80%+ token reduction


Python 3.12+ · FastMCP 3.0 · License: MIT


Reduce Claude Code token usage by 80%+ with intelligent file caching.

Semantic Cache MCP is a Model Context Protocol server that eliminates redundant token consumption when Claude reads files. Instead of sending full file contents on every request, it returns diffs for changed files, suppresses unchanged files entirely, and intelligently summarizes large files — all transparently through 13 purpose-built MCP tools.
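
The three-state read behavior can be pictured with a minimal sketch (hypothetical helper, not the server's internal API; the real server keys freshness on BLAKE3 content hashes rather than comparing raw strings):

# Minimal illustration of the three-state read. Assumes nothing about the
# package's internals beyond the documented behavior.
import difflib

_cache: dict[str, str] = {}  # path -> content as last sent to the model

def smart_read(path: str) -> str:
    content = open(path, encoding="utf-8").read()
    prev = _cache.get(path)
    if prev is None:               # state 1: first read -> full content, then cache
        _cache[path] = content
        return content
    if prev == content:            # state 2: unchanged -> tiny marker (~5 tokens)
        return '{"unchanged": true}'
    _cache[path] = content         # state 3: modified -> unified diff only
    return "".join(difflib.unified_diff(
        prev.splitlines(keepends=True), content.splitlines(keepends=True),
        fromfile="cached", tofile="current"))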


Features

  • 80%+ Token Reduction — Unchanged files cost ~0 tokens; changed files return diffs only
  • Automatic Three-State Reads — First read (full + cache), unchanged ("unchanged":true, 99% savings), modified (diff, 80–95% savings) — fully automatic, no configuration
  • Semantic Search — Hybrid BM25 + HNSW vector search via local ONNX embeddings (configurable model, default BAAI/bge-small-en-v1.5), no API keys, works offline (a generic hybrid-scoring sketch follows this list)
  • Batch Embedding — batch_read pre-scans all new/changed files and embeds them in a single model call (N calls → 1)
  • Content Hash Freshness — BLAKE3 hash detects when mtime changes but content is identical (touch, git checkout) — returns cached instead of re-reading
  • Grep — Regex/literal pattern search across cached files with line numbers and context
  • Semantic Summarization — 50–80% token savings on large files, structure preserved
  • DoS Protection — Write size, edit size, and match count limits enforced at every boundary
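
The hybrid search bullet above combines a keyword ranking with a vector ranking. Reciprocal rank fusion is one common way to merge two such lists; the sketch below is a generic illustration of that idea, not the package's documented fusion method:

# Generic reciprocal-rank-fusion over two rankings (illustrative only).
def rrf(bm25_ranking: list[str], vector_ranking: list[str], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)  # best fused score first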

Installation

Add to Claude Code settings (~/.claude/settings.json):

Option 1 — uvx (always runs the latest version):

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"]
    }
  }
}

Option 2 — uv tool install:

uv tool install semantic-cache-mcp
{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp"
    }
  }
}

Restart Claude Code.

GPU Acceleration (Optional)

For NVIDIA GPU acceleration, install with the gpu extra:

uv tool install "semantic-cache-mcp[gpu]"
# or with uvx: uvx "semantic-cache-mcp[gpu]"

Then set EMBEDDING_DEVICE=gpu in your MCP config env block. Falls back to CPU automatically if CUDA is unavailable.
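
A quick way to confirm ONNX Runtime can actually see a CUDA device (a standalone check, not the package's own detection code; requires onnxruntime, or onnxruntime-gpu for CUDA):

import onnxruntime as ort

providers = ort.get_available_providers()
use_gpu = "CUDAExecutionProvider" in providers  # absent -> CPU fallback
print("embedding device:", "gpu" if use_gpu else "cpu")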

Custom Embedding Models

Any HuggingFace model with an ONNX export works — set EMBEDDING_MODEL in your env config:

"env": {
  "EMBEDDING_MODEL": "Snowflake/snowflake-arctic-embed-m-v2.0"
}

If the model isn't in fastembed's built-in list, it's automatically downloaded and registered from HuggingFace Hub on first startup (ONNX file integrity is verified via SHA256). See env_variables.md for model recommendations.
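
To sanity-check that a custom model resolves and embeds locally before wiring it into the MCP config, a small standalone script suffices (uses the fastembed package directly; the model name shown is the default, swap in your own):

from fastembed import TextEmbedding

model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")  # downloads on first use
vec = next(model.embed(["authentication middleware logic"]))
print(vec.shape)  # (384,) for bge-small-en-v1.5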

Block Native File Tools (Recommended)

Disable the client's built-in file tools so all file I/O routes through semantic-cache.

Claude Code — add to ~/.claude/settings.json:

{
  "permissions": {
    "deny": ["Read", "Edit", "Write"]
  }
}

OpenCode — add to ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "deny",
    "edit": "deny",
    "write": "deny"
  }
}

CLAUDE.md Configuration

Add to ~/.claude/CLAUDE.md to enforce semantic-cache globally:

## Tools

- MUST use `semantic-cache-mcp` instead of native I/O tools (80%+ token savings)

Tools

Core

  • read — Single-file cache-aware read. Returns full content on first read, unchanged markers on cache hits, diffs on modifications, and supports offset/limit for targeted recovery.
  • delete — Single-path delete for one file or symlink, with cache eviction and dry_run=true. Intentionally does not support globs, recursive delete, or real-directory delete.
  • write — Full-file create or replace with cache refresh. Returns creation status or an overwrite diff, supports append=true, and can run formatters.
  • edit — Single-file exact edit using cached content. Best for one localized change; supports scoped and line-range replacement plus dry_run=true.
  • batch_edit — Multiple exact edits in one file with partial success reporting. Best when several localized changes belong in the same file.

Discovery

  • search — Cache-only semantic search for meaning or mixed keyword intent. Seed likely files first with batch_read; use grep for exact text.
  • similar — Cache-only nearest-neighbor lookup for one source file. Best after seeding a directory with batch_read.
  • glob — File discovery plus cache coverage. Use it to find candidates, then pass those paths into batch_read.
  • batch_read — Multi-file cache-aware read for seeding and retrieval. Handles globs, priorities, token budgets, unchanged suppression, and diff/full routing.
  • grep — Cache-only exact search with regex or literal matching, line numbers, and optional context. Best for symbols and exact strings.
  • diff — Explicit side-by-side file comparison with unified diff and semantic similarity. Use read instead for “what changed since last read?”.

Management

  • stats — Cache metrics, session usage (tokens saved, tool calls), and lifetime aggregates.
  • clear — Reset all cache entries.

Tool Reference

read — Single file, automatic caching
read path="/src/app.py"                        # automatic: full, unchanged, or diff
read path="/src/app.py" offset=120 limit=80    # lines 120–199 only

Automatic three states:

  • First read — full content + cached; normal token cost
  • Unchanged — "File unchanged (1,234 tokens cached)"; ~5 tokens
  • Modified — unified diff only; 5–20% of original

write — Create or overwrite files
write path="/src/new.py" content="..."
write path="/src/new.py" content="..." auto_format=true
write path="/src/large.py" content="...chunk1..." append=false   # first chunk
write path="/src/large.py" content="...chunk2..." append=true    # subsequent chunks
edit — Find/replace with three modes
# Mode A — find/replace: searches entire file
edit path="/src/app.py" old_string="def foo():" new_string="def foo(x: int):"
edit path="/src/app.py" old_string="..." new_string="..." replace_all=true auto_format=true

# Mode B — scoped find/replace: search only within line range (shorter old_string suffices)
edit path="/src/app.py" old_string="pass" new_string="return x" start_line=42 end_line=42

# Mode C — line replace: replace entire range, no old_string needed (maximum token savings)
edit path="/src/app.py" new_string="    return result\n" start_line=80 end_line=83

Mode selection:

  • Find/replace — old_string + new_string. Best for unique strings when no line numbers are known.
  • Scoped — old_string + new_string + start_line/end_line. Shorter context when read gave you line numbers.
  • Line replace — new_string + start_line/end_line (no old_string). Maximum token savings when line numbers are known.
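
A toy dispatch over the three modes above (pure-Python illustration, not the server's implementation, which adds validation and the size limits listed under Safety Limits):

def apply_edit(text: str, old: str | None = None, new: str = "",
               start_line: int | None = None, end_line: int | None = None) -> str:
    lines = text.splitlines(keepends=True)
    if start_line is not None and old is None:      # Mode C: replace the whole line range
        lines[start_line - 1:end_line] = [new]
        return "".join(lines)
    if start_line is not None:                      # Mode B: find/replace within the range
        scope = "".join(lines[start_line - 1:end_line])
        lines[start_line - 1:end_line] = [scope.replace(old, new, 1)]
        return "".join(lines)
    return text.replace(old, new, 1)                # Mode A: whole-file find/replace
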
batch_edit — Multiple edits in one call
# Mode A — find/replace: [old, new]
batch_edit path="/src/app.py" edits='[["old1","new1"],["old2","new2"]]'

# Mode B — scoped: [old, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[["pass","return x",42,42]]'

# Mode C — line replace: [null, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[[null,"    return result\n",80,83]]'

# Mixed modes in one call (object syntax also supported)
batch_edit path="/src/app.py" edits='[
  ["old1", "new1"],
  {"old": "pass", "new": "return x", "start_line": 42, "end_line": 42},
  {"old": null, "new": "    return result\n", "start_line": 80, "end_line": 83}
]' auto_format=true
search — Semantic search across cached files
search query="authentication middleware logic" k=5
search query="database connection pooling" k=3
similar — Find semantically related files
similar path="/src/auth.py" k=3
similar path="/tests/test_auth.py" k=5
glob — Pattern matching with cache awareness
glob pattern="**/*.py" directory="./src"
glob pattern="**/*.py" directory="./src" cached_only=true
batch_read — Multiple files with token budget
batch_read paths="/src/a.py,/src/b.py" max_total_tokens=50000
batch_read paths='["/src/a.py","/src/b.py"]' priority="/src/main.py"
batch_read paths="/src/*.py" max_total_tokens=30000
  • Glob expansion: src/*.py expanded inline (max 50 files per glob)
  • Priority ordering: priority paths read first, remainder sorted smallest-first
  • Token budget: stops reading new files once max_total_tokens is reached; skipped files include an est_tokens hint (see the planner sketch after this list)
  • Unchanged suppression: unchanged files appear in summary.unchanged with no content (zero tokens)
  • Batch embedding: pre-scans all new/changed files and embeds them in a single model call before reading — N model calls reduced to 1
  • Recovery: use read with offset/limit for targeted line-range recovery after truncation or context loss
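
The priority and budget ordering can be pictured with a toy planner (hypothetical helper; the real tool also expands globs, suppresses unchanged files, and attaches est_tokens hints to skipped files):

import os

def plan_batch(paths: list[str], priority: list[str], max_total_tokens: int) -> list[str]:
    est = {p: os.path.getsize(p) // 4 for p in paths}     # rough ~4 bytes/token estimate
    rest = sorted((p for p in paths if p not in priority), key=est.__getitem__)
    ordered = [p for p in priority if p in paths] + rest  # priority first, then smallest-first
    picked, budget = [], max_total_tokens
    for p in ordered:
        if est[p] > budget:
            break                                         # budget exhausted; the rest are skipped
        picked.append(p)
        budget -= est[p]
    return picked
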
diff — Compare two files
diff path1="/src/v1.py" path2="/src/v2.py"

Configuration

Environment Variables

  • LOG_LEVEL (default: INFO) — Logging verbosity (DEBUG, INFO, WARNING, ERROR)
  • TOOL_OUTPUT_MODE (default: compact) — Response detail (compact, normal, debug)
  • TOOL_MAX_RESPONSE_TOKENS (default: 0) — Global response token cap (0 = disabled)
  • TOOL_TIMEOUT (default: 30) — Seconds before a tool call times out (auto-resets the executor)
  • MAX_CONTENT_SIZE (default: 100000) — Max bytes returned by read operations
  • MAX_CACHE_ENTRIES (default: 10000) — Max cache entries before LRU-K eviction
  • EMBEDDING_DEVICE (default: cpu) — Embedding hardware: cpu, cuda (GPU), or auto (detect)
  • EMBEDDING_MODEL (default: BAAI/bge-small-en-v1.5) — FastEmbed model for search/similarity
  • SEMANTIC_CACHE_DIR (default: platform-specific) — Override the cache/database directory path

See docs/env_variables.md for detailed descriptions, model selection guidance, and examples.

Safety Limits

  • MAX_WRITE_SIZE — 10 MB; protects against memory exhaustion via large writes
  • MAX_EDIT_SIZE — 10 MB; protects against memory exhaustion via large file edits
  • MAX_MATCHES — 10,000; protects against CPU exhaustion via unbounded replace_all
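
A minimal sketch of how such boundary checks look (constants mirror the limits above; function names and error wording are illustrative):

MAX_WRITE_SIZE = 10 * 1024 * 1024   # bytes
MAX_MATCHES = 10_000

def guard_write(content: str) -> None:
    if len(content.encode("utf-8")) > MAX_WRITE_SIZE:
        raise ValueError(f"write exceeds {MAX_WRITE_SIZE} bytes")

def guard_replace_all(text: str, old: str) -> None:
    if text.count(old) > MAX_MATCHES:
        raise ValueError(f"refusing replace_all with more than {MAX_MATCHES} matches")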

MCP Server Config

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"],
      "env": {
        "LOG_LEVEL": "INFO",
        "TOOL_OUTPUT_MODE": "compact",
        "MAX_CONTENT_SIZE": "100000",
        "EMBEDDING_DEVICE": "cpu",
        "EMBEDDING_MODEL": "BAAI/bge-small-en-v1.5"
      }
    }
  }
}

Cache location: ~/.cache/semantic-cache-mcp/ (Linux), ~/Library/Caches/semantic-cache-mcp/ (macOS), %LOCALAPPDATA%\semantic-cache-mcp\ (Windows). Override with SEMANTIC_CACHE_DIR.


How It Works

┌─────────────┐     ┌──────────────┐     ┌──────────────────┐
│  Claude     │────▶│  smart_read  │────▶│  Cache Lookup    │
│  Code       │     │              │     │  (VectorStorage) │
└─────────────┘     └──────────────┘     └──────────────────┘
                           │
         ┌─────────────────┼─────────────────┐
         ▼                 ▼                 ▼
   ┌──────────┐     ┌──────────┐     ┌──────────────┐
   │Unchanged │     │ Changed  │     │  New / Large │
   │  ~0 tok  │     │  diff    │     │ summarize or │
   │  (99%)   │     │ (80-95%) │     │ full content │
   └──────────┘     └──────────┘     └──────────────┘

Performance

Measured on this project's 30 source files (~136K tokens). Benchmarks run on a standard dev machine (CPU embeddings).

Token Savings

  • Cold read (first read, no cache) — 0% (baseline)
  • Unchanged re-read (same files, no modifications) — 99.1%
  • Content hash (files touched: mtime changed, content identical) — 99.1%
  • Small edits (~5% of lines changed in 30% of files) — 98.1%
  • Batch read (all files via batch_read) — 99.1%
  • Search (5 queries × k=5, previews vs full reads) — 98.4%
  • Overall (cached phases 2–6 combined) — 98.8%

Operation Latency

  • Unchanged read (single file) — 2 ms
  • Unchanged re-read (29 files) — 25 ms
  • Batch read (29 files, diff mode) — 35 ms
  • Cold read (29 files, incl. embed) — 2,554 ms
  • Write (200-line file) — 47 ms
  • Edit (scoped find/replace) — 48 ms
  • Semantic search (k=5) — 4 ms
  • Semantic search (k=10) — 5 ms
  • Find similar (k=3) — 49 ms
  • Grep (literal) — 1 ms
  • Grep (regex) — 2 ms
  • Embedding model warmup — 206 ms
  • Single embedding (largest file) — 47 ms
  • Batch embedding (10 files) — 469 ms

Run benchmarks yourself:

uv run python benchmarks/benchmark_token_savings.py    # token savings
uv run python benchmarks/benchmark_performance.py      # operation latency

See docs/performance.md for full benchmarks and methodology.


Documentation

  • Architecture — component design, algorithms, data flow
  • Performance — optimization techniques, benchmarks
  • Security — threat model, input validation, size limits
  • Advanced Usage — programmatic API, custom storage backends
  • Troubleshooting — common issues, debug logging
  • Environment Variables — all configurable env vars with defaults and examples

Contributing

git clone https://github.com/CoderDayton/semantic-cache-mcp.git
cd semantic-cache-mcp
uv sync
uv run pytest

See CONTRIBUTING.md for commit conventions, pre-commit hooks, and code standards.


License

MIT License — use freely in personal and commercial projects.


Credits

Built with FastMCP 3.0 and:

  • FastEmbed — local ONNX embeddings (configurable, default BAAI/bge-small-en-v1.5)
  • SimpleVecDB ≥ 2.5.0 — HNSW vector storage with FTS5 keyword search, atomic delete_collection, and opt-in embedding persistence (store_embeddings=True)
  • Semantic summarization based on TCRA-LLM (arXiv:2310.15556)
  • BLAKE3 cryptographic hashing for content freshness
  • LRU-K frequency-aware cache eviction (sketched below)
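
For readers unfamiliar with LRU-K, a compact illustration of the eviction rule (generic algorithm sketch, not the package's implementation):

import time
from collections import defaultdict, deque

K = 2  # LRU-2: rank entries by their 2nd-most-recent access
access: dict[str, deque] = defaultdict(lambda: deque(maxlen=K))

def touch(key: str) -> None:
    access[key].append(time.monotonic())

def victim() -> str:
    # Backward K-distance: the K-th most recent access time. Entries with
    # fewer than K recorded accesses count as infinitely old, so rarely
    # touched entries are evicted before frequently touched ones.
    def kth_recent(key: str) -> float:
        h = access[key]
        return h[0] if len(h) == K else float("-inf")
    return min(access, key=kth_recent)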
