Lightweight MCP server for semantic file caching with 80%+ token reduction


Semantic Cache MCP Logo

Semantic Cache MCP

Support on Ko-fi

Python 3.12+ FastMCP 3.0 License: MIT


Cut your MCP client's token usage by 98% on cached reads. Respond in milliseconds.

Semantic Cache MCP is a Model Context Protocol server that replaces redundant full-file reads with marker hits, unified diffs, and semantic summaries. Thirteen tools (read, batch_read, write, edit, batch_edit, search, grep, glob, similar, diff, delete, clear, stats) route every file operation through one cache-aware layer, so an MCP-capable agent skips files it has already seen.


Why this exists

In order of impact:

1. Reads stop costing tokens. The first read seeds the cache. Re-reads of unchanged files return a 5-token marker (mtime match, no disk I/O). Modified files return a unified diff. Files larger than the budget collapse to a semantic skeleton that preserves structure rather than slicing at a byte offset.

2. Search and grep run on the cache, not the disk. Semantic search (hybrid BM25 + HNSW), similar-file lookup, glob, and grep all read from the same indexed corpus that read/batch_read populate. An in-session result LRU collapses repeated queries to sub-millisecond hits.

3. Mutations are bounded by default. write, edit, and batch_edit enforce size and match limits, support dry_run, can run formatters, and refresh the cache atomically. Local FastEmbed is the default embedding provider; OpenAI-compatible endpoints are opt-in.


Installation

Add to Claude Code settings (~/.claude.json):

Option 1 — uvx (always runs the latest version):

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"]
    }
  }
}

Option 2 — uv tool install:

uv tool install semantic-cache-mcp
{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp"
    }
  }
}

Restart Claude Code.

GPU Acceleration (Optional)

For NVIDIA GPU acceleration, install with the gpu extra:

uv tool install "semantic-cache-mcp[gpu]"
# or with uvx: uvx "semantic-cache-mcp[gpu]"

Then set EMBEDDING_DEVICE=cuda (or auto) in your MCP config env block; the server falls back to CPU automatically if CUDA is unavailable.
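
For example, a minimal env block in the same shape as the other config examples on this page:

"env": {
  "EMBEDDING_DEVICE": "cuda"
}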

Custom Embedding Models

Any HuggingFace model with an ONNX export works — set EMBEDDING_MODEL in your env config:

"env": {
  "EMBEDDING_MODEL": "Snowflake/snowflake-arctic-embed-m-v2.0"
}

If the model isn't in fastembed's built-in list, it's automatically downloaded and registered from HuggingFace Hub on first startup (ONNX file integrity is verified via SHA256). See env_variables.md for model recommendations.

OpenAI-Compatible Embeddings

Local FastEmbed remains the default. To route embeddings through an OpenAI-compatible provider instead, enable it in the MCP env block. Defaults target Ollama:

"env": {
  "OPENAI_EMBEDDINGS_ENABLED": "true",
  "OPENAI_BASE_URL": "http://localhost:11434/v1",
  "OPENAI_API_KEY": "ollama",
  "OPENAI_EMBEDDING_MODEL": "nomic-embed-text"
}

Run ollama pull nomic-embed-text first if the model is not installed. For hosted OpenAI, set OPENAI_BASE_URL=https://api.openai.com/v1, use a real OPENAI_API_KEY, and choose an embedding model such as text-embedding-3-small. OPENAI_EMBEDDING_DIMENSIONS is optional; leave it unset to infer the returned vector size.
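
Assembled from the values above, a hosted-OpenAI block might look like this (the API key is a placeholder):

"env": {
  "OPENAI_EMBEDDINGS_ENABLED": "true",
  "OPENAI_BASE_URL": "https://api.openai.com/v1",
  "OPENAI_API_KEY": "sk-your-key",
  "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small"
}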

Block Native File Tools (Recommended)

Disable the client's built-in file tools so all file I/O routes through semantic-cache.

Claude Code — add to ~/.claude/settings.json:

{
  "permissions": {
    "deny": ["Read", "Edit", "Write"]
  }
}

OpenCode — add to ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "deny",
    "edit": "deny",
    "write": "deny"
  }
}

CLAUDE.md Configuration

Add to ~/.claude/CLAUDE.md to enforce semantic-cache globally:

## Tools

- MUST use `semantic-cache-mcp` instead of native I/O tools (98% token savings on cached reads)

Tools

Core

Tool Description
read Single-file cache-aware read. Returns full content on first read, unchanged markers on cache hits, diffs on modifications, and supports offset/limit for targeted recovery.
delete Single-path delete for one file or symlink, with cache eviction and dry_run=true. Intentionally does not support globs, recursive delete, or real-directory delete.
write Full-file create or replace with cache refresh. Returns creation status or an overwrite diff, supports append=true, and can run formatters.
edit Single-file exact edit using cached content. Best for one localized change; supports scoped and line-range replacement plus dry_run=true.
batch_edit Multiple exact edits in one file with partial success reporting. Best when several localized changes belong in the same file.

Discovery

Tool Description
search Cache-only semantic search for meaning or mixed keyword intent. Seed likely files first with batch_read; use grep for exact text.
similar Cache-only nearest-neighbor lookup for one source file. Best after seeding a directory with batch_read.
glob File discovery plus cache coverage. Use it to find candidates, then pass those paths into batch_read.
batch_read Multi-file cache-aware read for seeding and retrieval. Handles globs, priorities, token budgets, unchanged suppression, and diff/full routing.
grep Cache-only exact search with regex or literal matching, line numbers, and optional context. Best for symbols and exact strings.
diff Explicit side-by-side file comparison with unified diff and semantic similarity. Use read instead for “what changed since last read?”.

Management

Tool Description
stats Cache metrics, session usage (tokens saved, tool calls), and lifetime aggregates.
clear Reset all cache entries.

Tool Reference

The table above is the authoritative tool map. This section only shows the common call shapes.

read — Single file, automatic caching
read path="/src/app.py"                        # automatic: full, unchanged, or diff
read path="/src/app.py" offset=120 limit=80    # lines 120–199 only

Three automatic states:

State Response Token cost
First read Full content + cached Normal
Unchanged "File unchanged (1,234 tokens cached)" ~5 tokens
Modified Unified diff only 5–20% of original
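
For scale: the 1,234-token file above, re-read unchanged, costs roughly 5 tokens instead of 1,234 — about a 99.6% reduction on that single call.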
write — Create or overwrite files
write path="/src/new.py" content="..."
write path="/src/new.py" content="..." auto_format=true
write path="/src/large.py" content="...chunk1..." append=false   # first chunk
write path="/src/large.py" content="...chunk2..." append=true    # subsequent chunks
edit — Find/replace with three modes
# Mode A — find/replace: searches entire file
edit path="/src/app.py" old_string="def foo():" new_string="def foo(x: int):"
edit path="/src/app.py" old_string="..." new_string="..." replace_all=true auto_format=true

# Mode B — scoped find/replace: search only within line range (shorter old_string suffices)
edit path="/src/app.py" old_string="pass" new_string="return x" start_line=42 end_line=42

# Mode C — line replace: replace entire range, no old_string needed (maximum token savings)
edit path="/src/app.py" new_string="    return result\n" start_line=80 end_line=83

Mode selection:

Mode Parameters Best for
Find/replace old_string + new_string Unique strings, no line numbers known
Scoped old_string + new_string + start_line/end_line Shorter context when read gave you line numbers
Line replace new_string + start_line/end_line (no old_string) Maximum token savings when line numbers are known
batch_edit — Multiple edits in one call
# Mode A — find/replace: [old, new]
batch_edit path="/src/app.py" edits='[["old1","new1"],["old2","new2"]]'

# Mode B — scoped: [old, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[["pass","return x",42,42]]'

# Mode C — line replace: [null, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[[null,"    return result\n",80,83]]'

# Mixed modes in one call (object syntax also supported)
batch_edit path="/src/app.py" edits='[
  ["old1", "new1"],
  {"old": "pass", "new": "return x", "start_line": 42, "end_line": 42},
  {"old": null, "new": "    return result\n", "start_line": 80, "end_line": 83}
]' auto_format=true
batch_read — Multiple files with token budget
batch_read paths="/src/a.py,/src/b.py" max_total_tokens=50000
batch_read paths='["/src/a.py","/src/b.py"]' priority="/src/main.py"
batch_read paths="/src/*.py" max_total_tokens=30000
  • Expands simple globs, honors priority, enforces max_total_tokens, and reports skipped paths with recovery hints.
  • Unchanged files are collapsed into the summary instead of repeating content.
discovery — Search, similar, glob, grep, diff
search query="authentication middleware logic" k=5
similar path="/src/auth.py" k=3
glob pattern="**/*.py" directory="./src" cached_only=true
grep pattern="class Cache" path="src/**/*.py"
diff path1="/src/v1.py" path2="/src/v2.py"

Configuration

Environment Variables

Variable Default Description
LOG_LEVEL INFO Logging verbosity (DEBUG, INFO, WARNING, ERROR)
TOOL_OUTPUT_MODE compact Response detail (compact, normal, debug)
TOOL_MAX_RESPONSE_TOKENS 0 Global response token cap (0 = disabled)
TOOL_TIMEOUT 30 Seconds before tool call times out (auto-resets executor)
MAX_CONTENT_SIZE 100000 Max bytes returned by read operations
MAX_CACHE_ENTRIES 10000 Max cache entries before LRU-K eviction
EMBEDDING_DEVICE cpu Embedding hardware: cpu, cuda (GPU), auto (detect)
EMBEDDING_MODEL BAAI/bge-small-en-v1.5 FastEmbed model for search/similarity (options)
OPENAI_EMBEDDINGS_ENABLED false Use OpenAI-compatible remote embeddings instead of local FastEmbed
OPENAI_BASE_URL http://localhost:11434/v1 OpenAI-compatible base URL; default targets Ollama
OPENAI_API_KEY ollama API key for the remote embedding provider
OPENAI_EMBEDDING_MODEL nomic-embed-text Remote embedding model name
OPENAI_EMBEDDING_DIMENSIONS (inferred) Optional requested/expected remote embedding dimension
SEMANTIC_CACHE_DIR (platform) Override cache/database directory path

See docs/env_variables.md for detailed descriptions, model selection guidance, and examples.

Safety Limits

Limit Value Protects Against
MAX_WRITE_SIZE 10 MB Memory exhaustion via large writes
MAX_EDIT_SIZE 10 MB Memory exhaustion via large file edits
MAX_MATCHES 10,000 CPU exhaustion via unbounded replace_all

MCP Server Config

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"],
      "env": {
        "LOG_LEVEL": "INFO",
        "TOOL_OUTPUT_MODE": "compact",
        "MAX_CONTENT_SIZE": "100000",
        "EMBEDDING_DEVICE": "cpu",
        "EMBEDDING_MODEL": "BAAI/bge-small-en-v1.5"
      }
    }
  }
}

Cache location: ~/.cache/semantic-cache-mcp/ (Linux), ~/Library/Caches/semantic-cache-mcp/ (macOS), %LOCALAPPDATA%\semantic-cache-mcp\ (Windows). Override with SEMANTIC_CACHE_DIR.
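
To relocate it, for example (the path shown is an arbitrary placeholder):

"env": {
  "SEMANTIC_CACHE_DIR": "/data/semantic-cache"
}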


How It Works

┌──────────┐     ┌────────────┐     ┌──────────────────────────┐
│  Claude  │────▶│ smart_read │────▶│ stat() + cache lookup    │
│   Code   │     │            │     │ (BEFORE any disk read)   │
└──────────┘     └────────────┘     └──────────────────────────┘
                        │
       ┌────────────────┼─────────────────┬──────────────────┐
       ▼                ▼                 ▼                  ▼
 ┌──────────┐    ┌──────────┐      ┌──────────┐      ┌────────────┐
 │ mtime    │    │ mtime    │      │ Changed  │      │ New /      │
 │ match    │    │ drift,   │      │ content  │      │ Large      │
 │ FAST     │    │ hash     │      │ → diff   │      │ → summary  │
 │ PATH     │    │ match    │      │ (80-95%) │      │  or full   │
 │ ~5 tok   │    │ ~5 tok   │      └──────────┘      └────────────┘
 │ (99%)    │    │ (99%)    │
 │ ~1 ms    │    │ ~1 ms    │
 │ no I/O   │    │ +update  │
 └──────────┘    └──────────┘
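
A minimal Python sketch of this routing — illustrative only, not the server's actual implementation — assuming a cache entry that stores mtime, a BLAKE3 digest, and the cached content:

import difflib
import os
from blake3 import blake3  # pip install blake3

class Entry:
    def __init__(self, mtime: float, digest: str, content: str):
        self.mtime, self.digest, self.content = mtime, digest, content

cache: dict[str, Entry] = {}

def smart_read(path: str) -> str:
    st = os.stat(path)                        # stat() BEFORE any disk read
    entry = cache.get(path)
    if entry and st.st_mtime == entry.mtime:
        return "File unchanged"               # fast path: ~5 tokens, no disk I/O
    with open(path, encoding="utf-8") as f:   # mtime drifted or file is new
        text = f.read()
    digest = blake3(text.encode()).hexdigest()
    if entry and digest == entry.digest:      # content identical, mtime drifted
        entry.mtime = st.st_mtime             # refresh mtime for the next fast path
        return "File unchanged"
    if entry:                                 # changed content: return a diff
        diff = "".join(difflib.unified_diff(
            entry.content.splitlines(keepends=True),
            text.splitlines(keepends=True),
            fromfile="cached", tofile="disk"))
        cache[path] = Entry(st.st_mtime, digest, text)
        return diff
    cache[path] = Entry(st.st_mtime, digest, text)
    return text                               # first read: full content seeds the cache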

search works the same way. An in-session LRU keyed on (query, k, directory) returns warm hits in ~10 µs; misses fall through to embed + BM25 + HNSW. Every cache mutation (put, clear, delete_path, update_mtime) invalidates the result LRU, so callers never see a result that predates a write.
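
A sketch of such a result LRU, again illustrative rather than the project's actual code; invalidate() stands in for whatever hook the cache mutations call:

from collections import OrderedDict

class ResultLRU:
    def __init__(self, maxsize: int = 128):
        self._items: OrderedDict = OrderedDict()
        self._maxsize = maxsize

    def get(self, query: str, k: int, directory: str):
        key = (query, k, directory)           # the documented cache key
        if key not in self._items:
            return None                       # miss: embed + BM25 + HNSW
        self._items.move_to_end(key)          # refresh recency
        return self._items[key]

    def put(self, query: str, k: int, directory: str, results: list) -> None:
        self._items[(query, k, directory)] = results
        if len(self._items) > self._maxsize:
            self._items.popitem(last=False)   # evict the least-recently-used entry

    def invalidate(self) -> None:             # called on put/clear/delete_path/update_mtime
        self._items.clear()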


Performance

Measured on this project's 43 source files (168,614 tokens), CPU embeddings, i9-13900K, commit 5cd7100. Reproducible via --json output for CI diffing.

Token savings — 98.5% overall (phases 2–6)

Phase Scenario Savings
Overall (cached, phases 2–6) Aggregate token reduction 98.5%
Unchanged re-read mtime match — fast path skips disk I/O 98.9%
Content hash mtime drifted, BLAKE3 still matches 98.9%
Batch read All files via batch_read, 200K budget 98.9%
Search previews 5 queries × k=5, previews vs full reads 98.3%
Small edits Real ~5% line changes in 30% of files 97.3%
Cold read First read, no cache (baseline) 0%

Latency — unchanged reads ~1 ms; repeat searches ~10 µs

Operation p50 Notes
Single unchanged read (fast path) 1.1 ms mtime + cache hit; no disk I/O
Single diff read (changed file) 1.0 ms hash check + unified diff
Search k=5 (cache hit) < 0.01 ms in-session LRU; 2,000×+ vs cold
Search k=5 (cache miss) 5.6 ms embed query + hybrid BM25/HNSW
Edit (scoped find/replace) 3.3 ms uses cached content
Find similar (k=3) 2.2 ms cached embedding reused
Grep (literal def ) 1.4 ms FTS5 over cached corpus
Grep (regex) 2.1 ms regex compiled once
Batch read (43 files, diff mode) 40.2 ms one ONNX inference for all new/changed files
Unchanged re-read (43 files) 26.9 ms whole-corpus pass
Cold read (43 files, total) 1,990 ms includes disk I/O, tokenization, embedding
Write (200-line file) 49.1 ms creates + caches + embeds
Single embedding (largest file) 47 ms ONNX, single thread
Model warmup (one-time) 195 ms startup only

Run benchmarks yourself:

uv run python benchmarks/benchmark_token_savings.py    # token savings
uv run python benchmarks/benchmark_performance.py      # operation latency
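
Per the note above, the benchmark scripts support --json output for CI diffing, e.g.:

uv run python benchmarks/benchmark_token_savings.py --json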

See docs/performance.md for full benchmarks and methodology.


Documentation

Guide Description
Architecture Component design, algorithms, data flow
Performance Optimization techniques, benchmarks
Security Threat model, input validation, size limits
Advanced Usage Programmatic API, custom storage backends
Troubleshooting Common issues, debug logging
Environment Variables All configurable env vars with defaults and examples

Contributing

git clone https://github.com/CoderDayton/semantic-cache-mcp.git
cd semantic-cache-mcp
uv sync
uv run pytest

See CONTRIBUTING.md for commit conventions, pre-commit hooks, and code standards.


License

MIT License — use freely in personal and commercial projects.


Credits

Built with FastMCP 3.0 and:

  • FastEmbed — local ONNX embeddings (configurable, default BAAI/bge-small-en-v1.5)
  • SimpleVecDB ≥ 2.6.0 — HNSW vector storage with FTS5 keyword search, atomic delete_collection, and opt-in embedding persistence (store_embeddings=True)
  • Semantic summarization based on TCRA-LLM (arXiv:2310.15556)
  • BLAKE3 cryptographic hashing for content freshness
  • LRU-K frequency-aware cache eviction
