Semantic Cache MCP

Python 3.12+ · FastMCP 3.0 · License: MIT


Reduce Claude Code token usage by 80%+ with intelligent file caching.

Semantic Cache MCP is a Model Context Protocol server that eliminates redundant token consumption when Claude reads files. Instead of sending full file contents on every request, it returns diffs for changed files, suppresses unchanged files entirely, and intelligently summarizes large files — all transparently through 13 purpose-built MCP tools.


Features

  • Cache-aware reads — First read returns content, unchanged re-reads return a tiny marker, changed files return compact diffs.
  • Search without re-reading — Semantic search, similar-file lookup, grep, and glob all operate over cached project content.
  • Configurable embeddings — Local FastEmbed is the default; OpenAI-compatible providers are available when explicitly enabled.
  • Large-file discipline — Token budgets, semantic summarization, and content hashing keep responses small without losing freshness.
  • Bounded writes and edits — Size limits, match limits, dry runs, formatting hooks, and cache refreshes are handled at the tool boundary.

Installation

Add to Claude Code settings (~/.claude/settings.json):

Option 1 — uvx (always runs the latest version):

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"]
    }
  }
}

Option 2 — uv tool install:

uv tool install semantic-cache-mcp

{
  "mcpServers": {
    "semantic-cache": {
      "command": "semantic-cache-mcp"
    }
  }
}

Restart Claude Code.

GPU Acceleration (Optional)

For NVIDIA GPU acceleration, install with the gpu extra:

uv tool install "semantic-cache-mcp[gpu]"
# or with uvx: uvx "semantic-cache-mcp[gpu]"

Then set EMBEDDING_DEVICE=gpu in your MCP config env block. Falls back to CPU automatically if CUDA is unavailable.
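
For example, a minimal sketch combining the gpu extra with the env setting, mirroring the uvx config shape from Installation (the value gpu follows the instruction above; the variable reference later in this README also lists cuda and auto):

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp[gpu]"],
      "env": {
        "EMBEDDING_DEVICE": "gpu"
      }
    }
  }
}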

Custom Embedding Models

Any HuggingFace model with an ONNX export works — set EMBEDDING_MODEL in your env config:

"env": {
  "EMBEDDING_MODEL": "Snowflake/snowflake-arctic-embed-m-v2.0"
}

If the model isn't in fastembed's built-in list, it's automatically downloaded and registered from HuggingFace Hub on first startup (ONNX file integrity is verified via SHA256). See docs/env_variables.md for model recommendations.

OpenAI-Compatible Embeddings

Local FastEmbed remains the default. To route embeddings through an OpenAI-compatible provider instead, enable it in the MCP env block. Defaults target Ollama:

"env": {
  "OPENAI_EMBEDDINGS_ENABLED": "true",
  "OPENAI_BASE_URL": "http://localhost:11434/v1",
  "OPENAI_API_KEY": "ollama",
  "OPENAI_EMBEDDING_MODEL": "nomic-embed-text"
}

Run ollama pull nomic-embed-text first if the model is not installed. For hosted OpenAI, set OPENAI_BASE_URL=https://api.openai.com/v1, use a real OPENAI_API_KEY, and choose an embedding model such as text-embedding-3-small. OPENAI_EMBEDDING_DIMENSIONS is optional; leave it unset to infer the returned vector size.
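
As a concrete variant, here is a hosted-OpenAI env block assembled from the guidance above (the key shown is a placeholder; substitute your own):

"env": {
  "OPENAI_EMBEDDINGS_ENABLED": "true",
  "OPENAI_BASE_URL": "https://api.openai.com/v1",
  "OPENAI_API_KEY": "sk-your-key",
  "OPENAI_EMBEDDING_MODEL": "text-embedding-3-small"
}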

Block Native File Tools (Recommended)

Disable the client's built-in file tools so all file I/O routes through semantic-cache.

Claude Code — add to ~/.claude/settings.json:

{
  "permissions": {
    "deny": ["Read", "Edit", "Write"]
  }
}

OpenCode — add to ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "deny",
    "edit": "deny",
    "write": "deny"
  }
}

CLAUDE.md Configuration

Add to ~/.claude/CLAUDE.md to enforce semantic-cache globally:

## Tools

- MUST use `semantic-cache-mcp` instead of native I/O tools (80%+ token savings)

Tools

Core

| Tool | Description |
| --- | --- |
| read | Single-file cache-aware read. Returns full content on first read, unchanged markers on cache hits, diffs on modifications, and supports offset/limit for targeted recovery. |
| delete | Single-path delete for one file or symlink, with cache eviction and dry_run=true. Intentionally does not support globs, recursive delete, or real-directory delete. |
| write | Full-file create or replace with cache refresh. Returns creation status or an overwrite diff, supports append=true, and can run formatters. |
| edit | Single-file exact edit using cached content. Best for one localized change; supports scoped and line-range replacement plus dry_run=true. |
| batch_edit | Multiple exact edits in one file with partial success reporting. Best when several localized changes belong in the same file. |

Discovery

| Tool | Description |
| --- | --- |
| search | Cache-only semantic search for meaning or mixed keyword intent. Seed likely files first with batch_read; use grep for exact text. |
| similar | Cache-only nearest-neighbor lookup for one source file. Best after seeding a directory with batch_read. |
| glob | File discovery plus cache coverage. Use it to find candidates, then pass those paths into batch_read. |
| batch_read | Multi-file cache-aware read for seeding and retrieval. Handles globs, priorities, token budgets, unchanged suppression, and diff/full routing. |
| grep | Cache-only exact search with regex or literal matching, line numbers, and optional context. Best for symbols and exact strings. |
| diff | Explicit side-by-side file comparison with unified diff and semantic similarity. Use read instead for "what changed since last read?". |

Management

| Tool | Description |
| --- | --- |
| stats | Cache metrics, session usage (tokens saved, tool calls), and lifetime aggregates. |
| clear | Reset all cache entries. |

Tool Reference

The tables above are the authoritative tool map. This section only shows the common call shapes.

read — Single file, automatic caching
read path="/src/app.py"                        # automatic: full, unchanged, or diff
read path="/src/app.py" offset=120 limit=80    # lines 120–199 only

Three response states, chosen automatically:

| State | Response | Token cost |
| --- | --- | --- |
| First read | Full content + cached | Normal |
| Unchanged | "File unchanged (1,234 tokens cached)" | ~5 tokens |
| Modified | Unified diff only | 5–20% of original |

write — Create or overwrite files
write path="/src/new.py" content="..."
write path="/src/new.py" content="..." auto_format=true
write path="/src/large.py" content="...chunk1..." append=false   # first chunk
write path="/src/large.py" content="...chunk2..." append=true    # subsequent chunks
edit — Find/replace with three modes
# Mode A — find/replace: searches entire file
edit path="/src/app.py" old_string="def foo():" new_string="def foo(x: int):"
edit path="/src/app.py" old_string="..." new_string="..." replace_all=true auto_format=true

# Mode B — scoped find/replace: search only within line range (shorter old_string suffices)
edit path="/src/app.py" old_string="pass" new_string="return x" start_line=42 end_line=42

# Mode C — line replace: replace entire range, no old_string needed (maximum token savings)
edit path="/src/app.py" new_string="    return result\n" start_line=80 end_line=83

Mode selection:

| Mode | Parameters | Best for |
| --- | --- | --- |
| Find/replace | old_string + new_string | Unique strings, no line numbers known |
| Scoped | old_string + new_string + start_line/end_line | Shorter context when read gave you line numbers |
| Line replace | new_string + start_line/end_line (no old_string) | Maximum token savings when line numbers are known |

batch_edit — Multiple edits in one call
# Mode A — find/replace: [old, new]
batch_edit path="/src/app.py" edits='[["old1","new1"],["old2","new2"]]'

# Mode B — scoped: [old, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[["pass","return x",42,42]]'

# Mode C — line replace: [null, new, start_line, end_line]
batch_edit path="/src/app.py" edits='[[null,"    return result\n",80,83]]'

# Mixed modes in one call (object syntax also supported)
batch_edit path="/src/app.py" edits='[
  ["old1", "new1"],
  {"old": "pass", "new": "return x", "start_line": 42, "end_line": 42},
  {"old": null, "new": "    return result\n", "start_line": 80, "end_line": 83}
]' auto_format=true
batch_read — Multiple files with token budget
batch_read paths="/src/a.py,/src/b.py" max_total_tokens=50000
batch_read paths='["/src/a.py","/src/b.py"]' priority="/src/main.py"
batch_read paths="/src/*.py" max_total_tokens=30000
  • Expands simple globs, honors priority, enforces max_total_tokens, and reports skipped paths with recovery hints.
  • Unchanged files are collapsed into the summary instead of repeating content.
discovery — Search, similar, glob, grep, diff
search query="authentication middleware logic" k=5
similar path="/src/auth.py" k=3
glob pattern="**/*.py" directory="./src" cached_only=true
grep pattern="class Cache" path="src/**/*.py"
diff path1="/src/v1.py" path2="/src/v2.py"
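
The management tools follow the same call-shape convention. The shapes below are assumed from the tool tables above rather than taken from separate docs; delete's dry_run flag is the one listed in its description.

management — stats, clear, delete
stats                                     # cache metrics, session and lifetime usage
clear                                     # reset all cache entries
delete path="/src/old.py" dry_run=true    # preview a single-file delete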

Configuration

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| LOG_LEVEL | INFO | Logging verbosity (DEBUG, INFO, WARNING, ERROR) |
| TOOL_OUTPUT_MODE | compact | Response detail (compact, normal, debug) |
| TOOL_MAX_RESPONSE_TOKENS | 0 | Global response token cap (0 = disabled) |
| TOOL_TIMEOUT | 30 | Seconds before a tool call times out (auto-resets the executor) |
| MAX_CONTENT_SIZE | 100000 | Max bytes returned by read operations |
| MAX_CACHE_ENTRIES | 10000 | Max cache entries before LRU-K eviction |
| EMBEDDING_DEVICE | cpu | Embedding hardware: cpu, cuda (GPU), auto (detect) |
| EMBEDDING_MODEL | BAAI/bge-small-en-v1.5 | FastEmbed model for search/similarity |
| OPENAI_EMBEDDINGS_ENABLED | false | Use OpenAI-compatible remote embeddings instead of local FastEmbed |
| OPENAI_BASE_URL | http://localhost:11434/v1 | OpenAI-compatible base URL; default targets Ollama |
| OPENAI_API_KEY | ollama | API key for the remote embedding provider |
| OPENAI_EMBEDDING_MODEL | nomic-embed-text | Remote embedding model name |
| OPENAI_EMBEDDING_DIMENSIONS | (inferred) | Optional requested/expected remote embedding dimension |
| SEMANTIC_CACHE_DIR | (platform) | Override the cache/database directory path |

See docs/env_variables.md for detailed descriptions, model selection guidance, and examples.

Safety Limits

| Limit | Value | Protects against |
| --- | --- | --- |
| MAX_WRITE_SIZE | 10 MB | Memory exhaustion via large writes |
| MAX_EDIT_SIZE | 10 MB | Memory exhaustion via large file edits |
| MAX_MATCHES | 10,000 | CPU exhaustion via unbounded replace_all |
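
For content that would exceed MAX_WRITE_SIZE, one workaround sketch, assuming the limit applies per call, is the chunked append pattern from the write examples above:

write path="/src/huge.py" content="...chunk1..." append=false   # first chunk, under 10 MB
write path="/src/huge.py" content="...chunk2..." append=true    # each later chunk, under 10 MB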

MCP Server Config

{
  "mcpServers": {
    "semantic-cache": {
      "command": "uvx",
      "args": ["semantic-cache-mcp"],
      "env": {
        "LOG_LEVEL": "INFO",
        "TOOL_OUTPUT_MODE": "compact",
        "MAX_CONTENT_SIZE": "100000",
        "EMBEDDING_DEVICE": "cpu",
        "EMBEDDING_MODEL": "BAAI/bge-small-en-v1.5"
      }
    }
  }
}

Cache location: ~/.cache/semantic-cache-mcp/ (Linux), ~/Library/Caches/semantic-cache-mcp/ (macOS), %LOCALAPPDATA%\semantic-cache-mcp\ (Windows). Override with SEMANTIC_CACHE_DIR.
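
For example, to relocate the cache, add the override to the env block (the path shown is illustrative):

"env": {
  "SEMANTIC_CACHE_DIR": "/data/semantic-cache-mcp"
}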


How It Works

┌─────────────┐     ┌──────────────┐     ┌──────────────────┐
│  Claude     │────▶│  smart_read  │────▶│  Cache Lookup    │
│  Code       │     │              │     │  (VectorStorage) │
└─────────────┘     └──────────────┘     └──────────────────┘
                           │
         ┌─────────────────┼─────────────────┐
         ▼                 ▼                 ▼
   ┌──────────┐     ┌──────────┐     ┌──────────────┐
   │Unchanged │     │ Changed  │     │  New / Large │
   │  ~0 tok  │     │  diff    │     │ summarize or │
   │  (99%)   │     │ (80-95%) │     │ full content │
   └──────────┘     └──────────┘     └──────────────┘
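
The routing above can be sketched in a few lines of Python. This is an illustrative model only, not the server's implementation: it uses hashlib and difflib from the standard library where the real server uses BLAKE3 hashing, vector storage, and its own diff/summarization pipeline.

import difflib
import hashlib
from pathlib import Path

# Toy stand-in for the vector-backed cache: path -> (content hash, lines).
_cache: dict[str, tuple[str, list[str]]] = {}

def smart_read(path: str) -> str:
    """Return full content, an unchanged marker, or a unified diff."""
    text = Path(path).read_text()
    digest = hashlib.sha256(text.encode()).hexdigest()  # real server: BLAKE3
    lines = text.splitlines(keepends=True)

    cached = _cache.get(path)
    if cached is None:
        _cache[path] = (digest, lines)   # first read: cache it, return everything
        return text
    old_digest, old_lines = cached
    if digest == old_digest:
        return "File unchanged"          # ~5 tokens instead of the whole file
    _cache[path] = (digest, lines)       # changed: refresh cache, return a diff
    return "".join(difflib.unified_diff(old_lines, lines, "cached", "current"))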

Performance

Measured on this project's 30 source files (~136K tokens). Benchmarks run on a standard dev machine (CPU embeddings).

Token Savings

| Phase | Scenario | Savings |
| --- | --- | --- |
| Cold read | First read, no cache | 0% (baseline) |
| Unchanged re-read | Same files, no modifications | 99.1% |
| Content hash | Touched files (mtime changed, content identical) | 99.1% |
| Small edits | ~5% of lines changed in 30% of files | 98.1% |
| Batch read | All files via batch_read | 99.1% |
| Search | 5 queries × k=5, previews vs full reads | 98.4% |
| Overall (cached) | Phases 2–6 combined | 98.8% |

Operation Latency

| Operation | Time |
| --- | --- |
| Unchanged read (single file) | 2 ms |
| Unchanged re-read (29 files) | 25 ms |
| Batch read (29 files, diff mode) | 35 ms |
| Cold read (29 files, incl. embed) | 2,554 ms |
| Write (200-line file) | 47 ms |
| Edit (scoped find/replace) | 48 ms |
| Semantic search (k=5) | 4 ms |
| Semantic search (k=10) | 5 ms |
| Find similar (k=3) | 49 ms |
| Grep (literal) | 1 ms |
| Grep (regex) | 2 ms |
| Embedding model warmup | 206 ms |
| Single embedding (largest file) | 47 ms |
| Batch embedding (10 files) | 469 ms |

Run benchmarks yourself:

uv run python benchmarks/benchmark_token_savings.py    # token savings
uv run python benchmarks/benchmark_performance.py      # operation latency

See docs/performance.md for full benchmarks and methodology.


Documentation

| Guide | Description |
| --- | --- |
| Architecture | Component design, algorithms, data flow |
| Performance | Optimization techniques, benchmarks |
| Security | Threat model, input validation, size limits |
| Advanced Usage | Programmatic API, custom storage backends |
| Troubleshooting | Common issues, debug logging |
| Environment Variables | All configurable env vars with defaults and examples |

Contributing

git clone https://github.com/CoderDayton/semantic-cache-mcp.git
cd semantic-cache-mcp
uv sync
uv run pytest

See CONTRIBUTING.md for commit conventions, pre-commit hooks, and code standards.


License

MIT License — use freely in personal and commercial projects.


Credits

Built with FastMCP 3.0 and:

  • FastEmbed — local ONNX embeddings (configurable, default BAAI/bge-small-en-v1.5)
  • SimpleVecDB ≥ 2.5.0 — HNSW vector storage with FTS5 keyword search, atomic delete_collection, and opt-in embedding persistence (store_embeddings=True)
  • Semantic summarization based on TCRA-LLM (arXiv:2310.15556)
  • BLAKE3 cryptographic hashing for content freshness
  • LRU-K frequency-aware cache eviction
