Reasoning-first MCP middleware for context distillation
🔍 Semantic-Sift
The Reasoning-First Middleware for High-Fidelity Agentic Workflows.
"It saves tokens while preserving context - maximizing reasoning, minimizing hallucination."
Semantic-Sift is a local Model Context Protocol (MCP) server that acts as an intelligent "Sanitation Tier" between your raw data and your AI’s context window.
While modern LLMs have massive context windows, their reasoning accuracy often degrades as noise increases. Semantic-Sift solves this by distilling technical logs, long-form documents, and chat histories into high-density context. It treats your context window as a precious resource—optimizing for Signal-to-Noise Ratio (SNR) so your models spend more time reasoning and less time navigating boilerplate.
🧠 Philosophy: The Studio of Two
Semantic-Sift is grounded in the Studio of Two philosophy: the belief that the future of engineering is a high-fidelity partnership between a human architect and a sovereign AI sidecar. By managing the friction of raw data ingestion, Sift allows this "Studio" to focus on building systems, not just applying patches. It acts as a cognitive filter that ensures both you and your agent are collaborating on the cleanest, most relevant representation of the technical truth.
🏛️ Multidisciplinary Value
Semantic-Sift is a strategic layer designed to manage attention across four key professional personas:
- For the Senior Engineer: A local-first, low-latency middleware using a dual-engine approach (Heuristic Sieve + Neural Reranker). It strips timestamps, repetitive boilerplate, and redundant JSON before they hit the wire, reducing latency and preventing "Lost in the Middle" reasoning failures.
- For the Project Manager: "Context Insurance." By reducing token overhead by 30-70%, Sift provides direct ROI on API costs and reduces the "retry loop" caused by model hallucinations in messy data environments.
- For the Researcher: Data integrity at scale. Supports MarkItDown (via the [multi-modal] optional extra) to convert complex .pdf, .docx, and .xlsx files into structured, distilled Markdown, allowing for the rapid synthesis of massive technical repositories without losing critical semantic anchors.
- For the Knowledge Partner: Cognitive Load Management. Sift manages the friction of raw data ingestion, allowing the human-AI partnership to focus on high-level strategy and architectural decisions rather than manual data triage.
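To make the dual-engine idea concrete, here is a minimal sketch of what a heuristic sieve stage can look like: strip timestamps, then drop exact-duplicate boilerplate lines before anything reaches a neural reranker. The function name and regex are illustrative assumptions, not Semantic-Sift's actual implementation.

```python
import re

# Matches ISO-style timestamps like "2024-05-01T12:00:01Z" or "2024-05-01 08:00:00,123"
TIMESTAMP = re.compile(
    r"\b\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?(?:Z|[+-]\d{2}:?\d{2})?\s*"
)

def heuristic_sieve(raw: str) -> str:
    """Remove timestamps and exact-duplicate lines, keeping first occurrences."""
    seen: set[str] = set()
    kept: list[str] = []
    for line in raw.splitlines():
        stripped = TIMESTAMP.sub("", line).strip()
        if not stripped or stripped in seen:
            continue  # drop blank lines and exact repeats (boilerplate)
        seen.add(stripped)
        kept.append(stripped)
    return "\n".join(kept)

log = (
    "2024-05-01T12:00:01Z INFO service started\n"
    "2024-05-01T12:00:02Z INFO heartbeat ok\n"
    "2024-05-01T12:00:03Z INFO heartbeat ok\n"
    "2024-05-01T12:00:04Z ERROR connection refused\n"
)
print(heuristic_sieve(log))  # three lines survive; the repeated heartbeat is gone
```

A real sieve would also collapse near-duplicates and structural noise (redundant JSON keys, banners), but the shape is the same: cheap, deterministic filtering first, semantics second.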
💰 Value Engineering: Operational vs. Economic ROI
Semantic-Sift delivers value on two layers. While the economic benefits depend on your billing plan, the operational benefits apply to every professional workflow.
1. The Economic ROI (Direct Savings)
Target: Users on Per-Token API plans (GPT-4o, Claude 3.5).
- Wallet Protection: Sift acts as a local filter, typically reducing outgoing token volume by 30-70%.
- Compound Interest: In iterative agentic loops, these savings compound rapidly. Every character pruned is money that stays in your budget.
2. The Operational ROI (Quality & Performance)
Target: EVERYONE (including "Unlimited" or Per-Request subscription users).
- Attention Precision: Even with "infinite" context, LLMs suffer from "Lost in the Middle" syndrome. By removing noise, you ensure the model's full reasoning power is focused on the technical signal, resulting in higher-quality code and fewer hallucinations.
- Latency Reduction: Smaller prompts = Faster "Time to First Token" (TTFT). You spend less time waiting for the "cloud" to process boilerplate and more time in your flow state.
- Context Insurance: Prevents "Context length exceeded" errors on complex tasks. Sift ensures that 100% of your model's limit is filled with information, not formatting.
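A back-of-the-envelope calculation shows how per-turn reduction adds up in an agentic loop that re-sends its context each iteration. The 40% figure below is an assumed value inside the quoted 30-70% range, not a measured Semantic-Sift benchmark.

```python
def loop_tokens(base_tokens: int, turns: int, reduction: float) -> int:
    """Total prompt tokens sent over `turns` iterations, assuming each turn
    re-sends the (sifted or unsifted) context."""
    per_turn = base_tokens * (1 - reduction)
    return round(per_turn * turns)

raw = loop_tokens(20_000, 10, 0.0)     # no sifting
sifted = loop_tokens(20_000, 10, 0.4)  # assumed 40% reduction
print(f"saved {raw - sifted} tokens ({(raw - sifted) / raw:.0%})")
```

In practice the savings can exceed this linear estimate, because a smaller context also delays the point at which histories must be truncated or re-summarized.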
📚 Master Documentation Index
All technical details, architectural logic, and integration guides are strictly maintained in the doc/ directory to prevent data loss through summarization.
- doc/INDEX.md: The navigational roadmap and source of truth for the documentation structure.
- doc/ARCHITECTURE.md: Specifications of the Sift Hook Interceptor, the Distillation Kernel (Heuristic/Semantic/Ranking engines), and Caching.
- doc/TOOL_REFERENCE.md: Exhaustive operator's manual for all FastMCP tools (e.g., sift_read_file, sift_logs, sift_chat, sift_rank).
- doc/INTEGRATION_ENCYCLOPEDIA.md: Master Compatibility Map, Hook Injector logic, Payload Structures, and the Master Configuration Matrix for connecting IDEs (Cursor, Gemini, VS Code, OpenCode, etc.).
- doc/TELEMETRY_SPEC.md: Design of the OpenTelemetry tracing, Echo-Detector (Double-Sifting Prevention), Audit Headers, and Privacy controls.
- doc/ORCHESTRATION_BLUEPRINTS.md: Actionable workflows for AI agents, including File Ingestion decision trees, Multi-Document RAG, and History Compaction.
🎯 High-Impact Use Cases
📚 The Knowledge Hunter (Researchers & Architects)
- The Pain: Reading 50-page PDFs, complex Word specs, or cluttered documentation sites.
- The Sift: Supports MarkItDown via the [multi-modal] optional extra to natively ingest .pdf, .docx, and .xlsx. It converts corporate "noise" into structured Markdown, allowing your agent to synthesize multiple 14MB documents in a single turn.
🛠️ The Log Hunter (DevOps & SREs)
- The Pain: Finding a single error in 100,000 lines of technical logs.
- The Sift: The Heuristic Sieve refines timestamps and boilerplate in milliseconds. The Subconscious Hook automatically reranks results, so your agent only sees the most relevant data blocks.
🧠 The Context Strategist (AI Engineers)
- The Pain: LLM hallucination and reasoning degradation caused by messy data streams.
- The Sift: By delivering high-density context with 95% of the meaning preserved, Sift acts as a Cognitive Bridge. It ensures your LLM's attention is focused exclusively on the signal.
🚀 Quick Start
1. Installation
Clone the repository and install:
git clone https://github.com/luismichio/semantic-sift.git
cd semantic-sift
# Dedicated environment (Recommended)
python -m venv venv
.\venv\Scripts\activate
pip install .
🐍 Python Environment Guidance
Choosing the right Python path for your MCP configuration is critical for stability:
| Setup Type | Path Example | Pros | Cons |
|---|---|---|---|
| Dedicated Venv | .../semantic-sift/venv/Scripts/python.exe | Isolated dependencies, no version conflicts with other tools. | Slightly more disk space. |
| Global Python | C:/Users/User/AppData/Local/.../python.exe | Shared libraries, fast setup. | High risk of version conflicts (e.g., transformers mismatches). |
Recommendation: Always use the Dedicated Venv path in your mcp_config.json to ensure the sifting kernel is isolated and reliable.
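A minimal mcp_config.json entry following this recommendation might look like the fragment below. The server key, module entry point, and venv path are assumptions for illustration; consult the Master Configuration Matrix in doc/INTEGRATION_ENCYCLOPEDIA.md for your client's exact schema.

```json
{
  "mcpServers": {
    "semantic-sift": {
      "command": "C:/path/to/semantic-sift/venv/Scripts/python.exe",
      "args": ["-m", "semantic_sift"]
    }
  }
}
```

Pointing `command` at the venv interpreter (rather than a bare `python` on PATH) is what guarantees the sifting kernel loads its own pinned dependencies.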
For full semantic/reranking features (LLMLingua, Transformers, sentence-transformers):
pip install .[neural]
Note on Orchestration: Semantic-Sift is an "Intelligence Kernel." For complex multi-tool workflows, we strongly recommend installing Context-Pipe, the universal switchboard that natively routes data to Semantic-Sift without blocking your IDE.
For development tools (mypy, pytest):
pip install .[dev]
2. Connect the MCP
CRITICAL: For exact configuration paths for Cursor, Gemini, OpenCode, VS Code, and Claude, reference the Master Configuration Matrix.
3. Auto-Onboard
Once connected, ask your AI Assistant:
"Run sift_onboard() to configure this project."
📊 Telemetry & Management Commands
Semantic-Sift operates invisibly, but you can always audit its performance and token savings without burning LLM tokens to do so.
- Terminal CLI:
  - Run semantic-sift-stats to print a global dashboard of your token savings, latency, and cache hits.
  - Run semantic-sift-onboard to manually initialize Sift in any project (supports --env and --dry-run).
- MCP Prompts: Compatible clients (Claude Desktop, Cursor, Zed) will surface a sift_dashboard prompt in their UI (often via a slash command or button) to instantly inject your telemetry stats into the chat.
- OpenCode & Gemini CLI: The sift_onboard() tool automatically injects native /sift-stats and /sift-onboard custom slash commands into your IDE configuration.
🦀 Native Rust Sidecar (Meechi & Desktop Apps)
For high-performance, local-first applications (like Meechi), Semantic-Sift provides a native Rust binary (sift-core). This sidecar is optimized for portability and speed, with zero Python dependencies.
🔀 The Hybrid Engine (semantic-sift-cli)
To provide the best of both worlds, installing the Python package also exposes the semantic-sift-cli command. This acts as an Intelligent Router:
- For short tasks (<30,000 chars), it instantly shells out to the low-latency Rust sift-core (ONNX).
- For massive batch tasks, it dynamically loads the high-throughput PyTorch framework with Flash Attention to prevent memory explosion.
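The router's dispatch rule can be sketched as a single threshold check. Only the 30,000-character cutoff comes from the documentation above; the function name and return labels are illustrative.

```python
RUST_THRESHOLD = 30_000  # chars; below this, the ONNX sidecar wins on latency

def choose_engine(payload: str) -> str:
    """Pick the execution backend for a sifting request (hypothetical sketch)."""
    if len(payload) < RUST_THRESHOLD:
        return "rust-onnx"       # low-latency sift-core sidecar
    return "python-pytorch"      # high-throughput batch path

print(choose_engine("short log snippet"))  # rust-onnx
```

The design trade-off: the Rust path avoids the 3-5 second Python model warm-up entirely, while the PyTorch path amortizes its startup cost over large batches.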
Feature Comparison
| Feature | Python MCP Server | Rust Sift-Core (Sidecar) |
|---|---|---|
| Heuristic Log Sifting | ✅ | ✅ (Native) |
| Semantic Compression | ✅ (PyTorch) | ✅ (ONNX) |
| Multi-Modal Ingestion | ✅ (via [multi-modal]) | ❌ (Text Only) |
| Supported Formats | .pdf, .xlsx, .docx, .html, .txt | .txt, .log, .out (Text) |
| Startup Latency | 3-5 seconds | ~10ms |
| Binary Size | ~1.5GB (with models) | ~15MB |
Note: For native apps like Meechi, we recommend a Tiered Ingestion strategy: use the app's frontend (e.g., pdf.js) to extract text, then pipe it to the Rust sidecar for high-speed semantic sifting.
Usage:
# Sift a log file instantly
cat app.log | sift-core logs
# Perform neural semantic compression
cat document.txt | sift-core semantic --model ./models/llmlingua2 --rate 0.4
🤖 For AI Agents: Operational Protocol
If you are an AI Agent (e.g., Gemini CLI, Cursor, Claude Code) reading this repository, follow this protocol:
- Initialization: Call sift_onboard() to register background hooks. Use sift_onboard(dry_run=True) to preview all planned actions without writing any files.
- Context Advisory: Before reading large files (>1,000 chars), call sift_analyze_file(path) to determine the noise ratio.
- Mandatory Sifting: If noise > 15%, pipe data through sift_logs or sift_chat before inclusion in reasoning. For documents, use sift_doc(text, rate=0.4); adjust rate (0.1–0.9) to trade compression depth against fidelity.
- Ranking: Use sift_rank to identify the most semantically relevant chunks for the user's prompt.
- Extraction: When distilling PDFs or scraped content, use sift_extraction(content, show_diff=True) to see exactly what was removed and verify faithfulness.
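The advisory steps above reduce to a small decision function. The thresholds (1,000 chars, 15% noise) come from the protocol; the function name and return labels are illustrative, not real MCP tool calls.

```python
def sift_decision(char_count: int, noise_ratio: float) -> str:
    """Which action does the protocol recommend for a piece of context?"""
    if char_count <= 1_000:
        return "read-directly"          # small enough to skip analysis
    if noise_ratio > 0.15:
        return "sift-before-reasoning"  # mandatory sifting threshold
    return "read-directly"

print(sift_decision(50_000, 0.42))  # sift-before-reasoning
```

An agent that internalizes this rule pays the (millisecond-scale) sifting cost only when the expected noise actually threatens reasoning quality.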
🛡️ Security & Testing
Semantic-Sift is built on a Zero-Vulnerability Baseline:
- Pytest: 100% pass rate on heuristic integrity.
- Bandit (SAST): Automated static analysis for Python patterns.
- Pip-Audit (SCA): Real-time supply chain monitoring for 0 known vulnerabilities.
Privacy and telemetry controls:
- Set SIFT_TELEMETRY_DISABLED=true to disable telemetry entirely.
- Set SIFT_TELEMETRY_URL=https://your-endpoint to route metadata pulses to your own endpoint.
- Set SIFT_PULSE_RATE_LIMIT_S=10 (default) to control async telemetry pulse frequency.
Security controls:
- Set SIFT_ALLOW_GLOBAL_READS=true to permit sift_read_file / sift_analyze_file outside the workspace root (path traversal guard is on by default).
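For context, a path traversal guard of the kind this flag relaxes typically resolves the requested path and checks containment in the workspace root. This is a sketch of the general technique, not Semantic-Sift's actual implementation.

```python
from pathlib import Path

def is_within_workspace(path: str, workspace_root: str) -> bool:
    """True if `path` resolves inside `workspace_root` (blocks ../ escapes)."""
    root = Path(workspace_root).resolve()
    target = Path(workspace_root, path).resolve()
    return target == root or root in target.parents
```

Resolving before comparing is the important part: a naive string-prefix check would accept `../../etc/passwd` appended to the root.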
Performance controls:
- Set SIFT_HOOK_TIMEOUT_MS=3000 to cap hook semantic latency before heuristic fallback.
- Set SIFT_MODEL_READY_WAIT_MS=1200 to control semantic model warm-up wait time before returning heuristic-mode output.
- Set SIFT_COMPACTION_FIDELITY_THRESHOLD=0.3 (default) to control the vocabulary-overlap threshold below which a low-fidelity compaction warning is emitted.
Hook logging controls:
- Set SIFT_LOG_FILE to override the hook log path (default: .gemini/sift_debug.log).
- Set SIFT_LOG_LEVEL (DEBUG, INFO, WARNING, ERROR) to control hook log verbosity.
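A loader for these knobs might look like the sketch below. The variable names and the documented defaults (10s pulse rate, 0.3 fidelity threshold) come from the lists above; the function shape, the INFO log-level fallback, and the 3000ms timeout default are assumptions for illustration.

```python
import os

def load_sift_env(env=os.environ) -> dict:
    """Read Semantic-Sift tuning knobs from the environment (hypothetical loader)."""
    return {
        "telemetry_disabled": env.get("SIFT_TELEMETRY_DISABLED", "").lower() == "true",
        "pulse_rate_limit_s": int(env.get("SIFT_PULSE_RATE_LIMIT_S", "10")),
        "hook_timeout_ms": int(env.get("SIFT_HOOK_TIMEOUT_MS", "3000")),
        "compaction_fidelity_threshold": float(
            env.get("SIFT_COMPACTION_FIDELITY_THRESHOLD", "0.3")
        ),
        "log_level": env.get("SIFT_LOG_LEVEL", "INFO"),
    }

cfg = load_sift_env({"SIFT_TELEMETRY_DISABLED": "true"})
print(cfg["telemetry_disabled"], cfg["pulse_rate_limit_s"])  # True 10
```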
See SECURITY.md for our full security policy.
Telemetry schema and endpoint details are documented in doc/TELEMETRY_SPEC.md.
🔗 The Ecosystem (Studio of Two)
Semantic-Sift is a flagship member of the Studio of Two infrastructure. It is designed to work in high-fidelity harmony with:
- Context-Pipe: The universal switchboard for context engineering. While Sift provides the intelligence, Context-Pipe provides the orchestration. We highly recommend using Context-Pipe to chain Sift nodes with masking, search, and multi-modal ingestion tools.
⚖️ Licensing
Semantic-Sift is licensed under the Apache License 2.0. See LICENSE.md for details.
🤝 Contributing
Semantic-Sift is Open Source, but Closed to Contributions.
To maintain the strict architectural vision of the "Studio of Two" and keep maintenance overhead at absolute zero, this repository does not accept external pull requests. We encourage you to use, embed, and fork the code under the permissive Apache 2.0 license, but please do not submit PRs for new features or bug fixes. See CONTRIBUTING.md for details.