Frame Check MCP server: deterministic structural framing analysis for AI-generated documents, with default-on frame-divergence block per FRAME_DIVERGENCE_CONTRACT_v1 c1.0. MCP surface delegates V4.2 judgment to the caller's agent model; zero Frame Check LLM cost per query.
Reason this release was yanked: critical bundling bug (manifest.py missing from the wheel; the frame_check / frame_compare tools fail at runtime). Fixed in 0.8.8.
Frame Check
See what any document does not show you.
Frame Check is a structural framing analysis tool. It names which analytical perspectives a document covers, which it omits, how confidently it speaks, and which named patterns from the Frame Vocabulary Standard fire on the text. Numerical claims are cross-checked against authoritative sources where coverage exists. The tool ships as an MCP server so any MCP-compatible AI client (Claude Desktop, Cursor, etc.) can use it directly in conversation.
Live web surface: https://frame.clarethium.com (paused 2026-04-23 pending operator authorization to resume). The MCP package on PyPI runs locally and does not depend on the hosted site.
Quickstart (MCP server)
pip install frame-check-mcp
For Claude Desktop, add to claude_desktop_config.json:
{
  "mcpServers": {
    "frame-check": {
      "command": "frame-check-mcp"
    }
  }
}
Restart the client. Then in any conversation:
Can you frame-check this document?
[paste any English analytical document]
The agent calls the frame_check tool with document_text set to the pasted content; defaults produce the full output (divergence block, suggested next actions, citation discipline). For Cursor and other MCP clients, follow your client's MCP-server configuration with the same frame-check-mcp command.
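Under the hood, the client issues a standard MCP tools/call request. A minimal sketch of the payload shape, assuming only that the tool is named frame_check and takes a document_text argument as described above (everything else is generic JSON-RPC/MCP plumbing, not Frame Check-specific API):

```python
import json

def build_frame_check_call(document_text: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for the frame_check tool."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "frame_check",
            "arguments": {
                # Only document_text is required; defaults handle the rest.
                "document_text": document_text,
            },
        },
    }
    return json.dumps(request)

payload = build_frame_check_call("The economy will certainly collapse next year.")
print(json.loads(payload)["params"]["name"])  # frame_check
```

In practice the MCP client library constructs this for you; the sketch only shows what travels over the wire.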
What you get back
Paste a document and Frame Check returns:
- A structural framing profile. Which of five analytical perspectives (causes, risks, stakeholders, trends, uncertainty) the document covers, which it omits, and the density of each.
- Voice and epistemic posture. How the document positions the reader; what share of claims carry attribution markers; the unhedged-vs-hedged claim ratio.
- Temporal orientation. Whether the document grounds in historical data, present state, or projections.
- Frame Vocabulary Standard matches. Named frame patterns whose rule-based signals fire on the text, each with a clickable library URL, identification cues, and worked examples.
- Frame divergence. A list of FVS catalog entries the document did NOT use, sorted by signal strength. The agent surfaces the highest-leverage absences as a structural reading the user could not see by re-reading their own document.
- Suggested next actions. Two to four specific moves derived from this call's findings: an entry to read, ready-made reprompts to put back to the source AI, and a pointer to the challenge_document MCP prompt for adversarial follow-ups.
- Source-network verification (when relevant providers have coverage). Numeric claims checked against SEC EDGAR, FRED, World Bank, Alpha Vantage, and Wolfram Alpha.
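An agent consuming the divergence list typically wants the highest-leverage absences first. A sketch of that selection step, assuming hypothetical field names (frame, signal_strength) for each absent-frame record; the real schema is documented in MCP_SERVER.md:

```python
def top_absent_frames(divergence, limit=3):
    """Return the names of the highest-signal absent-frame records.

    Assumes each record is a dict with 'frame' and 'signal_strength'
    keys; the actual field names may differ.
    """
    ranked = sorted(divergence, key=lambda r: r["signal_strength"], reverse=True)
    return [r["frame"] for r in ranked[:limit]]

records = [
    {"frame": "stakeholder-erasure", "signal_strength": 0.82},
    {"frame": "risk-asymmetry", "signal_strength": 0.41},
    {"frame": "trend-extrapolation", "signal_strength": 0.67},
]
print(top_absent_frames(records, limit=2))
# ['stakeholder-erasure', 'trend-extrapolation']
```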
The frame_compare tool runs the same analysis on two documents on the same subject and surfaces structural differences in framing, certainty, coverage, and sourcing. The worked example four-llms-on-bitcoin-retirement-2026 demonstrates the multi-model application: same prompt, four frontier models, four materially different framing signatures.
Approach
Structural measurement is the floor. Every framing claim is computed from deterministic pattern matchers; identical inputs return identical measurements. Zero LLM cost on the deterministic path.
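To make the determinism property concrete, here is an illustrative toy matcher (not the project's actual detector, which is far richer): because it is a pure function of the text, identical inputs always yield identical measurements, with no model call involved.

```python
import re

# Toy hedge detector: a pure function of the text, so repeated calls
# on the same input always return the same number.
HEDGE_PATTERN = re.compile(r"\b(may|might|could|likely|appears|suggests)\b", re.I)

def hedge_density(text: str) -> float:
    """Fraction of whitespace-separated words that are hedge markers."""
    words = text.split()
    if not words:
        return 0.0
    return len(HEDGE_PATTERN.findall(text)) / len(words)

doc = "Prices may rise. The data suggests demand could soften."
assert hedge_density(doc) == hedge_density(doc)  # deterministic by construction
print(round(hedge_density(doc), 3))  # 0.333
```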
Verification is bounded. Numeric claims are checked only against providers with genuine coverage for the claim type. Per-provider precision, recall, and F1 are surfaced rather than asserted.
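Per-provider precision, recall, and F1 follow the standard definitions over verification outcomes. A minimal sketch of the computation (the counts below are hypothetical, not measured project numbers):

```python
def provider_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, and F1 from one provider's verification counts.

    tp: claims correctly verified; fp: claims wrongly flagged as verified;
    fn: verifiable claims the provider missed.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for one provider on one claim type.
m = provider_metrics(tp=18, fp=2, fn=4)
print({k: round(v, 3) for k, v in m.items()})
# {'precision': 0.9, 'recall': 0.818, 'f1': 0.857}
```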
Named-pattern detection is a separate reliability layer from the structural profile. Each FVS match ships with a teaching question, not a verdict; the product treats matches as hypotheses for the reader to evaluate. The agreement between rule-based detection and careful multi-source human labeling is an open, pre-registered research question with a published negative result and a Track B follow-up pending. See VALIDATION_PROGRAM.md and ANTICIPATED_CRITIQUES.md for the honest current state.
Full methodology: METHODOLOGY.md (also bundled in the wheel).
MCP surface
Two tools and four prompts. The full reference lives in MCP_SERVER.md:
- frame_check: single-document structural analysis (the agent passes document_text; defaults handle the rest).
- frame_compare: cross-document structural diff on the same subject.
- frame_check_my_response: agent self-audit. The agent calls frame_check on its own last response and surfaces the frame without verdict or defensive rewriting.
- frame_check_this_ai_response: the user pastes a response from another AI; the agent analyses what that AI did structurally.
- challenge_document: adversarial questions traced to the structural gaps in a reading.
- explain_framing: walkthrough template for a completed frame_check result.
100+ MCP resources expose the Frame Vocabulary Standard catalog, methodology paper, worked examples, and validation corpus directly to the agent. See MCP_SERVER.md "Resource surface" for the full URI list.
Every response carries three sections: analysis (the measurements), agent_guidance (composition discipline and citation rules), provenance (versions, license, citation, ISO-8601 timestamp, hosting status). When include_divergence=true (the default since 0.8.0), a divergence block is added with absent-frame records sorted by signal strength.
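An agent-side consumer can sanity-check that contract before composing its reading. A sketch under the section names above; the nested field contents shown are illustrative, not the documented schema:

```python
REQUIRED_SECTIONS = ("analysis", "agent_guidance", "provenance")

def missing_sections(response: dict) -> list:
    """Return the required top-level sections absent from a response."""
    return [s for s in REQUIRED_SECTIONS if s not in response]

resp = {
    "analysis": {"perspectives": {"causes": 0.4}},        # illustrative fields
    "agent_guidance": {"citation_rules": []},
    "provenance": {"version": "0.8.8", "license": "Apache-2.0"},
    "divergence": [],  # present when include_divergence=true (the default)
}
print(missing_sections(resp))  # []
```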
Running the web app from source (repository only)
The Flask web app at app.py is repository-only and is NOT bundled in the frame-check-mcp wheel. PyPI installers should follow the MCP quickstart above. To run the web app from a repo clone:
pip install -r requirements.txt
uvicorn app:app --host 127.0.0.1 --port 8001 --reload
Open http://127.0.0.1:8001. API keys are optional: structural analysis works without any keys; with GEMINI_API_KEY or XAI_API_KEY set, the AI-narrative paths become available. The web app requires Python 3.12+; the MCP server supports Python 3.10+.
Tests
python3 run_tests.py
The full test suite covers parsing, framing detection, source-network verification, MCP contract conformance, the adversarial harness, prompt-safety guards, and decision-readiness byte-stability. A full run takes approximately 100-200 seconds, depending on which external API fixtures are exercised. CI runs the suite on every push to master and on pull requests (.github/workflows/tests.yml).
Documentation
Bundled with the wheel:
- METHODOLOGY.md: detection methodology, FVS curation, calibration, limitations.
- MCP_SERVER.md: install, tool surface, resource surface, prompt surface, divergence block, release arc.
- FRAME_DIVERGENCE_v1.md, FRAME_DIVERGENCE_v2.md, FRAME_DIVERGENCE_CONTRACT_v1.md: divergence category definition, layered architecture, interface contract.
- V4_2_GAP_INVENTORY_v1.md: V4.2 engine readiness inventory (self-disclosed gaps).
- ANTICIPATED_CRITIQUES.md: nineteen enumerated adversarial critiques with prepared defenses; the project's self-enumerated attack surface.
- VALIDATION_PROGRAM.md: validation strategy, Track A / Track B status.
- RATERS.md: open invitation for external raters on the decision-readiness profile.
- CONTRIBUTING.md, GOVERNANCE.md, SECURITY.md, CITATION.cff.
Repository-only (in docs/internal/): audit deliverables, engine tier strategy, methodology paper outlines, outreach templates, in-flight design proposals, archived version artifacts. Visible to anyone browsing the repo for transparency; visually de-emphasized so the user-facing surface stands out.
License
Code is Apache-2.0; corpus artifacts (Frame Vocabulary Standard, methodology, worked examples, calibration dataset) are CC-BY-4.0. See LICENSE and NOTICE. The two licenses intentionally differ: code is forkable and auditable; corpus data compounds as a citeable research resource.
Citing Frame Check
See CITATION.cff for the canonical CFF 1.2.0 record. A short citation line:
Lucic, L. (2026). Frame Check: a research instrument for framing
and verification in documents.
https://github.com/lluvr/frame-check-mcp
Contact
Maintained by Lovro Lucic. Issues and findings: hello@clarethium.com. Project home: https://blog.clarethium.com.