MCP server for systemic reasoning — 7-lens analytical framework, bileshke composite engine, kavaid constraints, kaskad generative cascade, inference chains, and Holographic Context Protocol. Plug into VS Code Copilot Chat.
dusun — Systemic Reasoning MCP Server
An MCP server that adds 7-lens epistemic analysis, a holographic context protocol, and an epistemic failure sentinel to VS Code Copilot Chat. Install in 2 commands. Zero configuration.
```
pip install dusun
fw-engine --init
```
Restart VS Code. The 20 tools and /dusun slash command appear in Copilot Chat automatically.
`fw-engine --init` creates `.vscode/mcp.json`, framework prompts, instructions, and the specification doc in the current directory. Safe to re-run — it won't overwrite existing files.

Auto-repair: on every server start, `fw-engine` silently restores any missing framework files. If you accidentally delete one, just reload VS Code.
What It Does
dusun exposes a systemic reasoning engine as an MCP (Model Context Protocol) server. Instead of relying on unstructured text generation, it gives your AI assistant a structured analytical pipeline with formal constraints, convergence bounds, and epistemic transparency.
The core pipeline:
```
                    ┌─ Ontoloji (kavram_sozlugu)
                    ├─ Mereoloji
                    ├─ FOL (fol_formalizasyon)
Input ──► dusun() ──┤─ Bayes (bayes_analiz)       ──► Bileshke   ──► Quality Report
                    ├─ OyunTeorisi (oyun_teorisi)     (Composite     ├─ Grade
                    ├─ KategoriTeorisi                 Engine)       ├─ Score
                    └─ Holografik                                    ├─ Kavaid
                                                                     └─ Anomalies
        ↕               ↕               ↕               ↕
     Kaskad          Sentinel          HCP           Session
    (Cascade        (Failure        (Context      (Grade ceiling
      DAG)          Detection)      Protocol)      + Cascade)
```
The 20 MCP Tools
| # | Tool | Purpose |
|---|---|---|
| 1 | `dusun` | Universal entry point — auto-classifies input (P0–P3), fires relevant lenses, returns a complete analysis with shaping directives |
| 2 | `run_single_lens` | Execute 1 of the 7 analytical lenses on a concept |
| 3 | `run_bileshke_pipeline` | Run all 7 lenses → composite score + quality report |
| 4 | `check_kavaid` | Evaluate the 8 formal constraints (boundary conditions) |
| 5 | `validate_stage` | Gate-check a pipeline stage (PASS / WARN / FAIL) with anomaly detection and remediation hints |
| 6 | `verify_chains` | Verify FOL + kaskad inference chains |
| 7 | `run_kaskad` | Cascade inference engine (8 actions: report, predict, propagate, hubs, tesanud, chain, verify, diagnostic) |
| 8 | `get_framework_summary` | Aggregate summary from all modules + Hayat layer health |
| 9 | `calibrate_source_texts` | Validate source material integrity against ground truth |
| 10 | `hcp_ingest` | Ingest context into the Holographic Context Protocol |
| 11 | `hcp_query` | Query HCP for relevant chunks with seed-modulated attention |
| 12 | `hcp_diagnostics` | Return HCP diagnostic state |
| 13 | `hcp_create_workflow` | Create a multi-step HCP workflow |
| 14 | `hcp_advance_workflow` | Advance a workflow to its next stage |
| 15 | `hcp_export_state` | Export full HCP state for persistence |
| 16 | `hcp_import_state` | Import previously exported HCP state |
| 17 | `hcp_sync_memory_bank` | Sync HCP state to memory-bank markdown files |
| 18 | `sentinel_scan` | Run epistemic failure detection on text |
| 19 | `sentinel_report` | Generate an incident report from sentinel findings |
| 20 | `sentinel_status` | Return current sentinel state and statistics |
Architecture
The 7 Lenses
Each lens is an independent analytical instrument with its own types, verification functions, and constraint factories. No shared mutable state between lenses (KV₇ — Independence).
| # | Lens | Faculty | Domain | Key Axioms |
|---|---|---|---|---|
| 1 | Ontoloji | Akıl (Intellect) | Concept ontology — 7 Attributes, 99 Names, Kavram registry | AX17–AX22, KV₁, KV₈ |
| 2 | Mereoloji | — | Part-whole relations, CEM M1–M5, teleological hierarchy T1–T5 | AX5, AX27, AX29, AX37 |
| 3 | FOL | Kalb (Heart) | First-order logic, axiom extraction, model checking | AX40–AX48, KV₂, KV₅ |
| 4 | Bayes | Latife-i Rabbaniye | Bayesian inference, posterior update, hypothesis selection | AX63, T6 |
| 5 | OyunTeorisi | Nefs (Soul) | Game theory, station-dependent payoffs, Nash equilibria | AX62, AX63, KV₃ |
| 6 | KategoriTeorisi | Sır (Secret) | Category theory — objects, morphisms, functors, natural transformations | AX12, AX13, KV₅ |
| 7 | Holografik | Ruh + Hafî | 22-dimensional Besmele seed vectors, isomorphism, fidelity pairs | AX37, AX59, KV₆ |
Each lens implements a uniform interface:
- `types.py` — frozen dataclasses with construction-time axiom enforcement
- `verification.py` — individual checks + `verify_all()` (AX52 multiplicative gate) + `yakinlasma()` (convergence score)
- `constraints.py` — `ai_assert.Constraint` factories for the bileshke pipeline
- `framework_summary()` — AX57 transparency metadata
Bileshke — Composite Engine
The bileshke module combines all 7 lens outputs into a single quality report:
- Latife vector (7-dim): Per-lens engagement, detecting which faculties are active
- Ortam vector (3-dim): Environmental conditions
- Coverage gate (AX52): Multiplicative — zero in any dimension = system-level failure
- Epistemic grade: Tasavvur → Tasdik → İlmelyakîn (AX56: maximum is demonstrative certainty, never Hakkalyakîn)
- 8 Kavaid: Formal boundary constraints (KV₁ dual meaning, KV₂ formal core, KV₃ fire-and-forget, KV₄ convergence bound, KV₅ functor preservation, KV₆ seed omnipresence, KV₇ independence, KV₈ grounding)
- Structural bounds: T17 coverage ceiling (max 6/7 — one dimension is permanently inaccessible), T6 convergence bound (always < 1.0)
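The interaction of the AX52 gate and the KV₄ convergence bound can be sketched in a few lines. This is an illustrative model, not the package's actual API — the function names and the exact clamp value are assumptions:

```python
# Sketch of the AX52 multiplicative coverage gate and the KV4 convergence
# clamp. Names and constants are illustrative, not the real bileshke API.
from math import prod

CONVERGENCE_BOUND = 0.9999  # KV4: the map never equals the territory


def clamp(score: float) -> float:
    """Clamp a raw score into [0, CONVERGENCE_BOUND)."""
    return max(0.0, min(score, CONVERGENCE_BOUND))


def coverage_gate(lens_scores: list[float]) -> float:
    """AX52: multiplicative — a zero in any lens zeroes the composite.
    Averaging would hide a structurally absent dimension; a product cannot."""
    return clamp(prod(clamp(s) for s in lens_scores))
```

With one dead lens the composite collapses even if the other six are strong, which is exactly the "cannot average away structural absence" property the table below names.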
dusun() — The Core Pipeline
The dusun() function is the universal entry point. It runs a 7+ step algorithm:
1. Vocabulary detection — scan input for domain-specific terms across 10 vocabulary pattern sets
2. Auto-classification — two-tier scoring (weighted vocabulary + lens scores) → path selection (P0–P3)
3. Fire plan — select which lenses to fire based on path and vocabulary hits
4. Execute — run lenses, bileshke composite, kavaid constraints
   - 4.5 — cross-lens correlation analysis (Phase 5: Pearson r, tesanüd/infirâd detection)
   - 4b — adaptive grading via the 5-factor GradeInput model
5. Pattern detection — content-derived epistemic patterns from lens scores
   - 5b — kaskad edge activation and T8 neighborhood prediction
6. Translation hints — cross-framework concept bridges (T14 ne ayn ne gayr)
7. Transparency — AX57 disclosure string
   - 7.5 — sentinel proactive scan (epistemic failure detection)
   - 7b — emergent shaping directives (path_shape, grade_ceiling, anomaly/constraint/convergence directives)
   - 7c — dynamic boot seed (query-responsive, T12 Besmele principle)
Returns a comprehensive result dict with all scores, directives, and HCP auto-ingestion.
Kaskad — Generative Cascade Engine
A directed acyclic graph (DAG) encoding 7 inference chains (C1–C7) plus a Beauty Cascade:
| Chain | Path | Core Argument |
|---|---|---|
| C1 | AX30 → AX31 → AX34 → N-4 → M2 | Ontological poverty → provision |
| C2 | AX24 → AX25 → T7 → AX23 → T8 | Beauty → creation |
| C3 | AX37 → AX38 → T14, AX37 → T12 → T13 | Holographic → participation |
| C4 | AX49 → AX50 → T17 | Faculties → coverage → incompleteness |
| C5 | AX40 → AX41 → ... → K-9 | Station hierarchy → formalization limits |
| C6 | AX17 → S2.5 → KV1 → K-8 | Names → dual meaning → harfî |
| C7 | AX21 → AX22 → T6 | Continuous degrees → score bound |
Features: topological sort (Kahn's algorithm), BFS reachability, T8 neighborhood prediction with hop-decay confidence, AX63 tesanüd analysis, hub/bridge detection, dynamic edge activation based on vocabulary hits, self-diagnostic reflexive validation.
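Two of the mechanics named above — Kahn's algorithm over the chain DAG and hop-decay confidence for T8 neighborhood prediction — can be sketched as follows. The edge data reuses chain C7 from the table; the decay constant is an assumption for illustration:

```python
# Sketch of kaskad mechanics: Kahn's topological sort and hop-decay
# confidence. Decay value and function names are illustrative.
from collections import deque


def topo_sort(edges: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: repeatedly emit nodes whose in-degree is zero."""
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    indeg = {n: 0 for n in nodes}
    for vs in edges.values():
        for v in vs:
            indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for v in edges.get(n, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order


def hop_confidence(base: float, hops: int, decay: float = 0.7) -> float:
    """T8-style prediction: confidence falls off geometrically per hop."""
    return base * decay ** hops


# Chain C7 from the table: AX21 -> AX22 -> T6
c7 = {"AX21": ["AX22"], "AX22": ["T6"]}
```

A topological order guarantees every axiom is visited before the theorems it feeds, which is what lets the cascade propagate activations in a single pass.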
HCP — Holographic Context Protocol
Addresses the "Lost in the Middle" problem (Liu et al. TACL 2024) where LLMs lose track of information placed in the middle of long contexts.
Core innovation: Global → Broadcast (vs. gist tokens' Local → Forward). A holographic seed is computed once from the full context in O(N), then broadcast to every chunk.
Three-phase operation:
- İlim (Distinction) — Classify each chunk into 7 content types (instruction, constraint, fact, entity, relationship, metadata, narrative). Extract keywords, detect entities, infer structural links.
- İrade (Specification) — Select what goes into the seed: type signature (7-dim distribution), instruction/constraint positions, global keywords, entity positions.
- Kudret (Actualization) — Package into a `HolographicSeed` with content-driven position importance (not position-driven — the key difference).
Position importance is computed from 3 factors:
- Content type weight (instructions > constraints > facts > narrative)
- Entity centrality (positions referenced by many entities)
- Link degree (in-degree + out-degree in the structural DAG)
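A minimal sketch of combining those three factors might look like this. The specific weights, the saturation points, and the field names are assumptions, not the hcp module's real values:

```python
# Hedged sketch of three-factor position importance. All weights and
# thresholds here are illustrative assumptions.
TYPE_WEIGHT = {"instruction": 1.0, "constraint": 0.8, "fact": 0.6,
               "entity": 0.5, "relationship": 0.5, "metadata": 0.3,
               "narrative": 0.2}


def position_importance(content_type: str, entity_refs: int,
                        in_degree: int, out_degree: int) -> float:
    """Combine content-type weight, entity centrality, and link degree.
    Content decides importance here, not position in the context window."""
    type_w = TYPE_WEIGHT.get(content_type, 0.2)
    centrality = min(1.0, entity_refs / 5.0)      # saturate at 5 references
    link = min(1.0, (in_degree + out_degree) / 4.0)
    return 0.5 * type_w + 0.3 * centrality + 0.2 * link
```

The point of the weighting order is that an instruction buried mid-context still outranks narrative text at the edges.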
Attention comparison:
- Flat attention: U-shaped bias (high at start/end, degraded in middle) — mimics the documented pathology
- Holographic attention: `(1-α)·flat + α·importance + structural_boost + entity_boost + keyword_boost` — seed-modulated, content-aware
Also supports multi-step workflows (İlim → İrade → Kudret), state export/import, and a needle-in-haystack benchmark.
Sentinel — Epistemic Failure Detection
A 3-arm detection system for AI reasoning failures:
| Arm | Method | Cost |
|---|---|---|
| Reactive | 18 regex triggers mapped to taxonomy IDs | 0 tokens |
| Proactive | 6 heuristics: repetition, constraint amnesia, hedging density, sycophancy, turn-count degradation, task-state duplication | 0 LLM calls |
| Structural | Pattern rules generalized from confirmed incidents, with recurrence boost and false-positive auto-deactivation | 0 tokens |
Signal merging: 2-arm agreement → WARNING (confidence avg + 0.1); 3-arm → CRITICAL.
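The merging rule stated above can be sketched as follows. The source only specifies the two-arm bump and the three-arm escalation; the single-arm and CRITICAL-confidence handling here are assumptions:

```python
# Sketch of sentinel signal merging: 2 arms agreeing -> WARNING with a
# +0.1 confidence bump; all 3 arms -> CRITICAL. Other branches assumed.
def merge_signals(confidences: dict[str, float]) -> tuple[str, float]:
    """`confidences` maps arm name -> confidence for the arms that fired."""
    n = len(confidences)
    if n == 0:
        return ("CLEAR", 0.0)
    avg = sum(confidences.values()) / n
    if n >= 3:
        return ("CRITICAL", min(1.0, avg + 0.1))
    if n == 2:
        return ("WARNING", min(1.0, avg + 0.1))
    return ("INFO", avg)  # single-arm handling: an assumption
```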
Taxonomy of 37+ failure modes across 4 families:
- Context Degradation (CD-1–CD-11)
- Memory Architecture (MA-1–MA-8)
- Failure Modes (FM-1–FM-12)
- Reasoning & Grounding (RG-1–RG-6)
Emergence Engine
Top-down emergence model implementing the provision flow from higher to lower ontological levels:
- 6 `EmergenceLevel`s: Hayat_Muhammediye → Kâinat
- Provision flow: `source_seed × avg_effective_capacity × (1 − decay × rank_distance)`
- Beauty motor: `pressure = desire × mirror_quality × compassion`
- Holographic fidelity: `participation = √(source_fidelity × mirror_fidelity)`
- AX52 multiplicative gate (5 boolean checks)
- AX63 tesanüd detection (≥2 edges converging)
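The three formulas above translate directly into code. Variable names follow the text; the clamp on negative provision is an assumption:

```python
# Direct transcription of the emergence formulas; numeric inputs in any
# caller are illustrative only.
from math import sqrt


def provision(source_seed: float, avg_capacity: float,
              decay: float, rank_distance: int) -> float:
    """Provision flow: source_seed x avg_effective_capacity x (1 - decay x rank).
    Clamped at zero so distant ranks cannot receive negative provision."""
    return source_seed * avg_capacity * max(0.0, 1 - decay * rank_distance)


def beauty_pressure(desire: float, mirror_quality: float,
                    compassion: float) -> float:
    """Beauty motor: any zero factor kills the pressure entirely."""
    return desire * mirror_quality * compassion


def participation(source_fidelity: float, mirror_fidelity: float) -> float:
    """Holographic fidelity: geometric mean of source and mirror fidelity."""
    return sqrt(source_fidelity * mirror_fidelity)
```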
Session Ledger (Channel 3)
Tracks the epistemic grade ceiling over multi-tool conversations. Cascade health: NOMINAL → DEGRADED → CRITICAL. Provides `get_cascade_forward()` with MUST/NEVER enforcement.
Tracer (5-Layer Deep Execution Logging)
JSONL execution traces via a `@trace` decorator. Records call chains, durations, intermediate events, and error paths. Enabled via the `FW_TRACE_DIR` environment variable.
Persistence
Atomic writes with 5-checkpoint rotation. Persists HCP state, session ledger, and sentinel state between server restarts.
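The standard way to get atomic writes like this is a temp file in the same directory, an fsync, and `os.replace` (atomic on both POSIX and Windows). This sketch shows that pattern only — the real module's checkpoint-rotation scheme is not reproduced here:

```python
# One way to implement the atomic-write behavior described above.
# Not the package's actual persistence code.
import json
import os
import tempfile


def atomic_write_json(path: str, state: dict) -> None:
    """Write JSON so readers never observe a torn or partial file."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())       # durable before the rename
        os.replace(tmp, path)          # atomic rename over the target
    except BaseException:
        os.unlink(tmp)
        raise
```

Keeping the temp file in the target directory matters: `os.replace` is only atomic within a single filesystem.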
Installation
Option 1: pip (recommended)
```
pip install dusun
fw-engine --init   # creates .vscode/mcp.json + framework files
```
Option 2: pipx (isolated environment)
```
pipx install dusun
fw-engine --init
```
Option 3: uvx (no install needed)
Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "fw-engine": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "dusun", "fw-engine"]
    }
  }
}
```
Option 4: From source
```
git clone https://github.com/kaantahti/dusun2.git
cd dusun2
pip install -e .
```
VS Code Setup
The easiest path (auto-generates everything):
```
cd your-project
fw-engine --init
```
This creates:
- `.vscode/mcp.json` — MCP server configuration
- `.github/instructions/dusun-field.instructions.md` — Copilot response shaping
- `.github/prompts/dusun.prompt.md` — `/dusun` slash command
- `.github/prompts/dusun-check.prompt.md` — `/self-check` slash command
- `DUSUN.md` — framework specification document
Or create `.vscode/mcp.json` manually:

```json
{
  "servers": {
    "fw-engine": {
      "type": "stdio",
      "command": "fw-engine",
      "env": {
        "FW_STATE_DIR": ".hcp_state",
        "FW_TRACE_DIR": ".fw_traces"
      }
    }
  }
}
```
Windows with a venv? Use the full path:

```json
"command": "C:/Users/you/path/to/venv/Scripts/fw-engine.exe"
```
Restart VS Code. The fw-engine server appears in Copilot Chat's tool list.
Usage in Copilot Chat
```
@copilot /dusun What is the ontological status of causation?
@copilot run the bileshke pipeline on the concept "tree"
@copilot ingest this text into HCP and then query for "teleological necessity"
@copilot /self-check
```
Claude Desktop Setup
Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "fw-engine": {
      "command": "fw-engine"
    }
  }
}
```
CLI
```
fw-engine             # stdio MCP server (launched by VS Code automatically)
fw-engine --init      # bootstrap workspace files
python -m fw_doctor   # run 10 self-diagnostic checks
```
The fw_doctor module validates: imports, version, all 7 lenses, bileshke, kavaid, kaskad, HCP, framework summaries, and persistence.
The Formal Framework
The analytical engine is grounded in a formal specification (DUSUN.md, v2.2.0) containing:
- 66 axioms (AX1–AX66) — structural axioms derived from Risale-i Nur ontology
- 18 theorems (T1–T18) — derived properties and convergence results
- 8 kavaid (KV₁–KV₈) — boundary constraint rules that every analysis must satisfy
Key Structural Properties
| Property | Mechanism | Effect |
|---|---|---|
| Independence (KV₇) | Each lens runs in a fresh instance | No cross-contamination |
| Convergence bound (KV₄) | All scores clamped to [0, 0.9999) | Map never equals territory |
| Multiplicative gate (AX52) | Zero in any dimension = system failure | Cannot average away structural absence |
| Epistemic ceiling (AX56) | Maximum grade is İlmelyakîn | Never claims experiential certainty |
| Transparency (AX57) | Every output discloses epistemic status | Which lenses ran, which failed |
| Coverage ceiling (T17) | Max 6/7 lenses — Ahfâ is unmappable | Structural incompleteness by design |
| Adaptive grading | 5-factor GradeInput model | Grade depends on path, lens spread, kavaid compliance, lens count, tesanüd |
Quality Framework
| Quadrant | What It Checks |
|---|---|
| Q-1 Coverage | All 7 faculties engaged? Multiplicative gate applies. |
| Q-2 Grade | Epistemic degree: Tasavvur → Tasdik → İlmelyakîn |
| Q-3 Kavaid | All 8 formal boundary constraints pass? |
| Q-4 Completeness | Max 6/7 — one dimension permanently inaccessible (T17) |
Path Classification
| Path | Trigger | Lenses Fired | Grade Range |
|---|---|---|---|
| P0 | Quick factual, no domain vocabulary | None (direct answer) | — |
| P1 | Structural analysis vocabulary | 2–3 relevant lenses | Tasavvur–Tasdik |
| P2 | Epistemic evaluation vocabulary | 3–5 lenses | Tasdik |
| P3 | Diagnostic / comparative vocabulary | 5–7 lenses | Tasdik–İlmelyakîn |
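The two-tier selection behind this table (weighted vocabulary hits plus lens scores, described in the pipeline section) might be sketched like this. The weights and thresholds are pure assumptions for illustration — the real classifier's values are not documented here:

```python
# Hedged sketch of P0-P3 path selection from a combined signal.
# Weights, saturation point, and thresholds are illustrative assumptions.
def classify_path(vocab_hits: int, lens_score: float) -> str:
    """Blend vocabulary evidence with lens evidence, then bucket by path."""
    signal = 0.6 * min(1.0, vocab_hits / 5.0) + 0.4 * lens_score
    if signal < 0.15:
        return "P0"   # quick factual: answer directly, fire nothing
    if signal < 0.4:
        return "P1"   # structural: 2-3 relevant lenses
    if signal < 0.7:
        return "P2"   # epistemic evaluation: 3-5 lenses
    return "P3"       # diagnostic/comparative: 5-7 lenses
```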
Also Includes
ai_assert — Runtime AI Output Validation
A zero-dependency constraint verifier for any LLM output (278 lines):
```python
from ai_assert import ai_assert, valid_json, max_length, contains

result = ai_assert(
    prompt="Return a JSON object with a 'greeting' key",
    constraints=[valid_json(), max_length(200), contains("hello")],
    generate_fn=my_llm,
    max_retries=3,
)
```
Features: `CheckResult` type, constraint composition, stochastic retry with feedback, `@reliable()` decorator, JSON schema validation. See `examples/basic_usage.py`.
arc_solver — ARC-AGI Puzzle Solver
A pure-stdlib ARC-AGI solver with ~25 DSL primitives and program synthesis:
```
python scripts/arc_eval.py ARC-AGI/data/training -v
# 33/400 = 8.2% on ARC-AGI-1 training set, 83 seconds, zero dependencies
```
Architecture: grid.py (utilities) → types.py (Grid/Object/Transform) → dsl.py (25 primitives) → synthesis.py (program search) → solver.py (orchestration).
babilong_eval — BABILong Benchmark Harness
Evaluation harness for the BABILong long-context benchmark:
```
python scripts/babilong_eval.py
```
Supports BM25 and sentence-transformer filtering, prompt-chaining across context lengths (1K–128K tokens), and statistical analysis with Wilson confidence intervals.
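The Wilson interval the harness reports is the standard score interval for a binomial proportion; a self-contained implementation looks like this (the harness's actual code may differ):

```python
# Wilson score interval for a binomial proportion -- the standard formula,
# not necessarily the harness's exact implementation.
from math import sqrt


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% (by default) Wilson confidence interval for successes/n."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, center - half), min(1.0, center + half))
```

Unlike the naive normal approximation, Wilson stays inside [0, 1] and behaves sensibly at proportions near 0 or 1, which matters for hard long-context tasks where accuracy can be very low.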
Fidelity Funnel — Diophantine Conjecture
A formal conjecture about Diophantine equation systems: for k independent equations in n variables, the fraction of integer tuples satisfying exactly m equations is monotonically non-increasing as m grows. Includes brute-force verification, independence heuristics, and adversarial test cases.
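The shape of such a brute-force check is easy to show on a toy instance: count integer tuples in a small box by how many of k equations they satisfy exactly, then confirm the fractions do not increase with m. The two equations below are toy examples, not the project's test cases:

```python
# Toy brute-force check of the conjecture's monotone shape. The equations
# and box are illustrative, not the fidelity_funnel module's cases.
from itertools import product


def exact_match_fractions(equations, box, n_vars):
    """fractions[m] = share of tuples satisfying exactly m equations."""
    counts = [0] * (len(equations) + 1)
    total = 0
    for tup in product(box, repeat=n_vars):
        m = sum(1 for eq in equations if eq(tup))
        counts[m] += 1
        total += 1
    return [c / total for c in counts]


eqs = [lambda t: t[0] + t[1] == t[2],   # x + y = z
       lambda t: t[0] * t[1] == t[2]]   # x * y = z
fracs = exact_match_fractions(eqs, range(-5, 6), 3)
```

On this box the fractions fall off sharply: most tuples satisfy neither equation, a thin slice satisfies one, and only isolated points (such as x = y = 2, z = 4) satisfy both.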
Perfect Cuboid — Epistemic Analysis Showcase
The perfect cuboid problem (does a rectangular box exist with all 7 lengths integer?) as a showcase of the epistemic framework: modular sieve search, cascade analysis (4-equation constraint propagation), fidelity spectrum, holographic self-similarity check, graded epistemic claims (AX56-compliant — highest grade intentionally absent), and full kavaid compliance report.
Project Structure
```
dusun/
├── ai_assert.py            # Runtime AI output validation (zero-dep)
├── DUSUN.md                # Framework specification v2.2.0
├── pyproject.toml          # v3.1.0 — Python 3.10+, mcp >= 1.20
│
├── fw_server/              # MCP server (14 files, ~5000 lines)
│   ├── server.py           # 20 MCP tools via FastMCP
│   ├── dusun.py            # Core dusun() pipeline (~1700 lines)
│   ├── adapters.py         # 7 stateless lens adapters
│   ├── session.py          # Session ledger (Channel 3)
│   ├── tracer.py           # 5-layer execution tracing
│   ├── persistence.py      # Atomic state persistence
│   ├── context.py          # Enriched tool returns (Channel 2)
│   ├── grade_map.py        # Score → grade mapping
│   ├── calibration.py      # Ground-truth calibration
│   ├── cross_validation.py # Cross-lens correlation (Phase 5)
│   ├── memory_bank.py      # Markdown state export
│   └── data/               # Bundled framework files
│
├── kavram_sozlugu/         # Lens #1 — Ontology (7 Sifat, 99 Isim)
├── mereoloji/              # Lens #2 — Part-whole, CEM M1–M5, Telos
├── fol_formalizasyon/      # Lens #3 — First-order logic
├── bayes_analiz/           # Lens #4 — Bayesian inference
├── oyun_teorisi/           # Lens #5 — Game theory, station payoffs
├── kategori_teorisi/       # Lens #6 — Category theory, functors
├── holografik/             # Lens #7 — 22-dim Besmele seed vectors
│
├── bileshke/               # Composite engine — 7 lenses → quality report
├── kaskad/                 # Cascade DAG — 7 inference chains + Beauty
├── hcp/                    # Holographic Context Protocol
├── sentinel/               # Epistemic failure detection (3 arms)
├── emergence/              # Top-down emergence model
│
├── fw_doctor/              # Self-diagnostic CLI tool (10 checks)
├── fidelity_funnel/        # Diophantine fidelity conjecture
├── perfect_cuboid/         # Perfect cuboid epistemic showcase
├── arc_solver/             # ARC-AGI puzzle solver
├── babilong_eval/          # BABILong benchmark harness
│
├── kaynak_metinler/        # Source texts for calibration
│   ├── birinciSoz/         # First Word — axiom map + expected scores
│   ├── ikincisoz/          # Second Word
│   └── ucuncusoz/          # Third Word
│
├── tests/                  # 63 test files, 3395+ tests
├── scripts/                # 17 utility/benchmark scripts
├── examples/               # Usage examples
├── docs/                   # Design docs, plans, analysis
└── results/                # Benchmark output data
```
Package Sizes (approximate)
| Module | Files | Lines | Role |
|---|---|---|---|
| fw_server | 14 | ~5,300 | MCP server + pipeline |
| kaskad | 5 | ~2,170 | Cascade inference engine |
| holografik | 4 | ~1,470 | Lens #7 — topological/holographic |
| hcp | 6 | ~1,700 | Holographic Context Protocol |
| sentinel | 8 | ~1,720 | Epistemic failure detection |
| emergence | 2+plan | ~1,360 | Top-down emergence model |
| kategori_teorisi | 4 | ~1,400 | Lens #6 — category theory |
| fol_formalizasyon | 5 | ~1,350 | Lens #3 — first-order logic |
| bileshke | 4 | ~1,200 | Composite convergence engine |
| kavram_sozlugu | 5 | ~900 | Lens #1 — ontology |
| mereoloji | 5 | ~1,200 | Lens #2 — mereology |
| oyun_teorisi | 4 | ~1,300 | Lens #5 — game theory |
| bayes_analiz | 4 | ~900 | Lens #4 — Bayesian inference |
| tests | 63 | ~27,000 | Test suite |
The Four Channels (Hayat Bridge)
The framework operates through 4 integration channels:
| Channel | Mechanism | Purpose |
|---|---|---|
| Ch-1: Tool Descriptions | Enriched MCP tool docstrings | Framework keywords visible during tool selection |
| Ch-2: Enriched Returns | `build_framework_context()` wrapper | Directives, grade ceilings, anomalies injected into every tool response |
| Ch-3: Session Ledger | `SessionLedger` tracking | Grade ceiling, cascade health, MUST/NEVER enforcement across multi-tool sessions |
| Ch-4: Cascade Forward | Grade degradation propagation | NOMINAL → DEGRADED → CRITICAL state machine |
Ground-Truth Calibration
Three source texts from Risale-i Nur are bundled for calibration:
Each provides:
- `original.md` — structural analysis of the original text
- `axiom_map.json` — axiom ID → passage references with relevance grades
- `expected_scores.json` — expected per-lens score ranges and path/grade expectations
The calibrate_source_texts tool runs all 3 source texts through the full pipeline and reports deviations from expected scores.
Development
```
git clone https://github.com/kaantahti/dusun2.git
cd dusun2
pip install -e ".[dev]"
python -m pytest tests/ -q
```
3395+ tests across 63 files, covering:
- Every lens module (7 dedicated test files)
- Full pipeline (dusun, bileshke, kavaid, kaskad integration)
- 8-phase WATERFALL plan regression tests
- 9 sentinel test files
- Hayat Bridge 4-channel integration tests
- DUSUN.md structural consistency (66 axioms, 18 theorems, 8 kavaid cross-references)
- Persistence round-trips, session ledger, tracer
CI: GitHub Actions on Python 3.11 / 3.12 / 3.13.
Requirements
- Python >= 3.10
- `mcp >= 1.20` (installed automatically)
- Optional: `datasets`, `openai`, `rank-bm25`, `sentence-transformers` (for babilong_eval)
License
MIT — see LICENSE.
Copyright (c) 2026 Kaan Tahti