
Σ OVERWATCH — Reality Await Layer (RAL) control plane for agentic AI


CI · License: MIT · Python 3.10+

Institutional Decision Infrastructure

Truth · Reasoning · Memory

🚀 Start Here · 🔁 Hero Demo · 🏢 Boardroom Brief · 📜 Specs · 🗺️ Navigation · 🔬 RAL


The Problem

Your organization makes thousands of decisions. Almost none are structurally recorded with their reasoning, evidence, or assumptions.

  • Leader leaves → their rationale leaves with them.
  • Conditions change → nobody detects stale assumptions.
  • Incident occurs → root-cause analysis becomes guessing.
  • AI accelerates decisions 100× → governance designed for human speed fails silently.

This is not a documentation gap. It is a missing infrastructure layer.

Every institution pays this cost — in re-litigation, audit overhead, governance drag, and silent drift. The question: keep paying in consequences, or invest in prevention.

Full economic tension analysis · Boardroom brief · Risk model


The Solution

Σ OVERWATCH fills the void between systems of record and systems of engagement with a system of decision.

Every decision flows through three primitives:

| Primitive | Artifact | What It Captures |
| --- | --- | --- |
| Truth | Decision Ledger Record (DLR) | What was decided, by whom, with what evidence |
| Reasoning | Reasoning Scaffold (RS) | Why this choice — claims, counter-claims, weights |
| Memory | Decision Scaffold + Memory Graph (DS + MG) | Reusable templates + queryable institutional memory |

When assumptions decay, Drift fires. When drift exceeds tolerance, a Patch corrects it. This is the Drift → Patch loop — continuous self-correction.
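The Drift → Patch loop reduces to a TTL check on evidence. The sketch below is a minimal illustration with hypothetical names; it is not the coherence_ops API.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    claim: str
    asserted_at: float   # when the supporting evidence was recorded
    ttl: float           # seconds the evidence remains valid

def drift(assumption: Assumption, now: float) -> bool:
    """Drift fires when an assumption's evidence outlives its TTL."""
    return now - assumption.asserted_at > assumption.ttl

def patch(assumption: Assumption, now: float) -> Assumption:
    """A patch re-verifies the claim and resets the evidence clock."""
    return Assumption(assumption.claim, asserted_at=now, ttl=assumption.ttl)

a = Assumption("vendor SLA is 99.9%", asserted_at=0.0, ttl=86_400.0)
now = 100_000.0                 # more than a day later: TTL exceeded
assert drift(a, now)            # Drift fires
a = patch(a, now)               # Patch corrects it
assert not drift(a, now)        # back within tolerance
```

In the real system the patch step carries its own evidence and is sealed into the ledger; here it simply resets the clock.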


Progressive Escalation

Coherence Ops scales from a single decision loop to institutional credibility infrastructure:

| Level | Scale | What It Proves |
| --- | --- | --- |
| Mini Lattice | 12 nodes | Mechanics: one claim, three evidence streams, TTL, drift, patch, seal |
| Enterprise Lattice | ~500 nodes | Complexity: K-of-N quorum, correlation groups, regional validators, sync nodes |
| Credibility Engine | 30,000–40,000 nodes | Survivability: multi-region, automated drift, continuous sealing, hot/warm/cold |

Same primitives. Same artifacts. Same loop. Different scale.

Examples: Mini Lattice · Enterprise Lattice · Credibility Engine Scale · Full docs

Demo: Credibility Engine Cockpit — static dashboard, 7 panels, 30 seconds to institutional state

Stage 2: Simulated Engine — live simulation driver, 4 scenarios (Day0–Day3), 2-second ticks

Stage 3: Runtime Engine — real engine with JSONL persistence + API endpoints


Why Scale Changes Everything

At 12 nodes, a human can trace every dependency. At 500, hidden correlations emerge. At 40,000, manual governance is impossible.

| Principle | Why It Matters at Scale |
| --- | --- |
| Truth decays | Evidence has a shelf life. Without TTL discipline, stale assertions masquerade as current truth. |
| Silence is signal | A lattice that stops producing drift signals is not healthy — it is blind. Watch for instability, not absence. |
| Independence must be enforced | Sources that appear independent may share infrastructure. Correlation groups make hidden dependencies visible. |
| Drift is normal | 100–400 drift events per day is steady state at production scale. Drift is maintenance fuel, not crisis. |
| Seal authority matters | No single region should control institutional truth. Authority distribution (no region >40%) prevents capture. |

At every scale, the same question: can the institution trust its own assertions right now? The Credibility Index answers it with a number. The Drift→Patch→Seal loop keeps that number honest.


Stage 2 — Simulated Credibility Engine

Run the simulation driver to power the dashboard with live synthetic data:

# Terminal 1: Start simulation (Day0 = stable baseline)
python sim/credibility-engine/runner.py --scenario day0

# Terminal 2: Serve dashboard
python -m http.server 8000

Visit: http://localhost:8000/dashboard/credibility-engine-demo/

Four scenarios model progressive institutional entropy: Day0 (stable), Day1 (entropy emerges), Day2 (coordinated darkness), Day3 (external mismatch + recovery). The dashboard updates every 2 seconds.

Simulation docs · Dashboard


Stage 3 — Real Credibility Engine Runtime

Run the API server to serve live credibility state:

uvicorn dashboard.api_server:app --reload

The engine persists live state under `data/credibility/`. The dashboard can run in API mode by setting `DATA_MODE = "API"` in `app.js`.

| Endpoint | Description |
| --- | --- |
| `GET /api/credibility/snapshot` | Credibility Index, band, components, trend |
| `GET /api/credibility/claims/tier0` | Tier 0 claims with quorum and TTL |
| `GET /api/credibility/drift/24h` | Drift events by severity, category, region |
| `GET /api/credibility/correlation` | Correlation cluster map |
| `GET /api/credibility/sync` | Sync plane integrity |
| `GET /api/credibility/packet` | Sealed credibility packet (DLR/RS/DS/MG) |

Runtime Engine docs · API integration
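A client can poll the snapshot endpoint and gate downstream decisions on the index. This is a sketch only: the `credibility_index` field name and the local base URL are assumptions; see the API integration docs for the actual response shape.

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed local uvicorn address

def get_snapshot(base: str = BASE) -> dict:
    """Fetch the live credibility snapshot from the Stage 3 runtime."""
    with urllib.request.urlopen(f"{base}/api/credibility/snapshot") as resp:
        return json.load(resp)

def should_halt(snapshot: dict) -> bool:
    """Below 50 the score band is 'Compromised': halt dependent decisions."""
    return snapshot.get("credibility_index", 100) < 50

assert should_halt({"credibility_index": 42})
assert not should_halt({"credibility_index": 97})
```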


Try It (5 Minutes)

git clone https://github.com/8ryanWh1t3/DeepSigma.git && cd DeepSigma
pip install -r requirements.txt

# Score coherence (0–100, A–F)
python -m coherence_ops score ./coherence_ops/examples/sample_episodes.json --json

# Full pipeline: episodes → DLR → RS → DS → MG → report
python -m coherence_ops.examples.e2e_seal_to_report

# Why did we make this decision?
python -m coherence_ops iris query --type WHY --target ep-001

Drift → Patch in 60 seconds (v0.3.0):

python -m coherence_ops.examples.drift_patch_cycle
# BASELINE 90.00 (A) → DRIFT 85.75 (B) → PATCH 90.00 (A)

👉 Full walkthrough: HERO_DEMO.md — 8 steps, every artifact touched.


Golden Path (v0.5.1)

One command. One outcome. No ambiguity. Proves the full 7-step loop end-to-end: Connect → Normalize → Extract → Seal → Drift → Patch → Recall.

# Local (fixture mode — no credentials)
deepsigma golden-path sharepoint \
  --fixture demos/golden_path/fixtures/sharepoint_small --clean

# Or via the coherence CLI
coherence golden-path sharepoint \
  --fixture demos/golden_path/fixtures/sharepoint_small

# Docker
docker compose --profile golden-path run --rm golden-path

Output: golden_path_output/ with per-step JSON artifacts and summary.json.

👉 Details: demos/golden_path/README.md


Trust Scorecard (v0.6.0)

Measurable SLOs from every Golden Path run. Generated automatically in CI.

python -m tools.trust_scorecard \
  --input golden_path_ci_out --output trust_scorecard.json

# With coverage
python -m tools.trust_scorecard \
  --input golden_path_ci_out --output trust_scorecard.json --coverage 85.3

Output: trust_scorecard.json with metrics, SLO checks, and timing data.

👉 Spec: specs/trust_scorecard_v1.md · Dashboard: Trust Scorecard tab


Creative Director Suite (v0.6.2)

Excel-first Coherence Ops — govern creative decisions in a shared workbook that any team can edit in SharePoint. No code required.

# Generate the governed workbook
pip install -e ".[excel]"
python tools/generate_cds_workbook.py

# Explore the sample dataset
ls datasets/creative_director_suite/samples/

The workbook includes a BOOT sheet (LLM system prompt), 7 named governance tables (tblTimeline, tblDeliverables, tblDLR, tblClaims, tblAssumptions, tblPatchLog, tblCanonGuardrails), and a Coherence Index dashboard.

Quickstart:

  1. Download the template workbook from templates/creative_director_suite/
  2. Fill BOOT!A1 (or use the pre-filled template)
  3. Attach workbook to your LLM app (ChatGPT, Claude, Copilot)
  4. Respond to: "What Would You Like To Do Today?"
  5. Paste write-back rows into Excel tables

Docs: Excel-First Guide · Boot Protocol · Table Schemas · Dataset


Excel-first Money Demo (v0.6.3)

One command. Deterministic Drift→Patch proof — no LLM, no network.

python -m demos.excel_first --out out/excel_money_demo

# Or via console entry point
excel-demo --out out/excel_money_demo

Output: workbook.xlsx, run_record.json, drift_signal.json, patch_stub.json, coherence_delta.txt

Docs: Money Demo · BOOT Validator · MDPT Power App Pack


MDPT Beta Kit (v0.6.4)

Registry index, product CLI, and Power App starter kit for governed prompt operations.

```mermaid
flowchart TB
    subgraph SharePoint["SharePoint Lists"]
        PC[PromptCapabilities<br/>Master Registry]
        PR[PromptRuns<br/>Execution Log]
        DP[DriftPatches<br/>Patch Queue]
    end

    subgraph Generator["MDPT Index Generator"]
        CSV[CSV Export] --> GEN[generate_prompt_index.py]
        GEN --> IDX[prompt_index.json]
        GEN --> SUM[prompt_index_summary.md]
    end

    subgraph Lifecycle["Prompt Lifecycle"]
        direction LR
        INDEX[1. Index] --> CATALOG[2. Catalog]
        CATALOG --> USE[3. Use]
        USE --> LOG[4. Log]
        LOG --> DRIFT[5. Drift]
        DRIFT --> PATCH[6. Patch]
        PATCH -.->|refresh| INDEX
    end

    PC -->|export| CSV
    INDEX -.-> PC
    USE -.-> PR
    DRIFT -.-> DP
    PATCH -.-> DP

    style SharePoint fill:#0078d4,stroke:#0078d4,color:#fff
    style Generator fill:#16213e,stroke:#0f3460,color:#fff
    style Lifecycle fill:#0f3460,stroke:#0f3460,color:#fff
```

# Generate MDPT Prompt Index from SharePoint export
deepsigma mdpt index --csv prompt_export.csv --out out/mdpt

# Product CLI
deepsigma doctor                                    # Environment health check
deepsigma demo excel --out out/excel_money_demo     # Excel-first Money Demo
deepsigma validate boot <file.xlsx>                 # BOOT contract validation
deepsigma golden-path sharepoint --fixture ...      # 7-step Golden Path

Docs: CLI Reference · MDPT · Power App Starter Kit


Credibility Engine (v0.6.4)

Institutional-scale claim lattice with formal credibility scoring, evidence synchronization, and automated drift governance.

Credibility Index — composite 0–100 score from 6 components:

| Component | What It Measures |
| --- | --- |
| Tier-weighted claim integrity | Higher-tier claims weigh more |
| Drift penalty | Active drift signals reduce score |
| Correlation risk penalty | Shared source dependencies penalized |
| Quorum margin compression | Thin redundancy penalized |
| TTL expiration penalty | Stale evidence penalized |
| Independent confirmation bonus | 3+ independent sources rewarded |

| Score | Band | Action |
| --- | --- | --- |
| 95–100 | Stable | Monitor |
| 85–94 | Minor drift | Review |
| 70–84 | Elevated risk | Patch required |
| 50–69 | Structural degradation | Immediate remediation |
| <50 | Compromised | Halt dependent decisions |
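The band thresholds above translate directly into code. This is an illustrative mapping of score to (band, action), not the engine's implementation:

```python
def band(score: float) -> tuple[str, str]:
    """Map a Credibility Index score to its band and required action."""
    if score >= 95:
        return ("Stable", "Monitor")
    if score >= 85:
        return ("Minor drift", "Review")
    if score >= 70:
        return ("Elevated risk", "Patch required")
    if score >= 50:
        return ("Structural degradation", "Immediate remediation")
    return ("Compromised", "Halt dependent decisions")

assert band(97.0) == ("Stable", "Monitor")
assert band(61.5) == ("Structural degradation", "Immediate remediation")
assert band(42.0)[1] == "Halt dependent decisions"
```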

Institutional Drift Categories — 5 scale-level patterns, composed from the 8 runtime drift types: timing entropy, correlation drift, confidence volatility, TTL compression, and external mismatch.

Sync Plane — evidence timing infrastructure. Sync nodes are evidence about evidence. Event time vs. ingest time, monotonic sequences, independent beacons, watermark logic.
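Two of the Sync Plane ideas can be sketched in a few lines: watermarks are monotonic (late events never move them backward), and the gap between event time and ingest time is itself a signal. A hypothetical illustration, not the engine's sync-plane implementation:

```python
def advance_watermark(watermark: float, event_time: float) -> float:
    """Watermarks only move forward; a late event cannot rewind them."""
    return max(watermark, event_time)

def sync_lag(ingest_time: float, event_time: float) -> float:
    """Event-time vs. ingest-time gap; growing lag signals a degraded plane."""
    return ingest_time - event_time

wm = 0.0
for et in [10.0, 12.0, 11.0, 15.0]:   # 11.0 arrives out of order
    wm = advance_watermark(wm, et)
assert wm == 15.0
assert sync_lag(ingest_time=20.0, event_time=15.0) == 5.0
```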

Category Definition: Coherence Ops is not monitoring, observability, or compliance. It is the operating layer that prevents institutions from lying to themselves over time.

Deployment:

  • MVP: 6–8 engineers, $1.5M–$3M/year
  • Production: 30k–40k nodes, 3+ regions, $6M–$10M/year (~$170–$280/node/year)

Docs: Credibility Engine · Credibility Index · Sync Plane · Deployment Patterns

Diagrams: Lattice Architecture · Drift Loop

Examples: Mini Lattice · Enterprise Lattice · Scale

Guardrails: Abstract model for institutional credibility infrastructure. Not domain-specific. Not modeling real-world weapons. Pure decision infrastructure.


Repo Structure

DeepSigma/
├── START_HERE.md          # Front door
├── HERO_DEMO.md           # 5-min hands-on walkthrough
├── NAV.md                 # Navigation index
├── category/              # Economic tension, boardroom brief, risk model
├── canonical/             # Normative specs: DLR, RS, DS, MG, Prime Constitution
├── coherence_ops/         # Python library + CLI + examples
├── deepsigma/cli/         # Unified product CLI (doctor, demo, validate, mdpt, golden-path)
├── mdpt/                  # MDPT tools, templates, Power App starter kit
├── specs/                 # JSON schemas (11 schemas)
├── examples/              # Episodes, drift events, demo data
├── llm_data_model/        # LLM-optimized canonical data model
├── datasets/              # Creative Director Suite sample data (8 CSVs)
├── docs/                  # Extended docs (vision, IRIS, policy packs, Excel-first)
├── templates/             # Excel workbook templates
├── docs/credibility-engine/ # Credibility Index, Sync Plane, deployment patterns
├── mermaid/               # 39+ architecture & flow diagrams
├── engine/                # Compression, degrade ladder, supervisor
├── dashboard/             # React dashboard + mock API
├── adapters/              # MCP, OpenClaw, SharePoint, Power Platform, AskSage, Snowflake, LangChain
├── demos/                 # Golden Path end-to-end demo + fixtures
└── release/               # Release readiness checklist

CLI Quick Reference

| Command | Purpose |
| --- | --- |
| `python -m coherence_ops audit <path>` | Cross-artifact consistency audit |
| `python -m coherence_ops score <path> [--json]` | Coherence score (0–100, A–F) |
| `python -m coherence_ops mg export <path> --format=json` | Export Memory Graph |
| `python -m coherence_ops iris query --type WHY --target <id>` | Why was this decided? |
| `python -m coherence_ops iris query --type WHAT_DRIFTED --json` | What assumptions decayed? |
| `python -m coherence_ops demo <path>` | Score + IRIS in one command |
| `coherence reconcile <path> [--auto-fix] [--json]` | Reconcile cross-artifact inconsistencies |
| `coherence schema validate <file> --schema <name>` | Validate JSON against named schema |
| `coherence dte check <path> --dte <spec>` | Check episodes against DTE constraints |
| `deepsigma doctor` | Environment health check |
| `deepsigma demo excel [--out DIR]` | Excel-first Money Demo |
| `deepsigma validate boot <file.xlsx>` | BOOT contract validation |
| `deepsigma mdpt index --csv <file>` | Generate MDPT Prompt Index |
| `deepsigma golden-path <source> [--fixture <path>]` | 7-step end-to-end Golden Path |

Connectors (v0.6.0)

All connectors conform to the Connector Contract v1.0 — a standard interface with a canonical Record Envelope for provenance, hashing, and access control.

| Connector | Transport | MCP Tools | Docs |
| --- | --- | --- | --- |
| SharePoint | Graph API | `sharepoint.list / get / sync` | docs/26 |
| Power Platform | Dataverse Web API | `dataverse.list / get / query` | docs/27 |
| AskSage | REST API | `asksage.query / models / datasets / history` | docs/28 |
| Snowflake | Cortex + SQL API | `cortex.complete / embed / snowflake.query / tables / sync` | docs/29 |
| LangChain | Callback | Governance + Exhaust handlers | docs/23 |
| OpenClaw | HTTP | Dashboard API client | adapters/openclaw/ |
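The Record Envelope idea — one canonical wrapper per record, hashed for provenance — can be sketched as below. Field names here are illustrative assumptions; the normative shape is defined by the Connector Contract v1.0.

```python
import hashlib
import json

def make_envelope(source: str, record: dict) -> dict:
    """Wrap a connector record with a tamper-evident provenance hash."""
    # Canonical serialization: sorted keys, no whitespace, so the same
    # record always hashes identically regardless of key order.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return {
        "source": source,
        "payload": record,
        "sha256": hashlib.sha256(canonical.encode()).hexdigest(),
    }

e1 = make_envelope("sharepoint", {"id": "doc-1", "title": "Q3 plan"})
e2 = make_envelope("sharepoint", {"title": "Q3 plan", "id": "doc-1"})
assert e1["sha256"] == e2["sha256"]   # key order does not change the hash
```

Canonical serialization is the design point: without it, two connectors ingesting the same record could produce different hashes and break cross-connector provenance checks.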

Key Links

| Resource | Path |
| --- | --- |
| Reality Await Layer (RAL) | ABOUT.md |
| Front door | START_HERE.md |
| Hero demo | HERO_DEMO.md |
| Boardroom brief | category/boardroom_brief.md |
| Economic tension | category/economic_tension.md |
| Risk model | category/risk_model.md |
| Canonical specs | /canonical/ |
| JSON schemas | /specs/ |
| Python library | /coherence_ops/ |
| IRIS docs | docs/18-iris.md |
| Docs map | docs/99-docs-map.md |

Operations

| Resource | Purpose |
| --- | --- |
| OPS_RUNBOOK.md | Run Money Demo, tests, diagnostics, incident playbooks |
| TROUBLESHOOTING.md | Top 20 issues — symptom → cause → fix → verify |
| CONFIG_REFERENCE.md | All CLI args, policy pack schema, environment variables |
| STABILITY.md | What's stable, what's not, versioning policy, v1.0 criteria |
| TEST_STRATEGY.md | Test tiers, SLOs, how to run locally, coverage |

Run with coverage:

pytest --cov=coherence_ops --cov-report=term-missing

Contributing

See CONTRIBUTING.md. All contributions must maintain consistency with Truth · Reasoning · Memory and the four canonical artifacts (DLR / RS / DS / MG).

License

See LICENSE.


Σ OVERWATCH — We don't sell agents. We sell the ability to trust them.
