
Self-governing development for Claude Code — graph-of-agents reasoning with 13 innovations, GCC/GSD/Ralph protocols, and MCP integration

Project description

CogniGraph

Graphs That Think — Self-Governing Development for Claude Code

Turn any codebase into a governed, self-improving reasoning network.
One command. Full governance. Zero cloud infrastructure.

Python 3.10+ · License: Apache 2.0 · Tests: 332 passing · MCP compatible · Patent: EP26162901.8


What if your development environment learned from every mistake and never repeated one?

CogniGraph is the first self-governing development tool. It transforms any knowledge graph into a reasoning network where each node is an autonomous LLM agent — then wraps your Claude Code sessions in structured protocols that enforce spec-before-code, atomic commits, and autonomous fix-test-fix loops. Install it, run kogni init, and your dev environment starts getting smarter every session.


Quick Start

```shell
pip install cognigraph[api]
cd your-project
kogni init
```

That's it. CogniGraph scans your repo, builds a knowledge graph, creates a governed CLAUDE.md, and registers an MCP server. Open Claude Code and you're running with:

  • GCC — session continuity, branch management, auto-commits
  • GSD — structured DISCUSS → PLAN → EXECUTE → VERIFY workflow
  • Ralph Loop — autonomous fix-test-fix iteration with safety guards
  • MCP tools — governed graph reasoning inside Claude Code

No cloud account. No infrastructure. Your machine, your API keys, your data.


What You Get

| Tool | What it does |
|---|---|
| `kogni init` | Scans repo, builds KG, injects governance protocols into CLAUDE.md |
| `kogni_context` | 500-token focused context for any entity (replaces 20-60K brute-force loading) |
| `kogni_reason` | Governed multi-agent reasoning over your knowledge graph |
| `kogni_inspect` | Graph structure inspection — nodes, edges, hubs, types |
| `kogni_search` | Semantic search across all KG nodes |
| `kogni run` | CLI reasoning query against any graph |
| `kogni serve` | REST API server with API key auth |
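For a rough picture of what `kogni_search` does, here is a toy version: score each node's text against the query and return the best matches. Real semantic search uses embeddings; this sketch uses word overlap only, and the node data is invented for illustration:

```python
def score(query, text):
    """Toy relevance: Jaccard overlap of lowercase word sets."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0

def search(nodes, query, top_k=2):
    # Rank every node by its score against the query, best first.
    ranked = sorted(nodes.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical node texts, standing in for a scanned repo's KG nodes.
nodes = {
    "auth": "JWT token auth middleware for the API layer",
    "billing": "stripe invoices and subscription billing",
    "graph": "knowledge graph construction from the repo",
}
top = search(nodes, "API auth token")  # → "auth" ranks first
```

An embedding-based version would swap `score` for cosine similarity over dense vectors; the ranking logic stays the same.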

How It Works

```
pip install → kogni init → Claude Code opens → MCP tools available
                                    ↓
                         GCC protocols injected
                         (session memory, branch management, auto-commits)
                                    ↓
                         GSD workflow active
                         (spec before code, atomic commits, verification)
                                    ↓
                         Ralph Loop ready
                         (autonomous iteration with binary criteria)
                                    ↓
                         Graph learns from every session
                         (Bayesian edge updates, convergence tracking)
```
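The "graph learns" step can be pictured as a conjugate Beta-Bernoulli update: treat each edge's reliability as a Beta-distributed belief and nudge it with each session's outcome. A minimal sketch under that assumption; the class and field names are illustrative, not CogniGraph internals:

```python
from dataclasses import dataclass

@dataclass
class EdgeBelief:
    """Beta-Bernoulli belief over an edge's reliability."""
    alpha: float = 1.0  # pseudo-count of sessions where the edge helped
    beta: float = 1.0   # pseudo-count of sessions where it did not

    def update(self, helped: bool) -> None:
        # Conjugate update: each observation adds one pseudo-count.
        if helped:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def weight(self) -> float:
        # Posterior mean serves as the edge weight for the next session.
        return self.alpha / (self.alpha + self.beta)

edge = EdgeBelief()
for outcome in [True, True, False, True]:
    edge.update(outcome)
print(round(edge.weight, 3))  # → 0.667
```

Because the update is just two counters, it costs nothing per session, which is what makes "learns from every session" cheap enough to run continuously.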

13 Patent-Protected Innovations

| # | Innovation | Module |
|---|---|---|
| 1 | PCST Activation — sublinear subgraph selection | `cognigraph.activation.pcst` |
| 2 | MasterObserver — zero-cost transparency layer | `cognigraph.orchestration.observer` |
| 3 | Convergent Message Passing — similarity-based termination | `cognigraph.orchestration.convergence` |
| 4 | Backend Fallback Chain — heterogeneous inference with cost budgets | `cognigraph.backends.fallback` |
| 5 | Hierarchical Aggregation — centrality-based topology-aware synthesis | `cognigraph.orchestration.aggregation` |
| 6 | SemanticSHACLGate — 3-layer OWL-aware governance | `cognigraph.ontology.semantic_shacl_gate` |
| 7 | Constrained F1 — joint answer quality + governance metric | `cognigraph.benchmarks.constrained_f1` |
| 8 | OntologyGenerator — automated OWL+SHACL from regulation text | `cognigraph.ontology.generator` |
| 9 | Adaptive Activation — dynamic Kmax from query complexity | `cognigraph.activation.adaptive` |
| 10 | Online Graph Learning — Bayesian edge weight updates | `cognigraph.learning.graph_learner` |
| 11 | LoRA Auto-Selection — per-entity adapter matching | `cognigraph.adapters.auto_select` |
| 12 | TAMR+ Connector — retrieval-to-reasoning pipeline | `cognigraph.connectors.tamr` |
| 13 | MCP Plugin — governed context engineering for Claude Code | `cognigraph.plugins.mcp_server` |
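As a concrete picture of innovation #3 (similarity-based termination), here is a toy loop that stops iterating once successive aggregate states barely change under cosine similarity. The threshold, vector states, and update rule are all assumptions for illustration, not CogniGraph's implementation:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def converge(state, step, threshold=0.999, max_rounds=50):
    """Run `step` until successive states are nearly identical."""
    for rounds in range(1, max_rounds + 1):
        new_state = step(state)
        if cosine(state, new_state) >= threshold:
            return new_state, rounds  # terminated early on similarity
        state = new_state
    return state, max_rounds  # hit the round budget instead

# Damped averaging toward a fixed point, standing in for agent message passing.
target = [0.2, 0.5, 0.3]
step = lambda s: [0.5 * a + 0.5 * t for a, t in zip(s, target)]
final, rounds = converge([1.0, 0.0, 0.0], step)
```

The point of terminating on similarity rather than a fixed round count is that easy queries stop early and hard ones get the full budget.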

The Three Protocols

GCC — Global Context Controller

Your Claude Code sessions have memory now. GCC gives every session structured continuity:

```
Session starts → reads last commit + branch state (~700 tokens)
Work happens   → auto-commits every 30 minutes
Session ends   → checkpoints progress, clears session log
Next session   → resumes in under 60 seconds from last checkpoint
```

No more re-explaining your codebase. No more lost context between sessions.
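The checkpoint itself can be as small as a JSON file. A minimal sketch of the write/resume cycle; the path and field names here are illustrative, not GCC's actual format:

```python
import json
import tempfile
from pathlib import Path

def save_checkpoint(path, branch, last_commit, summary):
    """Persist just enough state for the next session to resume."""
    path.write_text(json.dumps({
        "branch": branch,
        "last_commit": last_commit,
        "summary": summary,  # kept short so resume stays around 700 tokens
    }, indent=2))

def resume(path):
    if not path.exists():
        return None  # first session: no checkpoint yet
    return json.loads(path.read_text())

ckpt = Path(tempfile.mkdtemp()) / "session_checkpoint.json"
save_checkpoint(ckpt, "feature/auth", "a1b2c3d", "JWT middleware done; tests next")
state = resume(ckpt)
```

Keeping the checkpoint tiny is the design choice that makes resume fast: the next session reads a few hundred tokens instead of rescanning the repo.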

GSD — Get Shit Done

Spec before code. Every significant feature follows a structured workflow:

```
DISCUSS  →  What problem? What constraints? What's in scope?
PLAN     →  Atomic tasks, dependency graph, verification criteria
EXECUTE  →  One task = one commit, tests pass before moving on
VERIFY   →  3-source check: evidence vs plan vs success criteria
```

Scope creep gets captured in "deferred" — never mid-sprint.
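That ordering can be enforced mechanically. Below is a toy state machine for the four phases, with a deferred list for captured scope creep; it is illustrative only, not GSD's implementation:

```python
PHASES = ["DISCUSS", "PLAN", "EXECUTE", "VERIFY"]

class Workflow:
    def __init__(self):
        self.phase = PHASES[0]
        self.deferred = []  # scope creep lands here, never mid-sprint

    def advance(self):
        # Phases can only move forward, one step at a time.
        i = PHASES.index(self.phase)
        if i + 1 >= len(PHASES):
            raise ValueError("workflow already at VERIFY")
        self.phase = PHASES[i + 1]

    def defer(self, idea):
        # New scope discovered mid-sprint is captured, not executed.
        self.deferred.append(idea)

wf = Workflow()
wf.advance()           # DISCUSS → PLAN
wf.defer("dark mode")  # out-of-scope idea captured for later
```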

Ralph Loop — Autonomous Iteration

Feed a task with binary completion criteria. Ralph works until done or blocked:

```
# Binary criteria: "All tests pass. Build succeeds. No console errors."
# Ralph iterates: fix → test → check → fix → test → check → DONE
# Safety: max 20 iterations, no force-push, no deploy to main
```

Each iteration produces a GCC commit. If blocked after N attempts, Ralph stops and reports what it tried.
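The shape of that loop fits in a few lines. Here `fix` and `criteria_met` are placeholders for real work and a real binary check; the actual Ralph Loop also commits each iteration and reports what it tried on failure:

```python
def ralph_loop(fix, criteria_met, max_iterations=20):
    """Iterate fix → check until binary criteria pass or the budget runs out.

    Returns (done, iterations_used). `criteria_met` must be binary:
    e.g. "all tests pass AND build succeeds", never a judgment call.
    """
    for iteration in range(1, max_iterations + 1):
        if criteria_met():
            return True, iteration - 1  # done before this attempt was needed
        fix()
    return criteria_met(), max_iterations  # budget exhausted: report honestly

# Toy task: each "fix" makes one more test pass; done at 3 passing.
state = {"passes": 0}
done, iters = ralph_loop(
    fix=lambda: state.update(passes=state["passes"] + 1),
    criteria_met=lambda: state["passes"] >= 3,
)
```

The hard cap on iterations is the safety guard: a task that never converges stops burning tokens after `max_iterations` attempts instead of looping forever.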


Backends

| Backend | Models | Install |
|---|---|---|
| Anthropic | Claude Haiku / Sonnet / Opus | `pip install cognigraph[api]` |
| OpenAI | GPT-4o / GPT-4o-mini | `pip install cognigraph[api]` |
| AWS Bedrock | Any Bedrock model | `pip install cognigraph[api]` |
| Ollama | Any local model | `pip install cognigraph[api]` |
| vLLM | GPU inference + LoRA | `pip install cognigraph[gpu]` |
| llama.cpp | CPU GGUF models | `pip install cognigraph[cpu]` |

Smart routing sends complex queries to capable models and simple queries to cheap ones — all within your cost budget.

```python
from cognigraph.backends.fallback import BackendFallbackChain
from cognigraph.backends.api import AnthropicBackend
# OllamaBackend ships with cognigraph as well; import it from its backend module.

chain = BackendFallbackChain([
    AnthropicBackend(model="claude-haiku-4-5-20251001"),
    OllamaBackend(model="qwen2.5:0.5b"),
])
# Tries Anthropic first → falls back to local Ollama automatically
```
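Conceptually, a fallback chain just walks the backends in order and returns the first success. A generic sketch of the pattern, with plain callables standing in for backend objects (not CogniGraph's internal code):

```python
class BackendError(Exception):
    pass

def call_with_fallback(backends, prompt):
    """Try each backend in order; raise only if all of them fail."""
    errors = []
    for backend in backends:
        try:
            return backend(prompt)
        except BackendError as exc:
            errors.append(exc)  # record the failure, move to the next backend
    raise BackendError(f"all {len(backends)} backends failed: {errors}")

def flaky(prompt):
    # Stands in for a remote API that is rate limited or down.
    raise BackendError("rate limited")

def local(prompt):
    # Stands in for a local model that always answers.
    return f"local answer to: {prompt}"

answer = call_with_fallback([flaky, local], "summarize the repo")
```

A real chain would also check a cost budget before each attempt, which is how "heterogeneous inference with cost budgets" fits the same loop.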

Free vs Pro

| Feature | Free | Pro |
|---|---|---|
| Innovations 1-5 (PCST, Observer, Convergence, Fallback, Aggregation) | Yes | Yes |
| Innovations 6-13 (SemanticSHACL, Graph Learning, LoRA, TAMR+, MCP) | No | Yes |
| GCC / GSD / Ralph protocols | Yes | Yes |
| kogni init + CLAUDE.md generation | Yes | Yes |
| MCP tools (context, reason, inspect, search) | Basic | Full |
| Online graph learning (Bayesian edge updates) | No | Yes |
| SemanticSHACLGate governance | No | Yes |
| LoRA auto-selection | No | Yes |
| REST API server | Yes | Yes |
| Commercial use | Yes | Yes |

Governance

The SemanticSHACLGate enforces 3-layer semantic validation on every reasoning output:

  1. Framework Fidelity — agents cite correct regulatory frameworks
  2. Scope Boundary — responses stay within assigned domain
  3. Cross-Reference Integrity — proper attribution for cross-framework mentions

MultiGov-30 benchmark: 99.7% governance accuracy (FF: 100%, SB: 100%, CR: 98.3%).
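A toy version of this layered gate: run each check, collect failures, and pass only output that clears all of them. The check bodies below are crude keyword-matching stand-ins; the real SemanticSHACLGate validates against OWL ontologies and SHACL shapes:

```python
def framework_fidelity(text, allowed_frameworks):
    # Stand-in check: every framework the text names must be allowed.
    named = {f for f in ("GDPR", "AI Act", "DORA") if f in text}
    return named <= set(allowed_frameworks)

def scope_boundary(text, banned_terms):
    # Stand-in check: the response must not stray into banned topics.
    return not any(term in text for term in banned_terms)

def validate(text, allowed_frameworks, banned_terms=()):
    """Return (passed, failed_layer_names) for a reasoning output."""
    checks = {
        "framework_fidelity": framework_fidelity(text, allowed_frameworks),
        "scope_boundary": scope_boundary(text, banned_terms),
        # cross_reference_integrity would need the graph itself; omitted here
    }
    failed = [name for name, ok in checks.items() if not ok]
    return not failed, failed

ok, failed = validate("GDPR Art. 5 requires data minimisation.", ["GDPR"])
```

Running the layers independently is what lets the gate report per-layer scores (FF, SB, CR) rather than a single opaque pass/fail.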


Benchmarks

| Metric | CogniGraph | Single-Agent Baseline | Improvement |
|---|---|---|---|
| Constrained F1 | 0.757 | 0.328 | +131% |
| Governance Accuracy | 99.7% | N/A | |
| Token Efficiency | 500 tokens/query | 20-60K tokens | 40-120x |
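For intuition about what a joint answer-quality and governance metric looks like, one simple formulation multiplies token-level F1 by a governance pass rate, so an answer that violates governance earns proportionally less credit. This is an illustration of the idea only, not the patented Constrained F1 definition:

```python
def f1(precision, recall):
    # Standard harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def constrained_f1(precision, recall, governance_pass_rate):
    # Illustrative joint score: governance violations scale the credit down.
    return f1(precision, recall) * governance_pass_rate

score = constrained_f1(precision=0.8, recall=0.7, governance_pass_rate=0.997)
```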

Python API

```python
from cognigraph import CogniGraph
from cognigraph.backends.api import AnthropicBackend

graph = CogniGraph.from_json("knowledge_graph.json")
graph.set_default_backend(AnthropicBackend(model="claude-haiku-4-5-20251001"))

result = graph.reason("How does GDPR conflict with the AI Act?")
print(result.answer)
print(f"Confidence: {result.confidence:.2f}")
print(f"Governance: {result.governance_score:.3f}")
print(f"Cost: ${result.cost_usd:.4f}")
```

Patent & IP Notice

CogniGraph implements methods described in European Patent Application EP26162901.8 (filed 6 March 2026, Quantamix Solutions B.V.). See NOTICE for full details.

Academic and research use is freely permitted under Apache 2.0.


Citation

```bibtex
@article{kumar2026cognigraph,
  title   = {CogniGraph: Governed Intelligence through Graph-of-Agents Reasoning
             over Knowledge Graph Topologies with Semantic SHACL Validation},
  author  = {Kumar, Harish},
  year    = {2026},
  institution = {Quantamix Solutions B.V.},
  note    = {European Patent Application EP26162901.8},
  url     = {https://github.com/quantamixsol/cognigraph}
}
```

Contributing

See CONTRIBUTING.md for development setup, testing, and PR guidelines.

License

Apache 2.0 — use it commercially, modify it freely, just keep the attribution.

Download files

Download the file for your platform.

Source Distribution

cognigraph-0.6.4.tar.gz (530.2 kB)

Uploaded Source

Built Distribution


cognigraph-0.6.4-py3-none-any.whl (334.0 kB)

Uploaded Python 3

File details

Details for the file cognigraph-0.6.4.tar.gz.

File metadata

  • Download URL: cognigraph-0.6.4.tar.gz
  • Size: 530.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for cognigraph-0.6.4.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6b0c9299343850065d22a669d40df6f195fd8dd5d8ac42e6fc958341fb36fe30 |
| MD5 | 6b91f93cba446caf363bb1471aeef34f |
| BLAKE2b-256 | feffbb1560f42312903ed5d52112df1b10abcb20573810cbae4eff2876536e9e |


File details

Details for the file cognigraph-0.6.4-py3-none-any.whl.

File metadata

  • Download URL: cognigraph-0.6.4-py3-none-any.whl
  • Size: 334.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for cognigraph-0.6.4-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 16f0c4b8052f672c134d3529feeb8206ea30daa264f3f8b61490c934b5ea17ed |
| MD5 | 66ef08f2029b272caf72359edf12affd |
| BLAKE2b-256 | 616ec86806adc98950414eb3f1292982b2826e2ece10e2d081cedff7bceb10e2 |

