CogniGraph
Dev Intelligence Layer — Graphs That Think
Turn any codebase into a reasoning-ready knowledge graph.
One command. Any IDE. Any AI tool. Zero cloud infrastructure.
What if your development environment understood your entire codebase — and kept learning?
CogniGraph transforms any codebase into a knowledge graph where every module, service, and config is a node backed by an autonomous LLM agent. Query it from any IDE, any AI tool, or plain terminal. One `pip install`, one `kogni init`, and your dev environment becomes intelligent.
Quick Start
```bash
pip install cognigraph[api]
cd your-project
kogni init
```
That's it. CogniGraph scans your repo, builds a knowledge graph, and configures your IDE. Works with:
| IDE / Tool | Integration | Command |
|---|---|---|
| Claude Code | MCP server + CLAUDE.md | kogni init (auto-detected) |
| Cursor | MCP server + .cursorrules | kogni init --ide cursor |
| VS Code + Copilot | MCP server + copilot-instructions | kogni init --ide vscode |
| Windsurf | MCP server + .windsurfrules | kogni init --ide windsurf |
| Codex / Replit / JetBrains | CLI + Python SDK | kogni init --ide generic |
| Plain terminal | Full CLI | kogni init --ide generic |
| CI/CD pipelines | Python SDK | pip install cognigraph |
No cloud account. No infrastructure. Your machine, your API keys, your data.
What You Get
CLI (any terminal, any IDE)
```bash
kogni run "What depends on the auth service?"  # Graph reasoning
kogni context auth-lambda                      # 500-token focused context
kogni inspect --stats                          # Graph statistics
kogni scan repo .                              # Rebuild knowledge graph
kogni rebuild                                  # Rebuild chunks from source files
kogni rebuild --force                          # Force re-read ALL source files
kogni doctor                                   # Health check
kogni setup-guide                              # Backend setup help
kogni register                                 # Register for updates (optional)
kogni activate <key>                           # Activate team/enterprise license
kogni billing                                  # View tier & usage
```
Python SDK (any Python environment)
```python
from cognigraph import CogniGraph

# Load graph — auto-creates backend from cognigraph.yaml config
graph = CogniGraph.from_json("cognigraph.json", config="cognigraph.yaml")

result = graph.reason("How does GDPR conflict with the AI Act?")
print(result.answer)                    # Multi-agent synthesized answer
print(f"Cost: ${result.cost_usd:.4f}")  # Transparent cost tracking

# Rebuild chunks from source files (e.g., after code changes)
graph.rebuild_chunks(force=True)
```
REST API (any HTTP client — Copilot, Postman, custom tools, bots)
```bash
# Start the server
kogni serve   # localhost:8000

# Query from anything that speaks HTTP
curl -X POST http://localhost:8000/reason \
  -H "Content-Type: application/json" \
  -d '{"query": "What depends on the auth service?"}'
```
```json
{
  "answer": "The auth service is depended on by...",
  "confidence": 0.87,
  "cost_usd": 0.0023,
  "latency_ms": 1250.5
}
```
Endpoints: `/reason` (single query), `/reason/batch` (up to 50), `/graph/stats`, `/nodes/{id}`, `/health`
Auth: API key via `X-API-Key` header or Bearer token
Docs: Interactive Swagger UI at http://localhost:8000/docs
Full reference: docs/api-reference.md
MCP Tools (Claude Code, Cursor, VS Code, Windsurf)
| Tool | Purpose |
|---|---|
| `kogni_context` | 500-token focused context (replaces 20-60K file reads) |
| `kogni_reason` | Multi-agent graph reasoning |
| `kogni_inspect` | Graph structure inspection |
| `kogni_preflight` | Pre-change safety check |
| `kogni_impact` | "What breaks if I change X?" |
| `kogni_lessons` | Surface past mistakes before you repeat them |
| `kogni_learn` | Teach the graph new knowledge |
How It Works
```
Your Codebase ──→ kogni init ──→ Knowledge Graph (cognigraph.json)
                                         │
                   ┌──────────┬──────────┼──────────┬──────────┐
                   ▼          ▼          ▼          ▼          ▼
                  CLI      REST API    Python      MCP       Direct
               (terminal)   (HTTP)      SDK       Server   JSON read
                   │          │          │          │          │
                   ▼          ▼          ▼          ▼          ▼
                Any IDE    Any tool   Scripts    Claude     Custom
                terminal   Copilot    CI/CD      Cursor     parsers
                           Postman    Jupyter    VS Code
                           Slack bots Replit     Windsurf
```
The knowledge graph is the product. Once built, query it however you want:
| Access Method | Use When | Example |
|---|---|---|
| `kogni run` | Quick terminal query | `kogni run "what calls payments?"` |
| `kogni serve` | Any HTTP client needs access | `curl localhost:8000/reason` |
| Python SDK | Scripts, notebooks, pipelines | `graph.reason("query")` |
| MCP Server | AI-powered IDE with MCP support | Auto-available after `kogni init` |
| Read JSON | Custom integration, any language | Parse `cognigraph.json` directly |
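The "Read JSON" row needs no SDK at all. As an illustrative sketch (the actual `cognigraph.json` schema is not documented here, so the `nodes`/`edges` keys below are assumptions), any language can walk the graph directly:

```python
import json

# Hypothetical cognigraph.json structure -- the real schema may differ.
raw = json.loads("""
{
  "nodes": [
    {"id": "auth-service", "type": "service", "chunks": 4},
    {"id": "payments", "type": "service", "chunks": 7}
  ],
  "edges": [
    {"source": "payments", "target": "auth-service", "relation": "depends_on"}
  ]
}
""")

def dependents(graph: dict, node_id: str) -> list[str]:
    """Find everything that depends on a given node -- no SDK required."""
    return [e["source"] for e in graph["edges"]
            if e["target"] == node_id and e["relation"] == "depends_on"]

print(dependents(raw, "auth-service"))  # ['payments']
```

The same traversal works from Go, Rust, or a shell script with `jq` — the graph file is the integration surface.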
Model-agnostic. Use free local models (Ollama), cloud APIs (Anthropic, OpenAI), or enterprise backends (AWS Bedrock). Smart routing sends complex queries to capable models and simple ones to cheap models, all within your cost budget.
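The routing idea can be sketched as follows. This is a toy illustration, not CogniGraph's actual router: the tier names, prices, and the scalar complexity score are all assumptions.

```python
# Cheapest-first tier table (hypothetical models and prices).
TIERS = [
    {"model": "local-ollama", "cost_per_query": 0.0,   "max_complexity": 0.3},
    {"model": "haiku-class",  "cost_per_query": 0.002, "max_complexity": 0.7},
    {"model": "sonnet-class", "cost_per_query": 0.02,  "max_complexity": 1.0},
]

def route(complexity: float, budget_remaining: float) -> str:
    """Pick the cheapest tier capable of the query, within budget."""
    for tier in TIERS:  # ordered cheapest-first
        if complexity <= tier["max_complexity"] and tier["cost_per_query"] <= budget_remaining:
            return tier["model"]
    return TIERS[0]["model"]  # fall back to the free local model

print(route(0.2, 1.0))    # simple query  -> local-ollama
print(route(0.9, 1.0))    # complex query -> sonnet-class
print(route(0.9, 0.001))  # complex but over budget -> local-ollama
```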
13 Innovations (Patent EP26162901.8)
| # | Innovation | What it does |
|---|---|---|
| 1 | PCST Activation | Sublinear subgraph selection — only wake relevant nodes |
| 2 | MasterObserver | Zero-cost transparency layer for reasoning traces |
| 3 | Convergent Message Passing | Agents talk until they agree, then stop |
| 4 | Backend Fallback Chain | Auto-fallback across models with cost budgets |
| 5 | Hierarchical Aggregation | Topology-aware answer synthesis |
| 6 | SemanticSHACLGate | 3-layer OWL-aware governance validation |
| 7 | Constrained F1 | Joint quality + governance evaluation metric |
| 8 | OntologyGenerator | Auto-generate OWL+SHACL from documents |
| 9 | Adaptive Activation | Dynamic node selection from query complexity |
| 10 | Online Graph Learning | Bayesian edge weight updates from usage |
| 11 | LoRA Auto-Selection | Per-entity adapter matching |
| 12 | TAMR+ Connector | Retrieval-to-reasoning pipeline |
| 13 | Multi-Resolution Embeddings | Hybrid skill matching (regex + semantic) |
All 13 innovations are free for every developer. No license key required.
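To make one of these concrete: innovation 10 (Online Graph Learning) updates edge weights from usage. A minimal Bayesian-style sketch, assuming a Beta-distributed edge belief (the published materials do not specify the actual update rule, so this is illustrative only):

```python
from dataclasses import dataclass

@dataclass
class EdgeBelief:
    """Beta-distributed belief that an edge is useful for reasoning.
    Illustrative sketch -- CogniGraph's real update rule may differ."""
    alpha: float = 1.0  # pseudo-count of times the edge helped
    beta: float = 1.0   # pseudo-count of times it did not

    def observe(self, helped: bool) -> None:
        if helped:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def weight(self) -> float:
        return self.alpha / (self.alpha + self.beta)  # posterior mean

edge = EdgeBelief()
for outcome in (True, True, True, False):  # 3 useful traversals, 1 dead end
    edge.observe(outcome)
print(round(edge.weight, 2))  # 4 / (4 + 2) ≈ 0.67
```

The appeal of a conjugate update like this is that each query leaves a cheap, incremental trace: no retraining, just two counters per edge.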
Backends
| Backend | Models | Cost | Install |
|---|---|---|---|
| Ollama | Any local model (Qwen, Llama, etc.) | $0 (local) | pip install cognigraph[api] |
| Anthropic | Claude Haiku / Sonnet / Opus | $5 free credits | pip install cognigraph[api] |
| OpenAI | GPT-4o / GPT-4o-mini | $5 free credits | pip install cognigraph[api] |
| AWS Bedrock | Claude, Titan, Llama, Mistral | AWS Free Tier | pip install cognigraph[api] |
| vLLM | GPU inference + LoRA | $0 (your GPU) | pip install cognigraph[gpu] |
| llama.cpp | CPU GGUF models | $0 (your CPU) | pip install cognigraph[cpu] |
```bash
kogni setup-guide            # See all options with setup steps
kogni setup-guide ollama     # Free, local, no API key needed
kogni setup-guide anthropic  # Best quality, $5 free credits
kogni doctor                 # Verify everything works
```
Pricing — 100% Free for Developers
CogniGraph follows the open-core model: everything a solo developer needs is free forever. We monetize team and enterprise collaboration features.
| Community (Free) | Team | Enterprise | |
|---|---|---|---|
| Price | $0 forever | $29/dev/month | Custom |
| All 13 innovations | ✓ | ✓ | ✓ |
| All MCP tools (7 tools) | ✓ | ✓ | ✓ |
| All backends (Ollama, Anthropic, OpenAI, Bedrock, vLLM) | ✓ | ✓ | ✓ |
| CLI + Python SDK + REST API | ✓ | ✓ | ✓ |
| Unlimited queries | ✓ | ✓ | ✓ |
| Auto-growing knowledge graph | ✓ | ✓ | ✓ |
| Session continuity workspace | ✓ | ✓ | ✓ |
| SemanticSHACL governance | ✓ | ✓ | ✓ |
| Multi-IDE support | ✓ | ✓ | ✓ |
| Commercial use | ✓ | ✓ | ✓ |
| Shared KG sync across team | — | ✓ | ✓ |
| Multi-developer coordination | — | ✓ | ✓ |
| Team analytics & insights | — | ✓ | ✓ |
| Custom ontologies | — | ✓ | ✓ |
| Private deployment | — | — | ✓ |
| Compliance & audit trail | — | — | ✓ |
| SLA support | — | — | ✓ |
Why free? We believe every developer deserves intelligent tooling regardless of budget. The innovations that save you tokens and time should not be behind a paywall. Teams pay for collaboration — individuals never pay.
Benchmarks
| Metric | CogniGraph | Single-Agent Baseline | Improvement |
|---|---|---|---|
| Constrained F1 | 0.757 | 0.328 | +131% |
| Governance Accuracy | 99.7% | N/A | — |
| Token Efficiency | 500 tokens/query | 20-60K tokens | 40-120x |
Governance
The SemanticSHACLGate enforces 3-layer semantic validation on every reasoning output:
- Framework Fidelity — agents cite correct regulatory frameworks
- Scope Boundary — responses stay within assigned domain
- Cross-Reference Integrity — proper attribution across domains
MultiGov-30 benchmark: 99.7% governance accuracy (FF: 100%, SB: 100%, CR: 98.3%).
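As a toy illustration of the three layers (the real SemanticSHACLGate validates against OWL ontologies and SHACL shapes; the dictionary checks below are stand-ins for that machinery):

```python
def governance_gate(response: dict, assigned_domain: str,
                    known_frameworks: set[str]) -> dict:
    """Toy 3-layer check mirroring the gate's structure, not its SHACL logic."""
    checks = {
        # Layer 1 -- Framework Fidelity: cited frameworks must be recognized
        "framework_fidelity": set(response["cited_frameworks"]) <= known_frameworks,
        # Layer 2 -- Scope Boundary: response stays within its assigned domain
        "scope_boundary": response["domain"] == assigned_domain,
        # Layer 3 -- Cross-Reference Integrity: cross-domain claims carry attribution
        "cross_reference_integrity": all(ref.get("attribution")
                                         for ref in response["cross_refs"]),
    }
    return {"passed": all(checks.values()), "checks": checks}

result = governance_gate(
    {"cited_frameworks": ["GDPR"], "domain": "privacy",
     "cross_refs": [{"claim": "AI Act overlap", "attribution": "ai-act-node"}]},
    assigned_domain="privacy",
    known_frameworks={"GDPR", "AI Act"},
)
print(result["passed"])  # True
```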
Patent & IP Notice
CogniGraph implements methods described in European Patent Application EP26162901.8 (filed 6 March 2026, Quantamix Solutions B.V.). See NOTICE for details.
All 13 innovations are free to use under Apache 2.0. The patent protects the specific methods — you can use CogniGraph freely in any project, commercial or otherwise.
What's New in v0.10.0
ChunkScorer replaces PCST as default activation — The #1 blocker (Bug 1, P0) is fixed:
- ChunkScorer (new default): Each chunk gets its own embedding and is scored independently against the query. A query about "ProductList function" directly matches the chunk containing that function, regardless of what else the file contains. No more activating `tailwind.config.ts` instead of `Products.tsx`.
- PCST demoted to legacy: Still available via `strategy: "pcst"` in config, but no longer the default. PCST's graph-structure bias toward hub nodes was fundamentally wrong for code search.
- Multiple nodes activated: ChunkScorer returns all nodes above the `min_score` threshold (configurable), not just one. Message passing between agents actually works now.
- Bug 7 fix: Bedrock auth detection now uses `boto3.Session().get_credentials()`, so it works with IAM profiles, SSO, env vars, and `~/.aws/credentials`.
- Bug 9 fix: Control-character escaping added to the JSON repair chain (LLMs produce literal newlines in strings).
- Bug 19 fix: `ReasoningResult.content` backward-compat property added (alias for `.answer`).
- Default strategy changed: `activation.strategy` defaults to `"chunk"` (was `"pcst"`).

9 new tests for ChunkScorer. 745 tests passing (up from 736).
What's New in v0.9.0
Neo4j Backend + Critical Bug Fixes — CogniGraph now supports Neo4j as a first-class backend alongside JSON/NetworkX:
- Neo4j backend: `CogniGraph.from_neo4j()` / `to_neo4j()` for loading and exporting graphs
- CypherActivation: Vector search on chunk embeddings via Cypher replaces PCST for Neo4j mode, giving faster and more accurate node activation
- Schema management: `create_schema()` creates constraints plus a vector index on `:Chunk` nodes
- Chunk-level storage: `:CogniNode` → `:HAS_CHUNK` → `:Chunk` with optional embeddings
- Bug 1 (P0) fix: Chunk-aware scoring now uses 500 chars from the top 5 chunks with function/class prioritization (was 200 chars from 3 chunks)
- Bug 18 fix: Confidence calibration now uses relevance-weighted scoring instead of simple averaging
- Bug 7 fix: Bedrock `api_key_env` corrected to `AWS_ACCESS_KEY_ID`
- Bug 9 fix: JSON repair now strips comments before fixing quotes/commas
- Bug 14 fix: `out/` directory added to scan skip list
- Bug 16 fix: SkillAdmin embedding log messages no longer repeat per query
- Bug 17 fix: `kogni doctor` checks both `kogni` and `cognigraph` MCP keys
37 new tests (8 chunk scoring + 5 confidence calibration + 13 Neo4j connector + 7 CypherActivation + 4 graph Neo4j). 736 tests passing (up from 699).
What's New in v0.8.0
Context-Aware Query Reformulator (ADR-104) — Queries are now automatically enhanced with conversation context before PCST activation:
- Auto-hardened in Claude Code / Cursor / Codex (zero extra cost — uses existing conversation context)
- Pronoun resolution: "what does this do?" → resolves "this" from chat history
- Attachment support: screenshots, error logs, diagrams are described and woven into queries
- File + symbol injection: current file and active symbols ground vague queries
- LLM mode for standalone SDK users (configurable, optional)
- Fail-open: if reformulation fails, original query passes through unchanged
49 new tests for query reformulation. 699 tests passing (up from 650).
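The fail-open behaviour above can be sketched as a wrapper. The function names, the toy pronoun-resolution rule, and the "last mentioned symbol" heuristic are all illustrative, not CogniGraph's actual API:

```python
def reformulate_with_context(query: str, chat_history: list[str], llm=None) -> str:
    """Enrich a vague query with conversation context; on ANY failure,
    fall back to the original query (fail-open). Illustrative sketch."""
    try:
        if llm is not None:
            return llm(query, chat_history)  # optional LLM mode
        # Heuristic mode: resolve a leading pronoun to the last
        # function-like symbol mentioned in the chat history.
        if query.lower().startswith(("what does this", "what does it")):
            last_symbol = next(w for m in reversed(chat_history)
                               for w in m.split() if "(" in w)
            return (query.replace("this", last_symbol, 1)
                         .replace(" it ", f" {last_symbol} ", 1))
        return query
    except Exception:
        return query  # fail-open: never block the pipeline

history = ["we were looking at parse_config() in loader.py"]
print(reformulate_with_context("what does this do?", history))
# -> "what does parse_config() do?"
```

The key design point is the bare `except`: a reformulation failure degrades to the user's original words instead of an error.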
What's New in v0.7.9
Content-Aware PCST Activation (ADR-103) — 3-layer fix ensures PCST always selects content-bearing nodes over empty structural connectors (directories, namespaces):
- Layer 1: `log₂(2 + chunk_count)` content-richness multiplier in relevance scoring
- Layer 2: Post-PCST filter replaces zero-chunk nodes with content-bearing neighbours
- Layer 3: Direct file lookup bypass when the query mentions a specific filename

6 Bug Fixes:
- Bedrock config writes `region` instead of `api_key` (P2)
- `kogni grow --full` respects SKIP_DIRS exclusions (P2)
- `kogni doctor` detects MCP registration for all IDEs (P2)
- `kogni init` prompts before overwriting `cognigraph.yaml` (P3)
- SkillAdmin duplicate logging prevented (P3)
33 new tests for content-aware PCST activation. 650 tests passing (up from 617).
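Layer 1's `log₂(2 + chunk_count)` multiplier is simple enough to state directly; a sketch (the surrounding relevance-scoring pipeline is assumed):

```python
import math

def content_richness(chunk_count: int) -> float:
    """log2(2 + chunk_count): empty structural nodes (directories,
    namespaces) get multiplier 1.0, so they are never boosted;
    content-bearing nodes scale up logarithmically."""
    return math.log2(2 + chunk_count)

print(content_richness(0))   # 1.0 -- empty connector, no boost
print(content_richness(6))   # 3.0 -- content-bearing file, 3x boost
print(content_richness(14))  # 4.0 -- diminishing returns for huge files
```

The `+ 2` offset is what guarantees the zero-chunk case lands exactly at 1.0 rather than at 0 or negative infinity.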
What's New in v0.7.7
Chunk Pipeline (breaking fix) — Every node now auto-loads evidence chunks from source files at graph load time. Hand-built KGs that previously had zero chunks now get full evidence for reasoning. New `kogni rebuild` command and `graph.rebuild_chunks()` API.
13 Bug Fixes — All issues from end-to-end testing resolved:
- Agents no longer refuse queries with "outside my domain" (P0)
- REST API Pydantic forward reference crash fixed (P0)
- Server auto-creates real backend from config instead of MockBackend (P0)
- `from_json()` accepts config path as string (P1)
- Auto-backend creation when no backend set (P1)
- Bedrock cross-region inference profile guidance (P2)
- Metrics no longer double-count token savings (P2)
- JSON repair for LLM ontology generation (5 strategies) (P2)
- NetworkX FutureWarning suppressed (P3)
- MCP server reports correct version (P3)
Lead Generation — `kogni register`, `kogni activate`, and `kogni billing` commands. Stripe webhook handler for automated license delivery.
617 tests passing (up from 554).
Citation
```bibtex
@article{kumar2026cognigraph,
  title       = {CogniGraph: Governed Intelligence through Graph-of-Agents Reasoning
                 over Knowledge Graph Topologies with Semantic SHACL Validation},
  author      = {Kumar, Harish},
  year        = {2026},
  institution = {Quantamix Solutions B.V.},
  note        = {European Patent Application EP26162901.8},
  url         = {https://github.com/quantamixsol/cognigraph}
}
```
Contributing
See CONTRIBUTING.md for development setup, testing, and PR guidelines.
License
Apache 2.0 — use it commercially, modify it freely, just keep the attribution.