
YantrikDB MCP Server

Cognitive memory for AI agents. Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client.

Website: yantrikdb.com · Docs: yantrikdb.com/guides/mcp · GitHub: yantrikos/yantrikdb-mcp

Install

pip install yantrikdb-mcp

Configure

The MCP server has three deployment modes. Pick the one that fits your setup.

Mode 1 — Local (default, recommended for single user)

The MCP server runs the engine in-process with a local SQLite database. Fast, private, zero dependencies.

{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp"
    }
  }
}

That's it. The agent auto-recalls context, auto-remembers decisions, and auto-detects contradictions — no prompting needed.
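If you'd rather script the setup, a small helper like the sketch below can merge the entry into an existing client config without clobbering other servers. The helper name and the config path are illustrative assumptions — check your MCP client's docs for its actual config location.

```python
import json
from pathlib import Path

def add_yantrikdb_server(config_path: Path) -> dict:
    """Merge the yantrikdb entry into an MCP client config file.

    Creates the file (and the mcpServers key) if missing; leaves any
    other configured servers untouched.
    """
    config = {}
    if config_path.exists():
        config = json.loads(config_path.read_text())
    config.setdefault("mcpServers", {})["yantrikdb"] = {"command": "yantrikdb-mcp"}
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Example (path is an assumption -- varies by client):
# add_yantrikdb_server(Path.home() / ".claude.json")
```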

Mode 2 — HTTP Cluster (recommended for shared/multi-machine setups)

Forward all tool calls to a YantrikDB HTTP cluster instead of using an embedded engine. The MCP server is a thin stateless client — all memories live on the cluster, accessible from any machine.

Benefits: shared memory across machines, high availability, no local embedder download, no local database.

{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp",
      "env": {
        "YANTRIKDB_SERVER_URL": "http://node1:7438,http://node2:7438",
        "YANTRIKDB_TOKEN": "ydb_your_database_token"
      }
    }
  }
}
  • Comma-separate multiple nodes for Raft cluster auto-discovery
  • Automatic leader-following on failover
  • 15s request timeout
  • Get the token from the cluster: yantrikdb token create --db your_database
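The comma-separated URL format can be sketched like this. This is only an illustration of how a client might parse the node list; the actual discovery and leader-following logic lives inside the MCP server.

```python
import os

def cluster_nodes(default_timeout: float = 15.0) -> tuple[list[str], float]:
    """Parse YANTRIKDB_SERVER_URL into an ordered list of node URLs.

    A client can try nodes in order and, on failure or a leader
    redirect, move on to the next one. The 15s default mirrors the
    request timeout documented above.
    """
    raw = os.environ.get("YANTRIKDB_SERVER_URL", "")
    nodes = [url.strip().rstrip("/") for url in raw.split(",") if url.strip()]
    return nodes, default_timeout
```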

Mode 3 — SSE Server (legacy, single remote instance)

Run the MCP server itself as a long-running SSE server with its own embedded database. Clients connect via HTTP streaming.

# Generate a secure API key
export YANTRIKDB_API_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")

# Start SSE server
yantrikdb-mcp --transport sse --port 8420
{
  "mcpServers": {
    "yantrikdb": {
      "type": "sse",
      "url": "http://your-server:8420/sse",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}

Supports sse and streamable-http transports. Note: SSE connections can drop on idle — Mode 2 (HTTP Cluster) is more reliable for shared deployments.
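The shell one-liner above, plus the header a client must send, can be sketched in Python (the helper names are illustrative, not part of the package):

```python
import secrets

def make_api_key() -> str:
    """Generate an API key like the shell one-liner above
    (secrets.token_urlsafe(32) -> 43 URL-safe characters)."""
    return secrets.token_urlsafe(32)

def auth_headers(api_key: str) -> dict[str, str]:
    """The Authorization header shown in the JSON config above."""
    return {"Authorization": f"Bearer {api_key}"}
```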

Environment Variables

Variable                   Used in Mode   Default                  Description
YANTRIKDB_SERVER_URL       Cluster        (unset → local mode)     Comma-separated cluster node URLs
YANTRIKDB_TOKEN            Cluster        (none)                   Bearer token for the cluster database
YANTRIKDB_DB_PATH          Local          ~/.yantrikdb/memory.db   Database file path
YANTRIKDB_EMBEDDING_MODEL  Local          all-MiniLM-L6-v2         Sentence transformer model
YANTRIKDB_EMBEDDING_DIM    Local          384                      Embedding dimension
YANTRIKDB_API_KEY          SSE server     (none)                   Bearer token when serving SSE/HTTP
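How mode selection follows from the table — a set YANTRIKDB_SERVER_URL means cluster mode, otherwise local — sketched as a hedged illustration (function names are not part of the package):

```python
from pathlib import Path

def resolve_mode(env: dict[str, str]) -> str:
    """Cluster mode when YANTRIKDB_SERVER_URL is set, else local."""
    return "cluster" if env.get("YANTRIKDB_SERVER_URL") else "local"

def local_db_path(env: dict[str, str]) -> Path:
    """Default database location, overridable via YANTRIKDB_DB_PATH."""
    return Path(env.get("YANTRIKDB_DB_PATH") or Path.home() / ".yantrikdb" / "memory.db")
```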

Why Not File-Based Memory?

File-based memory (CLAUDE.md, memory files) loads everything into context every conversation. YantrikDB recalls only what's relevant.

Benchmark: 15 queries × 4 scales

Memories   File-Based       YantrikDB   Savings   Precision
100        1,770 tokens     69 tokens   96%       66%
500        9,807 tokens     72 tokens   99.3%     77%
1,000      19,988 tokens    72 tokens   99.6%     84%
5,000      101,739 tokens   53 tokens   99.9%     88%

Selective recall is O(1). File-based memory is O(n).

  • At 500 memories, file-based exceeds 32K context windows
  • At 5,000, it doesn't fit in any context window — not even 200K
  • YantrikDB stays at ~70 tokens per query, under 60ms latency
  • Precision improves with more data — the opposite of context stuffing
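The savings column follows directly from the token counts in the table; this snippet recomputes it:

```python
# Recompute the "Savings" column from the benchmark table above.
rows = {
    100: (1_770, 69),
    500: (9_807, 72),
    1_000: (19_988, 72),
    5_000: (101_739, 53),
}

for memories, (file_based, yantrik) in rows.items():
    savings = 1 - yantrik / file_based
    print(f"{memories:>5} memories: {savings:.1%} fewer tokens")
```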

Run the benchmark yourself: python benchmarks/bench_token_savings.py

Tools

15 tools, full engine coverage:

  • remember (single / batch): Store memories — decisions, preferences, facts, corrections
  • recall (search / refine / feedback): Semantic search, refinement, and retrieval feedback
  • forget (single / batch): Tombstone memories
  • correct: Fix an incorrect memory (preserves history)
  • think: Consolidation + conflict detection + pattern mining
  • memory (get / list / search / update_importance / archive / hydrate): Manage individual memories + keyword search
  • graph (relate / edges / link / search / profile / depth): Knowledge graph operations
  • conflict (list / get / resolve / reclassify): Handle contradictions and teach substitution patterns
  • trigger (pending / history / acknowledge / deliver / act / dismiss): Proactive insights and warnings
  • session (start / end / history / active / abandon_stale): Session lifecycle management
  • temporal (stale / upcoming): Time-based memory queries
  • procedure (learn / surface / reinforce): Procedural memory — learn and reuse strategies
  • category (list / members / learn / reset): Substitution categories for conflict detection
  • personality (get / set): AI personality traits derived from memory patterns
  • stats (stats / health / weights / maintenance): Engine stats, health, weights, and index rebuilds
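Under the hood, every tool is invoked via the standard MCP tools/call JSON-RPC method. A hedged sketch of the wire shape (generic MCP, not YantrikDB-specific; the argument names come from the examples later in this README):

```python
import json

def tools_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a generic MCP tools/call JSON-RPC request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

payload = tools_call("remember", {
    "text": "Decided to use PostgreSQL for the new service",
    "domain": "architecture",
    "importance": 0.8,
})
```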

See yantrikdb.com/guides/mcp for full documentation.

Examples

1. Auto-recall at conversation start

User: "What did we decide about the database migration?"

The agent automatically calls recall("database migration decision") and retrieves relevant memories before responding — no manual prompting needed.

2. Remember decisions + build knowledge graph

User: "We're going with PostgreSQL for the new service. Alice will own the migration."

The agent calls:

  • remember(text="Decided to use PostgreSQL for the new service", domain="architecture", importance=0.8)
  • remember(text="Alice owns the PostgreSQL migration", domain="people", importance=0.7)
  • graph(action="relate", entity="Alice", target="PostgreSQL Migration", relationship="owns")

3. Contradiction detection

After storing "We use Python 3.11" and later "We upgraded to Python 3.12", calling think() detects the conflict. The agent surfaces it:

"I found a contradiction: you previously said Python 3.11, but recently mentioned Python 3.12. Which is current?"

Then resolves with conflict(action="resolve", conflict_id="...", strategy="keep_b").

Privacy Policy

YantrikDB MCP Server stores all data locally on your machine (default: ~/.yantrikdb/memory.db). No data is sent to external servers, no telemetry is collected, and no third-party services are contacted during operation.

  • Data collection: Only what you explicitly store via the remember tool or what the AI agent stores on your behalf.
  • Data storage: Local SQLite database on your filesystem. You control the path via YANTRIKDB_DB_PATH.
  • Third-party sharing: None. Data never leaves your machine in local (stdio) mode.
  • Network mode: When using SSE/HTTP transport, data travels between your client and your self-hosted server. No Anthropic or third-party servers are involved.
  • Embedding model: Uses a local ONNX model (all-MiniLM-L6-v2). Model files are downloaded once from Hugging Face Hub on first use, then cached locally.
  • Retention: Data persists until you delete it (forget tool) or delete the database file.
  • Contact: developer@pranab.co.in

Full policy: yantrikdb.com/privacy

Contributing

See CONTRIBUTING.md for setting up a venv, running pytest, and opening PRs.

License

This MCP server is licensed under MIT — use it freely in any project.

Note: This package depends on yantrikdb (the cognitive memory engine), which is licensed under AGPL-3.0. The AGPL applies to the engine itself — if you modify the engine and distribute it or provide it as a network service, those modifications must also be AGPL-3.0. Using the engine as-is via this MCP server does not trigger AGPL obligations on your code.
