# 🐞 Ladybug Memory MCP Server

Persistent graph memory for AI agents using LadybugDB — an embedded graph database with native vector search and full-text search.

Give your AI agent memory that persists across sessions, deduplicates automatically, and models knowledge as a graph with typed relationships.
## Why Ladybug Memory?

- **Graph memory** — memories linked via Topic nodes and typed relationships (`RELATED_TO`, `SUPERSEDES`), queryable with Cypher
- **Three-layer auto-dedup** — exact hash, semantic similarity, and LLM-driven consolidation
- **Hybrid search** — HNSW vector search combined with full-text search
- **Topic auto-linking** — tags become graph nodes, enabling traversal queries
- **Embedded** — no Docker, no server process, a single database directory
- **Zero config** — sensible defaults; just install and run
- **Importance & access tracking** — memories ranked by relevance and usage
## Quick Start

Run directly with uvx (no install needed):

```bash
uvx ladybug-memory-mcp
```

Or install and run:

```bash
pip install ladybug-memory-mcp
ladybug-memory-mcp
```
## MCP Configuration

Add to your MCP client config (Kiro, Claude Desktop, Cursor, etc.):

```json
{
  "mcpServers": {
    "ladybug-memory": {
      "command": "uvx",
      "args": ["ladybug-memory-mcp@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      }
    }
  }
}
```

That's it — zero config required. All settings have sensible defaults.
## Tools

| Tool | Description |
|---|---|
| `memory_store` | Store a memory with auto-dedup and auto-linking to Topic nodes |
| `memory_search` | Hybrid semantic + keyword search, ranked by relevance |
| `memory_get` | Get the full, untruncated content of a memory by ID |
| `memory_update` | Update the content, importance, or tags of a memory |
| `memory_delete` | Delete a memory and all its relationships |
| `memory_relate` | Create a `RELATED_TO` or `SUPERSEDES` relationship between memories |
| `memory_traverse` | Run read-only Cypher queries against the memory graph |
| `memory_list` | List memories filtered by recency, category, topic, or importance |
| `memory_stats` | Database statistics: counts, categories, topics, relationships |
| `memory_consolidate` | Find clusters of similar memories for review and merging |
## Graph Data Model

```
(:Memory) — content, embedding, category, tags, importance, access_count, timestamps
(:Topic)  — auto-created from tags

(:Memory)-[:ABOUT]->(:Topic)        # memory is about a topic
(:Memory)-[:RELATED_TO]->(:Memory)  # memories are related
(:Memory)-[:SUPERSEDES]->(:Memory)  # newer memory replaces older
```
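As an illustration, the `(:Memory)` node's properties could be modeled in Python roughly like this. The field names come from the list above; the types, the default category, and the importance scale are assumptions, not the server's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Memory:
    """Illustrative shape of a (:Memory) node; types and defaults are assumed."""
    content: str
    embedding: list[float]  # stored as FLOAT[384] in the database
    category: str = "general"  # "general" is the documented default category
    tags: list[str] = field(default_factory=list)
    importance: int = 3  # assumed 1-5 scale (the store example uses importance=4)
    access_count: int = 0
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


m = Memory(content="User prefers Python", embedding=[0.0] * 384, tags=["python"])
```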
## Example: Store and Search

```python
# Store a memory (via MCP tool call)
memory_store(
    content="User prefers Python over Node.js for backend tools",
    category="preference",
    tags=["python", "nodejs", "backend"],
    importance=4,
)

# Search memories
memory_search(query="what language does the user prefer")

# Traverse the graph
memory_traverse(
    cypher_query="MATCH (m:Memory)-[:ABOUT]->(t:Topic {name: 'python'}) RETURN m.content"
)
```
## Example: Graph Relationships

```python
# Link related memories
memory_relate(from_id=5, to_id=3, relationship="RELATED_TO")

# Mark a decision as superseded
memory_relate(from_id=8, to_id=2, relationship="SUPERSEDES")

# Count memories per topic, most-covered topics first
memory_traverse(
    cypher_query="MATCH (m:Memory)-[:ABOUT]->(t:Topic) RETURN t.name, count(m) AS n ORDER BY n DESC"
)
```
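To make the `SUPERSEDES` semantics concrete, here is a small standalone sketch (not the server's code) that follows a chain of supersedes edges to reach the newest version of a memory. The `superseded_by` mapping is a hypothetical client-side structure:

```python
def latest_version(memory_id: int, superseded_by: dict[int, int]) -> int:
    """Walk (:Memory)-[:SUPERSEDES]->(:Memory) edges to the newest memory.

    superseded_by maps an older memory's id to the newer memory that
    supersedes it. The seen-set guards against accidental cycles.
    """
    seen: set[int] = set()
    while memory_id in superseded_by and memory_id not in seen:
        seen.add(memory_id)
        memory_id = superseded_by[memory_id]
    return memory_id


# Memory 8 supersedes 2, and 12 supersedes 8, so 2's latest version is 12
chain = {2: 8, 8: 12}
```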
## Three-Layer Deduplication

Every `memory_store` call runs through three dedup layers:

1. **Exact hash** — SHA-256 of the normalized content. An exact duplicate is rejected and the existing memory's importance is bumped.
2. **Semantic similarity** — if cosine similarity with an existing memory exceeds 0.92, the new memory is merged into it (keeps the longer content, merges tags, bumps importance).
3. **Consolidation** — manual, via `memory_consolidate`. Finds clusters of related memories for LLM-driven review and merging.
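The first two layers can be sketched in plain Python. This is illustrative only — the `normalize` rule and the similarity formula here are assumptions, not the server's implementation:

```python
import hashlib
import math


def normalize(text: str) -> str:
    # Assumed normalization: lowercase and collapse all whitespace
    return " ".join(text.lower().split())


def content_hash(text: str) -> str:
    # Layer 1: exact-duplicate detection via SHA-256 of normalized content
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Layer 2: semantic near-duplicate detection on embedding vectors;
    # values above the 0.92 threshold would trigger a merge
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))
```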
## Categories

| Category | Use for |
|---|---|
| `learning` | Technical knowledge, facts, how things work |
| `preference` | User preferences and choices |
| `decision` | Architecture decisions, tool choices |
| `pattern` | Recurring workflows, conventions |
| `general` | Everything else (default) |
## Configuration

All settings are optional — defaults work out of the box.

| Environment Variable | Default | Description |
|---|---|---|
| `MEMORY_DB_PATH` | `~/.agent-memory/memory.lbug` | LadybugDB database path |
| `MEMORY_DEDUP_THRESHOLD` | `0.92` | Semantic similarity threshold for auto-dedup |
| `MEMORY_EMBEDDING_MODEL` | `BAAI/bge-small-en-v1.5` | FastEmbed model for embeddings |
| `MEMORY_EMBEDDING_DIM` | `384` | Embedding dimension (must match the model) |
| `MEMORY_SEARCH_LIMIT` | `10` | Max results from `memory_search` |
| `MEMORY_LIST_LIMIT` | `20` | Max results from `memory_list` |
| `MEMORY_MAX_CONTENT` | `500` | Content truncation length in search/list results |
| `MEMORY_LATENCY_WARN_MS` | `50` | Log a warning when an operation exceeds this many milliseconds |
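Reading these variables with their documented defaults is straightforward. A minimal sketch of a settings loader (not the server's actual code; only a few of the variables are shown):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Illustrative settings object mirroring the table above."""
    db_path: str
    dedup_threshold: float
    embedding_model: str
    embedding_dim: int
    search_limit: int


def load_settings(env=os.environ) -> Settings:
    # Defaults are taken verbatim from the configuration table
    return Settings(
        db_path=env.get("MEMORY_DB_PATH", os.path.expanduser("~/.agent-memory/memory.lbug")),
        dedup_threshold=float(env.get("MEMORY_DEDUP_THRESHOLD", "0.92")),
        embedding_model=env.get("MEMORY_EMBEDDING_MODEL", "BAAI/bge-small-en-v1.5"),
        embedding_dim=int(env.get("MEMORY_EMBEDDING_DIM", "384")),
        search_limit=int(env.get("MEMORY_SEARCH_LIMIT", "10")),
    )
```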
## In-Memory Mode (Testing)

```json
"env": { "MEMORY_DB_PATH": ":memory:" }
```

All data is ephemeral — lost on restart. Useful for testing.
## Kiro Power

This server is also available as a Kiro Power, with:

- A pre-configured MCP server
- Three hooks for automatic memory persistence and recall
- Steering files with a setup guide and Cypher query examples
## Architecture

```
AI Agent (Kiro, Claude, etc.)
 │
 ├─ memory_store ───→ embed content → dedup check → insert node → link topics
 ├─ memory_search ──→ embed query → HNSW vector search + FTS → rank & return
 ├─ memory_traverse → execute Cypher → return graph results
 │
 └─ LadybugDB (embedded, single directory)
     ├─ Memory nodes (content + FLOAT[384] embeddings)
     ├─ Topic nodes (auto-linked from tags)
     ├─ HNSW vector index (cosine similarity)
     ├─ FTS index (keyword search)
     └─ Graph relationships (ABOUT, RELATED_TO, SUPERSEDES)
```
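The diagram's "rank & return" step combines two ranked lists (vector hits and FTS hits). The README does not specify how; reciprocal rank fusion (RRF) is one common technique, sketched here as an illustration — whether ladybug-memory-mcp actually uses RRF is an assumption:

```python
def reciprocal_rank_fusion(result_lists: list[list[int]], k: int = 60) -> list[int]:
    """Fuse ranked lists of memory ids: each id scores sum(1 / (k + rank)).

    Items ranked highly in several lists rise to the top; k=60 is the
    conventional smoothing constant from the RRF literature.
    """
    scores: dict[int, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


vector_hits = [3, 7, 1]  # ids from HNSW vector search, best first
fts_hits = [7, 2, 3]     # ids from full-text search, best first
fused = reciprocal_rank_fusion([vector_hits, fts_hits])
```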
## Requirements

- Python 3.10+
- Dependencies installed automatically: `real-ladybug`, `fastembed`, `mcp`
- ~130 MB of disk for the embedding model (downloaded on first run)
## Contributing

Issues and PRs welcome.

## License

See LICENSE for terms.