# agent-memory-core
Three-layer persistent memory for AI agents. Give any agent platform long-term recall with semantic search, knowledge graphs, and human-readable markdown — all in one pip install.
## Why?
Most agent memory solutions give you one thing — a vector DB or a key-value store. Real memory needs three layers working together:
```
┌─────────────────────────────────────────────────────┐
│                   Your AI Agent                     │
│         (LangChain / CrewAI / AutoGPT / n8n)        │
└──────────────────────┬──────────────────────────────┘
                       │
              ┌────────▼────────┐
              │  MemoryManager  │  ← Unified API
              └────────┬────────┘
        ┌──────────────┼──────────────┐
        ▼              ▼              ▼
  ┌──────────┐   ┌──────────┐   ┌──────────┐
  │    L1    │   │    L2    │   │    L3    │
  │ Markdown │   │  Vector  │   │  Graph   │
  │          │   │          │   │          │
  │MEMORY.md │   │ ChromaDB │   │ NetworkX │
  │Daily logs│   │ Sentence │   │Knowledge │
  │Reference │   │Transform │   │  Graph   │
  └──────────┘   └──────────┘   └──────────┘
   Human-         Semantic       Relationship
   readable       search         traversal
```
## Quick Start

```bash
pip install agent-memory-core
```
```python
from agent_memory_core import MemoryManager

# Point to any directory — it becomes your memory workspace
mm = MemoryManager("/path/to/workspace")

# Index existing markdown files into vectors + graph
stats = mm.index()
print(f"Indexed {stats['chunks']} chunks, {stats['nodes']} graph nodes")

# Semantic search across all memory
results = mm.search("what architecture decisions were made?")
for r in results["vector_results"]:
    print(f"[{r['metadata']['section']}] {r['content'][:100]}")

# Store new information (auto-indexes)
mm.store("Decided to use PostgreSQL for the main database", to_memory=True)

# Get formatted context for system prompt injection
context = mm.search_formatted("database decisions")
print(context)
```
## Architecture
| Layer | Technology | Purpose | File Location |
|---|---|---|---|
| L1: Markdown | Plain `.md` files | Human-readable curated knowledge | `MEMORY.md`, `memory/*.md`, `reference/*.md` |
| L2: Vector | ChromaDB + all-MiniLM-L6-v2 | Semantic similarity search | `vector_memory/chroma_db/` |
| L3: Graph | NetworkX (directed) | Relationship traversal between concepts | `vector_memory/memory_graph.json` |
All three layers sync together. The indexer parses L1 markdown → generates L2 embeddings → rebuilds L3 graph automatically.
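To make that flow concrete, here is a minimal sketch of one index pass across the three layers. The section-based chunker, the bag-of-words "embedding", and the dict-based graph are toy stand-ins for the package's real markdown parser, ChromaDB + sentence-transformers, and NetworkX; none of the function names below come from the library.

```python
import re

def parse_markdown(text: str) -> list[dict]:
    """L1: split markdown into per-section chunks."""
    chunks, section = [], "intro"
    for line in text.splitlines():
        if line.startswith("#"):
            section = line.lstrip("# ").strip()
        elif line.strip():
            chunks.append({"section": section, "content": line.strip()})
    return chunks

def embed(chunk: dict) -> dict[str, int]:
    """L2: toy bag-of-words vector standing in for a sentence-transformer."""
    words = re.findall(r"\w+", chunk["content"].lower())
    return {w: words.count(w) for w in set(words)}

def build_graph(chunks: list[dict]) -> dict[str, set[str]]:
    """L3: link each section node to the chunks it contains."""
    graph: dict[str, set[str]] = {}
    for c in chunks:
        graph.setdefault(c["section"], set()).add(c["content"])
    return graph

doc = "# Decisions\nUse PostgreSQL for the main database.\n# People\nAlice owns the API."
chunks = parse_markdown(doc)      # L1 → chunks
vectors = [embed(c) for c in chunks]   # chunks → L2
graph = build_graph(chunks)            # chunks → L3
```

The key property mirrored here is that L2 and L3 are always derived from L1, so re-running the pass after a markdown edit keeps all layers consistent.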
## Platform Integrations

### LangChain

```bash
pip install agent-memory-core[langchain]
```
```python
from agent_memory_core.integrations.langchain import MemorySearchTool, MemoryStoreTool

tools = [
    MemorySearchTool(base_dir="/workspace"),
    MemoryStoreTool(base_dir="/workspace"),
]

# Use with any LangChain agent; `create_react_agent` and `llm` come from your
# LangChain/LangGraph setup (e.g. `from langgraph.prebuilt import create_react_agent`)
agent = create_react_agent(llm, tools)
```
### CrewAI

```bash
pip install agent-memory-core[crewai]
```
```python
from crewai import Agent

from agent_memory_core.integrations.crewai import MemorySearchTool, MemoryStoreTool

agent = Agent(
    role="Research Assistant",
    tools=[MemorySearchTool(base_dir="/workspace")],
)
```
### Any Platform (Direct API)
```python
from agent_memory_core import MemoryManager

mm = MemoryManager("/workspace")

# Use in any framework's tool/function calling
def search_memory(query: str) -> str:
    return mm.search_formatted(query, compact=True)

def store_memory(text: str) -> str:
    mm.store(text, to_memory=True)
    return "Stored."
```
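Because the wrappers are plain functions, they plug into any function-calling interface. Below is one way to describe them in the common OpenAI-style `tools` schema and route a model's tool call back to local handlers; the schema dicts and the `dispatch` helper are illustrative, not part of this package.

```python
import json

# Tool descriptions in the OpenAI-style function-calling shape
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_memory",
            "description": "Semantic search over the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "store_memory",
            "description": "Persist a significant fact to long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    },
]

def dispatch(name: str, arguments: str, handlers: dict) -> str:
    """Route a tool call (name + JSON arguments) to the matching local wrapper."""
    return handlers[name](**json.loads(arguments))
```

At runtime you would pass `{"search_memory": search_memory, "store_memory": store_memory}` as the `handlers` map.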
## Features

### Significance Classifier
Automatically classify which interactions are worth remembering:
```python
from agent_memory_core import SignificanceClassifier

# Rule-based (no LLM needed)
classifier = SignificanceClassifier()
is_sig, reason, score = classifier.classify("Decided to migrate to PostgreSQL")
# → (True, "SIGNIFICANT: 3 indicators, 2 high-priority (score: 1.90)", 1.9)

# LLM-powered (bring any provider)
classifier = SignificanceClassifier(llm_fn=lambda prompt: openai.chat(prompt))
```
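The rule-based mode can be pictured as keyword scoring. The sketch below is in that spirit only: the indicator lists, weights, and two-value return shape are illustrative guesses, not the package's actual rules (which also return a reason string).

```python
# Hypothetical indicator sets — high-priority words signal durable decisions
HIGH_PRIORITY = {"decided", "migrated", "deprecated", "launched"}
INDICATORS = HIGH_PRIORITY | {"chose", "fixed", "blocked", "agreed"}

def classify(text: str, threshold: float = 1.0) -> tuple[bool, float]:
    """Score a line by indicator hits; strong signals weigh extra."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = words & INDICATORS
    high = words & HIGH_PRIORITY
    score = 0.5 * len(hits) + 0.7 * len(high)
    return score >= threshold, round(score, 2)

print(classify("Decided to migrate to PostgreSQL"))
```

The appeal of this style is that it needs no LLM call, so it can run on every interaction at negligible cost.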
### Auto-Promotion Pipeline
Automatically promote significant daily log entries to long-term memory:
```python
result = mm.promote(days_back=3, min_confidence=0.7)
print(f"Promoted {result['promotions_made']} entries to MEMORY.md")
```
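Conceptually, a promotion pass scans the recent daily logs, keeps entries a classifier scores above the confidence floor, and appends them to `MEMORY.md`. The sketch below is a hypothetical reimplementation of that loop, not the package's code; `score_fn` stands in for the significance classifier and the file layout follows the documented workspace structure.

```python
from datetime import date, timedelta
from pathlib import Path

def promote(workspace: Path, score_fn, days_back: int = 3,
            min_confidence: float = 0.7) -> int:
    """Append confidently significant daily-log lines to MEMORY.md."""
    promoted = 0
    memory_md = workspace / "MEMORY.md"
    for offset in range(days_back):
        log = workspace / "memory" / f"{date.today() - timedelta(days=offset)}.md"
        if not log.exists():
            continue
        for entry in log.read_text().splitlines():
            if entry.strip() and score_fn(entry) >= min_confidence:
                with memory_md.open("a") as f:   # "a" creates the file if missing
                    f.write(f"- {entry.strip()}\n")
                promoted += 1
    return promoted
```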
### Sync Status
Monitor memory health:
```python
status = mm.sync_status()
# {'memory_md_hash': 'a1b2c3', 'vector_chunks': 42, 'nodes': 156, 'edges': 312, ...}
```
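A content hash like `memory_md_hash` is what lets you detect drift: if the current file hashes differently from the value recorded at last index time, the vector and graph layers are stale. A minimal sketch of that check (the 6-character fingerprint and function names are illustrative assumptions, not the package's internals):

```python
import hashlib

def file_fingerprint(text: str) -> str:
    """Short, stable fingerprint of a file's contents."""
    return hashlib.sha256(text.encode()).hexdigest()[:6]

def is_stale(current_text: str, last_indexed_hash: str) -> bool:
    """True when the file changed since the last index pass."""
    return file_fingerprint(current_text) != last_indexed_hash

snapshot = file_fingerprint("# MEMORY\n- Use PostgreSQL")
assert not is_stale("# MEMORY\n- Use PostgreSQL", snapshot)
assert is_stale("# MEMORY\n- Use SQLite", snapshot)
```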
## Workspace Structure
After setup, your workspace looks like:
```
workspace/
├── MEMORY.md              ← Curated long-term knowledge
├── memory/
│   ├── 2026-02-21.md      ← Daily log (auto-created)
│   └── ...
├── reference/             ← Institutional knowledge (optional)
│   ├── people.md
│   └── infrastructure.md
└── vector_memory/
    ├── chroma_db/         ← Vector database
    └── memory_graph.json  ← Knowledge graph
```
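If you want the skeleton before the first `store()` call creates it, it can be bootstrapped with `pathlib`. The directory and file names follow the documented layout; `init_workspace` itself is an illustrative helper, not a function exported by the package.

```python
from pathlib import Path

def init_workspace(base: Path) -> None:
    """Create the documented workspace skeleton; safe to run repeatedly."""
    (base / "memory").mkdir(parents=True, exist_ok=True)
    (base / "reference").mkdir(exist_ok=True)
    (base / "vector_memory" / "chroma_db").mkdir(parents=True, exist_ok=True)
    memory_md = base / "MEMORY.md"
    if not memory_md.exists():          # never clobber curated knowledge
        memory_md.write_text("# Long-term memory\n")
```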
## API Reference

### `MemoryManager(base_dir, **kwargs)`
| Method | Description |
|---|---|
| `index()` | Parse markdown → index vectors → rebuild graph |
| `search(query, n_results=5)` | Search vectors + graph, return structured results |
| `search_formatted(query, compact=False)` | Search and return markdown-formatted context |
| `store(text, to_memory=False)` | Store to daily log (+ `MEMORY.md`), re-index |
| `promote(days_back=3, min_confidence=0.7)` | Auto-promote significant entries |
| `sync_status()` | Memory health check |
### Individual Stores
Access layers directly when needed:
```python
mm.markdown  # MarkdownStore — file management
mm.vectors   # VectorStore — ChromaDB operations
mm.graph     # GraphStore — NetworkX operations
```
## Configuration
```python
mm = MemoryManager(
    base_dir="/workspace",
    vector_db_subdir="vector_memory/chroma_db",  # ChromaDB location
    graph_filename="memory_graph.json",          # Graph file name
    model_name="all-MiniLM-L6-v2",               # Embedding model
    llm_fn=my_llm_callable,                      # Optional LLM for classifier
)
```
## Development

```bash
git clone https://github.com/Jakebot-ops/agent-memory-core.git
cd agent-memory-core
pip install -e ".[dev]"
pytest
```
## License
MIT — use it however you want.
## Credits
Born from the agent-memory-guide — a practical guide to building persistent memory for AI agents. This package extracts the three-layer architecture into a standalone, pip-installable library.