# aidiary

A portable memory system for AI coding assistants, with an MCP server and knowledge-graph support — remember conventions, track anti-patterns, and build institutional knowledge that persists across sessions.
## What it does
aidiary gives your AI assistant a structured, persistent memory. Instead of starting every session cold, the assistant recalls your coding conventions, past mistakes, and project-specific knowledge. Entries are scored by confidence and verification count, stale knowledge gets flagged, and contradictions are detected automatically.
Works as an MCP server — any MCP-compatible tool (VS Code Copilot Chat, Claude Code, Cursor, etc.) can read and write memories through a standard protocol.
## Install
Requires: Python 3.12+
```shell
pip install aidiary[mcp]
```
## Quick start
```shell
# Scaffold a new project with starter memory files
aidiary-init ./my-project

# Start the MCP server
memory-server --memories ./my-project/memories --output ./my-project/output
```
Or add to your editor's MCP config:
```json
{
  "servers": {
    "memory": {
      "type": "stdio",
      "command": "memory-server",
      "args": ["--memories", "/path/to/memories", "--output", "/path/to/output"]
    }
  }
}
```
## Directory layout
aidiary uses two directories:
```
my-project/
├── memories/                # memory files (markdown only)
│   ├── conventions.md
│   ├── workflow.md
│   ├── anti-patterns.md
│   └── ...
└── output/                  # generated artifacts (dashboard, etc.)
    └── memory-health.html
```
> ⚠️ **Warning:** Do not place non-markdown files in `memories/`. This directory is designed to be used as input for semantic graph generation (e.g. graphifyy). Foreign files (HTML, JSON, cache) will pollute the graph. All generated output goes to the `--output` directory.
> ⚠️ **Backup your memories.** Memory files are written directly — there is no journaling or atomic write protection. A crash or disk-full event during a write could leave a file in a partial state. Use version control (`git`) or periodic backups to protect your `memories/` directory.
## CLI flags and environment variables
| Flag | Env var | Default | Purpose |
|---|---|---|---|
| `--memories`, `-m` | `COPILOT_MEMORY_DIR` | `./memories` | Markdown memory files |
| `--output`, `-o` | `COPILOT_MEMORY_OUTPUT_DIR` | `./output` | Dashboard HTML and generated artifacts |
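Both directories can also be set through the environment variables listed above — a usage sketch (the paths are examples; precedence between flags and environment when both are set is not specified here):

```shell
# Configure the server via environment variables instead of flags
export COPILOT_MEMORY_DIR=./my-project/memories
export COPILOT_MEMORY_OUTPUT_DIR=./my-project/output
memory-server
```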
## What you get

- **15 MCP tools** — `recall`, `remember`, `update`, `stage`, `review_staged`, `record_mistake`, `archive`, `restore`, `consolidate`, `briefing`, `reflect`, `health_report`, `dashboard`, `transfer_report`, `export_universal`
- **Session briefing** — top conventions, recent anti-patterns, health warnings, and quick stats at session start
- **Staging pipeline** — new entries go through validation, overlap detection, and contradiction checks before committing
- **Memory health dashboard** — single-file HTML with summary cards, sortable tables, and distribution charts. Auto-generated to `output/memory-health.html` at session start (`briefing`) and session end (`reflect`); open it in any browser, or trigger it manually via the `dashboard` MCP tool
- **Anti-pattern tracking** — episodic memory for mistakes with correction links, so the same error isn't repeated
- **Cross-project transfer** — entries are tagged universal or project-scoped; export universal knowledge for reuse across workspaces
- **Session-end reflection** — summarizes what was learned, verified, and corrected during a session
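To make the staging pipeline's overlap check concrete, here is a toy heuristic — illustrative only, not aidiary's actual algorithm — that flags a staged heading when it closely matches an existing entry:

```python
from difflib import SequenceMatcher

def overlapping_headings(new_heading: str, existing: list[str], threshold: float = 0.8) -> list[str]:
    """Return existing headings whose similarity to the staged heading crosses the threshold."""
    return [
        h for h in existing
        if SequenceMatcher(None, new_heading.lower(), h.lower()).ratio() >= threshold
    ]

existing = ["Pin exact versions", "Use dataclasses for models"]
print(overlapping_headings("Pin exact version", existing))
# → ['Pin exact versions']
```

A hit would route the staged entry to human review rather than committing it blindly.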
## Session lifecycle
```
Session start                     Session end
      │                                │
      ▼                                ▼
  briefing                          reflect
      │                                │
      ├─ top conventions               ├─ entries recorded today
      ├─ recent anti-patterns          ├─ entries verified today
      ├─ health warnings               ├─ anti-patterns logged
      └─ auto-generates dashboard      └─ auto-generates dashboard
                        │
                        ▼
          output/memory-health.html
         (open in browser to review)
```
The dashboard at `output/memory-health.html` is regenerated automatically at every session start and end. No manual action needed — just open the file to see the latest health status.
For library users (not using MCP), generate the dashboard explicitly:
```python
from aidiary.dashboard import generate_dashboard

html = generate_dashboard(memories, fm)
(output_dir / "memory-health.html").write_text(html)
```
## How it works
Memories are plain markdown files with YAML metadata per entry (confidence, source, verified count, scope). No database, no embeddings, no external services — just files you can read, edit, and version-control.
The MCP server parses these files, provides keyword search with scoring, and enforces a constitution (write rules, size limits, forbidden content). A consolidation engine detects stale entries, confidence mismatches, and merge candidates.
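For intuition, keyword search with scoring can be pictured as counting query-term hits per entry, weighting heading matches higher — a simplified sketch of the idea, not the library's actual scorer:

```python
def score_entry(query: str, heading: str, body: str) -> int:
    """Count query terms found in the heading (weighted double) and body (toy model)."""
    terms = query.lower().split()
    h, b = heading.lower(), body.lower()
    return sum(2 * (t in h) + (t in b) for t in terms)

entries = [
    ("Pin exact versions", "Always pin exact versions in requirements files."),
    ("Use uv", "Prefer uv over pip for dependency management."),
]
ranked = sorted(entries, key=lambda e: score_entry("dependency management", *e), reverse=True)
print(ranked[0][0])  # → "Use uv"
```

The real engine layers the constitution and consolidation checks on top of retrieval; this sketch only shows why ranked keyword matching needs no database or embeddings.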
## Using aidiary as a library
```python
from pathlib import Path

from aidiary.store import MemoryStore
from aidiary.search import search_memories
from aidiary.scoring import health_report
from aidiary.consolidate import consolidate
from aidiary.briefing import briefing
from aidiary.dashboard import generate_dashboard

# Point to your directories
memories_dir = Path("./memories")
output_dir = Path("./output")
output_dir.mkdir(exist_ok=True)

# Read and search memories
store = MemoryStore(memories_dir)
memories = store.load_all()
results = search_memories("dependency management", memories)

# Write a new entry
store.append(
    "conventions", "Pin exact versions",
    "Always pin exact versions in requirements files.",
    {"confidence": "high", "source": "observation"},
)

# Generate health report and dashboard
report = health_report(memories)
fm = store.load_frontmatter_only()
html = generate_dashboard(memories, fm)
(output_dir / "memory-health.html").write_text(html)

# Session briefing
summary = briefing(memories, fm)
```
## Memory lifecycle (library)
Beyond basic read/write, aidiary supports a full memory lifecycle:
```python
from aidiary.store import MemoryStore
from aidiary.search import search_memories
from aidiary.staging import stage, review_staged, list_staged
from aidiary.transfer import transfer_report, export_universal
from aidiary.reflect import session_reflect

store = MemoryStore(memories_dir)

# Upsert — update an existing entry (matched by heading, case-insensitive)
store.append(
    "conventions", "Pin exact versions",
    "Pin exact versions in ALL requirements files. Never use floating ranges.",
    {"confidence": "high", "source": "correction"},
    upsert=True,  # updates if heading exists, creates if not
)

# Search — returns dicts with file, heading, score, body
results = search_memories("exact versions", store.load_all())
# [{"file": "conventions", "heading": "Pin exact versions", "score": 2, ...}]

# Archive — move stale entries to archive.md
store.archive_section("tools", "Deprecated tool")

# Restore — bring archived entries back
store.restore_section("Deprecated tool")

# Stage — queue entries for human review before committing
memories = store.load_all()
result = stage(
    "New pattern observed", "Always use dataclasses for data models.",
    "conventions",
    {"confidence": "medium", "source": "observation"},
    store=store, memories=memories,
)
# result = {"status": "staged", "annotations": [...]}

# Review staged entries
staged = list_staged(store=store)
review_staged("New pattern observed", "promote", store=store)  # or "reject"

# Transfer — identify entries that can move to other projects
report = transfer_report(store.load_all())        # markdown summary
json_export = export_universal(store.load_all())  # JSON manifest

# Session-end reflection
memories = store.load_all()
fm = store.load_frontmatter_only()
reflection = session_reflect(memories, fm, store=store)
```
## Memory file format
```markdown
---
topic: conventions
entry_count: 3
last_updated: 2026-04-18
---

# Conventions

## Pin exact versions

- confidence: high
- source: observation
- scope: universal
- verified_count: 5
- last_verified: 2026-04-18

Always pin exact versions in requirements files.
```
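Because entries are plain markdown, they are easy to inspect with ordinary tooling. A minimal stdlib sketch that pulls each `##` entry and its `- key: value` metadata lines — illustrative only; use `MemoryStore` in real code:

```python
import re

def parse_entries(text: str) -> dict[str, dict[str, str]]:
    """Map each '## heading' to its '- key: value' metadata lines (toy parser)."""
    entries: dict[str, dict[str, str]] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            entries[current] = {}
        elif current and (m := re.match(r"-\s*(\w+):\s*(.+)", line)):
            entries[current][m.group(1)] = m.group(2).strip()
    return entries

sample = """## Pin exact versions
- confidence: high
- verified_count: 5

Always pin exact versions in requirements files.
"""
print(parse_entries(sample))
# → {'Pin exact versions': {'confidence': 'high', 'verified_count': '5'}}
```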
## Connect your AI assistant

### How the command works
After `pip install aidiary[mcp]`, the `memory-server` command is available in the active Python environment. There are three ways to invoke it:
```shell
# Option 1: entry point (if installed globally or via pipx)
memory-server --memories ./memories --output ./output

# Option 2: full venv path (for project-local venvs)
/path/to/project/.venv/bin/memory-server --memories ./memories --output ./output

# Option 3: python -m (always works)
/path/to/project/.venv/bin/python -m aidiary.server ./memories
```
> Note: When using `python -m aidiary.server`, the first positional argument is the memories directory. The `--memories`/`--output` flags only work with the `memory-server` entry point.
### MCP configuration by editor

**VS Code (GitHub Copilot)** — `.vscode/mcp.json`:
```json
{
  "servers": {
    "copilot-memory": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/memory-server",
      "args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
    }
  }
}
```
Use the full venv path for `command` so VS Code finds the right Python environment. If installed globally via `pipx`, you can use just `"command": "memory-server"`.
**Claude Desktop** — `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):
```json
{
  "mcpServers": {
    "copilot-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/memory-server",
      "args": ["--memories", "/ABSOLUTE/PATH/TO/memories", "--output", "/ABSOLUTE/PATH/TO/output"]
    }
  }
}
```
Claude Desktop requires absolute paths — no `~` or relative paths. Use `which memory-server` to find the full path.
**Cursor** — `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "copilot-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/memory-server",
      "args": ["--memories", "./memories", "--output", "./output"]
    }
  }
}
```
### Agent prompt template
Add this to your project instructions (e.g. .github/copilot-instructions.md, CLAUDE.md, or equivalent) to tell the agent how to use memory:
```markdown
## Memory System

This project uses aidiary for persistent memory. The MCP server "copilot-memory"
provides tools for reading and writing structured knowledge.

### Session start

Call `briefing` as the first action in every conversation. This returns:
- Top conventions (what to follow)
- Recent anti-patterns (what to avoid)
- Health warnings (stale entries, contradictions)

### During the session

- Before acting on a topic, call `recall` with relevant keywords
- After learning something new, call `remember` with file, heading, body, confidence, source
- After making a mistake, call `record_mistake` with what happened, why it was wrong, and the lesson
- Use `stage` for entries that need human review before committing

### Session end

Call `reflect` to summarize what was learned, verified, and corrected.
```
## What happens when the agent uses memory
```
                   Agent Session Flow
─────────────────────────────────────────────────────────

1. Session starts
      │
      ▼
┌──────────┐   MCP call    ┌─────────────────────────────┐
│  Agent   │──────────────▶│ briefing()                  │
└──────────┘               │  → top conventions          │
      │                    │  → recent anti-patterns     │
      │ ◀──────────────────│  → health warnings          │
      │ (agent reads       │  → auto-generates dashboard │
      │  context)          └─────────────────────────────┘
      ▼
2. Agent works on a task
      │
      ├──▶ recall("dependency management")
      │      → returns matching entries ranked by relevance
      │
      ├──▶ remember(file="conventions", heading="Use uv", ...)
      │      → validates via constitution → writes to markdown
      │
      ├──▶ record_mistake(heading="Used wrong venv", ...)
      │      → writes to anti-patterns.md with correction link
      │
      ├──▶ stage(heading="New pattern observed", ...)
      │      → overlap check → contradiction check → staging.md
      │
      ▼
3. Session ends
      │
      ▼
┌──────────┐   MCP call    ┌─────────────────────────────┐
│  Agent   │──────────────▶│ reflect()                   │
└──────────┘               │  → entries recorded today   │
                           │  → entries verified today   │
                           │  → consolidation suggestions│
                           │  → auto-generates dashboard │
                           └─────────────────────────────┘
      │
      ▼
output/memory-health.html
(human opens in browser)
```
## Memory grows across sessions
```
Session 1            Session 2              Session 3
    │                    │                      │
    ▼                    ▼                      ▼
briefing (empty)     briefing (5)           briefing (12)
    │                    │                      │
    ├ learn 3 things     ├ recall + verify      ├ recall + verify
    ├ 1 mistake          ├ learn 2 more         ├ detect contradiction
    │                    ├ 1 mistake            ├ archive stale entry
    ▼                    ▼                      ▼
 reflect              reflect                reflect
    │                    │                      │
    ▼                    ▼                      ▼
 5 entries            9 entries              12 entries
 0 verified           3 verified             6 verified
 1 anti-pattern       2 anti-patterns        2 anti-patterns (1 archived)
```
Each session builds on the last. The agent starts faster, makes fewer repeated mistakes, and the knowledge base self-maintains through verification and consolidation.
## Tech stack

Python 3.12+ · MCP protocol · Markdown + YAML metadata · No external dependencies (base package). Optional: `mcp` library for the MCP server.

## License

MIT