aidiary

Portable human-AI collaboration memory system with MCP server and knowledge graphs

A portable memory system for AI coding assistants — remember conventions, track anti-patterns, and build institutional knowledge that persists across sessions.

What it does

aidiary gives your AI assistant a structured, persistent memory. Instead of starting every session cold, the assistant recalls your coding conventions, past mistakes, and project-specific knowledge. Entries are scored by confidence and verification count, stale knowledge gets flagged, and contradictions are detected automatically.

Works as an MCP server — any MCP-compatible tool (VS Code Copilot Chat, Claude Code, Cursor, etc.) can read and write memories through a standard protocol.

Install

Requires: Python 3.12+

# Core memory system + MCP server
pip install aidiary[mcp]

# Optional: knowledge graph pipeline (requires graphifyy)
pip install aidiary[graphs]

# Both
pip install aidiary[mcp,graphs]

Quick start

1. Scaffold a project

aidiary-init ./my-project
cd my-project

This creates memories/ with starter files (conventions, workflow, anti-patterns, tools, project-setup) and an empty output/ directory.

2. Install skills for your AI assistant

Skills teach your AI assistant decision-making principles — loaded automatically when the agent is about to make a recommendation. Zero always-on context cost.

aidiary vscode install        # VS Code Copilot → ~/.copilot/skills/
aidiary claude install        # Claude Code → ~/.claude/skills/
aidiary gemini install        # Gemini CLI → ~/.gemini/skills/
aidiary copilot install       # GitHub Copilot CLI
aidiary kiro install          # Kiro IDE
aidiary hermes install        # Hermes

What the skill contains: 5 core principles (empirical-first, ensemble, degrees-of-freedom, challenge-first, docs-first) + 5 real-world anti-patterns with corrections. See Agent Skills standard.

3. Start the MCP server

memory-server --memories ./memories --output ./output

Or add to your editor's MCP config (VS Code example — .vscode/mcp.json):

{
  "servers": {
    "memory": {
      "type": "stdio",
      "command": "memory-server",
      "args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
    }
  }
}

4. (Optional) Set up knowledge graphs

If you installed aidiary[graphs], create a graphs.toml in your project root and build:

rebuild-graphs --code-only    # instant AST graph, no LLM needed
rebuild-graphs --status       # check freshness + staleness %

See the Graph building workflow section below for the full graphs.toml config format, staleness monitoring, and CLI reference.

Note: aidiary[graphs] requires graphifyy. The memory system (aidiary[mcp]) works independently — graphs are optional.

5. Your first session

Tell your AI assistant to call briefing at session start. It returns top conventions, recent anti-patterns, and health warnings. After learning something, the agent calls remember. At session end, reflect summarizes what was learned.

Session 1 (cold start)         Session 2 (warm start)
  briefing → empty               briefing → 5 conventions, 1 anti-pattern
  work on task                    recall("dependency management") → 2 hits
  remember 3 conventions          verify existing entry
  record 1 mistake                learn 2 new things
  reflect → 4 entries             reflect → 7 entries, 1 verified

Directory layout

my-project/
├── graphs.toml           # graph build config (optional)
├── memories/             # memory files (markdown only)
│   ├── conventions.md
│   ├── workflow.md
│   ├── anti-patterns.md
│   ├── tools.md
│   └── project-setup.md
└── output/               # all generated artifacts
    ├── memory-health.html
    └── graphs/           # knowledge graph outputs (optional)
        ├── code/
        │   ├── graph.json
        │   └── graph.html
        └── docs/

Warning: Do not place non-markdown files in memories/. This directory is designed to be used as input for semantic graph generation. All generated output goes to output/.

Backup your memories. Memory files use atomic writes (tempfile + os.replace()) to prevent corruption, but version control (git) provides the best protection for your memories/ directory.
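
The tempfile + os.replace() technique mentioned here follows a standard pattern; a generic sketch of it (not aidiary's internal implementation):

```python
import os
import tempfile
from pathlib import Path

def atomic_write(path: Path, text: str) -> None:
    """Write text to path atomically: write a temp file in the same
    directory, then os.replace() it over the target. Readers see the
    old file or the new file, never a partial write."""
    fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=path.name + ".")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise

target = Path("memories") / "conventions.md"
target.parent.mkdir(exist_ok=True)
atomic_write(target, "## Pin exact versions\n")
```

The temp file must live in the same directory as the target: os.replace() is only atomic within a single filesystem.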

CLI flags and environment variables

| Flag | Env var | Default | Purpose |
|---|---|---|---|
| --memories, -m | COPILOT_MEMORY_DIR | ./memories | Markdown memory files |
| --output, -o | COPILOT_MEMORY_OUTPUT_DIR | ./output | Dashboard HTML and generated artifacts |

What's new

0.2.1

New mcp-tool-protocol skill + observatory node-layer click load fix

  • NEW skill: mcp-tool-protocol — schema-first MCP tool calls (read schema → probe → fan out). The skill ecosystem grows from one skill to two (joining decision-principles).
  • Observatory node-layer click load fix — clicking a layer toggle no longer leaves nodes in a half-loaded state; layer visibility now applies cleanly on every click.

0.2.0

Production hardening + condensation + unified observatory + modular graph pipeline

  • Atomic file writes — _safe_write() now uses tempfile.mkstemp() + os.replace() (POSIX-atomic). No data corruption on crash or disk-full.
  • safe_int() helper — 9 crash sites across 6 modules protected from corrupted metadata. Corrupted verified_count no longer kills the MCP server.
  • condense MCP tool (18th tool) — synthesize N related entries into 1 principle. 6 fidelity metrics (KCR/NKR/SR/ERR/TS/SMD), 7 safety guards (anti-pattern immunity, one-level-deep rule, circuit breaker, KCR reject/warn, NKR warn, SR warn, SMD pre-check). Archives originals with condensed_into lineage.
  • Principle maturity classification — L1-L5 classifier (classify_maturity()). remember/update responses show maturity level. Soft warnings for bare rules (L2) missing reasoning.
  • Memory governance dashboard — 4 new panels: maturity distribution (L1-L5 with targets), verification depth (4 buckets), action queue (drill-down with tool suggestions and pagination), graph health (optional).
  • Unified Knowledge Observatory — multi-layer vis-network viewer: code (blue), docs (green), memory (purple) graphs + gold bridge edges. Layer toggles, search, minimap, zoom controls, click-to-filter. L1 keyword bridges auto-generated.
  • Modular graph pipeline (Phase 2a + 2b) — graphs.toml config format with [backends.*] registry, [ignore] patterns, [views.*] declarative output. backend_graphify.py isolates all graphify imports. pipeline.py is a pure orchestrator. rebuild-graphs --status --json, --refresh, --views-only, --no-views flags. Cross-project aidiary[graphs] support.
  • Action queue pagination — all tiers (core/mcp/graphs). Prev/Next/page-size controls for drill-down items. No more [:5] cap.
  • Light mode contrast — graph background darkened (#f0f2f5 → #e0e3ea) for better node visibility.
  • Script injection protection — JSON in <script> tags escaped with </ → <\/ (OWASP).
  • Traceback logging — MCP handler exceptions print full traceback to stderr.
  • Best-effort dashboard regen — briefing/reflect dashboard auto-regen wrapped in try/except. Main output never lost.
  • MANIFEST.in — skills .md files now included in sdist.
  • graphifyy version aligned — >=0.5.0 in both optional-deps and dev.
  • Tier isolation verified — 12 core modules import without mcp/graphifyy (smoke test #66). 6 graphs modules import without graphifyy (smoke test #67). backend_graphify.py is the single isolation boundary.
  • 74 smoke tests + 40 Playwright e2e tests, all passing.

Bug fixes

  • Cache freshness signal — rebuild-graphs --status no longer reports "fresh" when source files have changed since the last build. 3-line fix; 51 smoke tests passing. (Roadmap #41)

0.1.0

Core memory system + MCP server + skills

  • 15 MCP tools — recall, remember, update, stage, review_staged, record_mistake, archive, restore, consolidate, briefing, reflect, health_report, dashboard, transfer_report, export_universal.
  • Memory constitution — write-time validation: size limits, forbidden content, duplicate detection, dedup guard.
  • Staging pipeline — stage + review_staged with constitution validation, overlap detection, contradiction checks.
  • Memory health dashboard — single-file HTML with summary cards, sortable entry table, distribution charts, anti-pattern timeline. Auto-generated at session start and end.
  • Session briefing — top conventions, recent anti-patterns, health warnings at session start.
  • Session-end reflection — summarizes what was learned, verified, and corrected.
  • Anti-pattern tracking — episodic memory with correction links and supersession chains.
  • Cross-project transfer — scope: universal | project metadata, transfer_report + export_universal tools.
  • Reflection hierarchy — reflections.md with derived_from metadata for synthesized principles.
  • Contradiction detection — keyword heuristic with negation-context polarity (Option A).
  • Self-critique in consolidation — stale entry detection, merge suggestions, confidence mismatch flags.
  • Decision-principles skill — bundled VS Code / Claude / Gemini skill with 5 principles + 5 anti-patterns. aidiary vscode install CLI with 6 platform targets.
  • aidiary-init — scaffold a new project with starter memory files + output directory.
  • rebuild-graphs (Phase 1) — config-driven graph rebuild with --code-only, --build-only, --fresh flags.
  • Production hardening — path traversal protection, auto-create directories, MANIFEST.in, --memories/--output CLI flags with env var fallbacks.
  • 79 tests, all passing.

Bug fixes

  • remember upsert — calling remember with an existing heading now replaces the body in place instead of raising DuplicateHeadingError; entry count stays stable, metadata bumped. (Roadmap #I1)

What you get

18 MCP tools — recall, remember, update, stage, review_staged, record_mistake, archive, restore, consolidate, condense, briefing, reflect, health_report, dashboard, list_topics, get_memory, transfer_report, export_universal

Memory condensation — synthesize N related entries into 1 principle with fidelity metrics and safety guards. Anti-pattern immunity, one-level-deep rule, provisional status

Session briefing — top conventions, recent anti-patterns, health warnings, and quick stats at session start

Staging pipeline — new entries go through validation, overlap detection, and contradiction checks before committing

Memory health dashboard — single-file HTML with summary cards, sortable tables, and distribution charts. Auto-generated to output/memory-health.html at session start (briefing) and session end (reflect). Open it in any browser to review your memory health. Can also be triggered manually via the dashboard MCP tool. Includes governance panels (maturity distribution, verification depth, action queue with pagination) and a unified Knowledge Observatory (multi-layer graph viewer with search, minimap, and layer toggles).

Anti-pattern tracking — episodic memory for mistakes with correction links, so the same error isn't repeated

Cross-project transfer — entries tagged universal or project-scoped. Export universal knowledge for reuse across workspaces

Session-end reflection — summarizes what was learned, verified, and corrected during a session

Decision-principles skill — bundled Agent Skills standard skill that teaches your AI assistant 5 core decision-making principles (empirical-first, ensemble, degrees-of-freedom, challenge-first, docs-first). Auto-discovered by VS Code Copilot, Claude Code, Gemini CLI, and other compatible agents. Zero always-on context cost — loaded only when the agent is making a recommendation.

Why skills matter: MCP tools are passive — the agent must call them explicitly. Skills are proactive — auto-loaded when relevant. Without skills, conventions exist in memory but don't fire at decision time. In a baseline measurement, 30% of AI recommendations needed user correction due to greedy/local-optimum thinking. Skills address this by injecting decision principles exactly when the agent is about to make a recommendation.

Install skills

Install decision-principles for all supported platforms — see Quick start § Step 2 above for the full command list and platform targets.

aidiary vscode install      # VS Code → ~/.copilot/skills/
aidiary vscode uninstall    # remove

Session lifecycle

Session start                          Session end
     │                                      │
     ▼                                      ▼
  briefing                               reflect
     │                                      │
     ├─ top conventions                     ├─ entries recorded today
     ├─ recent anti-patterns                ├─ entries verified today
     ├─ health warnings                     ├─ anti-patterns logged
     └─ auto-generates dashboard            └─ auto-generates dashboard
                                                 │
                                                 ▼
                                      output/memory-health.html
                                      (open in browser to review)

The dashboard at output/memory-health.html is regenerated automatically at every session start and end. No manual action needed — just open the file to see the latest health status.

For library users (not using MCP), generate the dashboard explicitly:

from aidiary.dashboard import generate_dashboard

html = generate_dashboard(memories, fm)
(output_dir / "memory-health.html").write_text(html)

How it works

Memories are plain markdown files with YAML metadata per entry (confidence, source, verified count, scope). No database, no embeddings, no external services — just files you can read, edit, and version-control.

The MCP server parses these files, provides keyword search with scoring, and enforces a constitution (write rules, size limits, forbidden content). A consolidation engine detects stale entries, confidence mismatches, and merge candidates.
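
As an illustration of keyword scoring (a deliberately naive sketch, not aidiary's actual ranking), an entry's score can simply count the query terms it contains:

```python
def score_entry(query: str, heading: str, body: str) -> int:
    """Count how many query keywords appear in the entry's heading or body."""
    text = (heading + " " + body).lower()
    return sum(1 for word in query.lower().split() if word in text)

entries = [
    ("Pin exact versions", "Always pin exact versions in requirements files."),
    ("Use uv", "Prefer uv over pip for dependency management."),
]
# Rank entries by score, highest first
ranked = sorted(
    ((score_entry("exact versions", h, b), h) for h, b in entries),
    reverse=True,
)
```

Because everything is plain text, this kind of search stays transparent: you can reproduce any score by reading the markdown yourself.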

Using aidiary as a library

from pathlib import Path
from aidiary.store import MemoryStore
from aidiary.search import search_memories
from aidiary.scoring import health_report
from aidiary.consolidate import consolidate
from aidiary.briefing import briefing
from aidiary.dashboard import generate_dashboard

# Point to your directories
memories_dir = Path("./memories")
output_dir = Path("./output")
output_dir.mkdir(exist_ok=True)

# Read and search memories
store = MemoryStore(memories_dir)
memories = store.load_all()
results = search_memories("dependency management", memories)

# Write a new entry
store.append(
    "conventions", "Pin exact versions",
    "Always pin exact versions in requirements files.",
    {"confidence": "high", "source": "observation"},
)

# Generate health report and dashboard
report = health_report(memories)
fm = store.load_frontmatter_only()
html = generate_dashboard(memories, fm)
(output_dir / "memory-health.html").write_text(html)

# Session briefing
summary = briefing(memories, fm)

Memory lifecycle (library)

Beyond basic read/write, aidiary supports a full memory lifecycle:

from aidiary.store import MemoryStore
from aidiary.search import search_memories
from aidiary.staging import stage, review_staged, list_staged
from aidiary.transfer import transfer_report, export_universal
from aidiary.condense import condense
from aidiary.reflect import session_reflect

store = MemoryStore(memories_dir)

# Upsert — update an existing entry (matched by heading, case-insensitive)
store.append(
    "conventions", "Pin exact versions",
    "Pin exact versions in ALL requirements files. Never use floating ranges.",
    {"confidence": "high", "source": "correction"},
    upsert=True,  # updates if heading exists, creates if not
)

# Search — returns dicts with file, heading, score, body
results = search_memories("exact versions", store.load_all())
# [{"file": "conventions", "heading": "Pin exact versions", "score": 2, ...}]

# Archive — move stale entries to archive.md
store.archive_section("tools", "Deprecated tool")

# Restore — bring archived entries back
store.restore_section("Deprecated tool")

# Stage — queue entries for human review before committing
memories = store.load_all()
result = stage(
    "New pattern observed", "Always use dataclasses for data models.",
    "conventions",
    {"confidence": "medium", "source": "observation"},
    store=store, memories=memories,
)
# result = {"status": "staged", "annotations": [...]}

# Review staged entries
staged = list_staged(store=store)
review_staged("New pattern observed", "promote", store=store)  # or "reject"

# Transfer — identify entries that can move to other projects
report = transfer_report(store.load_all())  # markdown summary
json_export = export_universal(store.load_all())  # JSON manifest

# Condense — synthesize N related entries into 1 principle
memories = store.load_all()
result = condense(
    entries=["Pin exact versions", "Never use floating ranges"],
    condensed_heading="Dependency version pinning",
    condensed_body="Always pin exact versions in all requirements files — because floating ranges break silently across environments.",
    target_file="conventions",
    metadata={"confidence": "high", "source": "reflection", "scope": "universal"},
    store=store, memories=memories,
)
# result = {"status": "condensed", "metrics": {"kcr": 0.85, ...}, "archived": [...]}

# Session-end reflection
memories = store.load_all()
fm = store.load_frontmatter_only()
reflection = session_reflect(memories, fm, store=store)

Memory file format

---
topic: conventions
entry_count: 3
last_updated: 2026-04-18
---
# Conventions

## Pin exact versions
- confidence: high
- source: observation
- scope: universal
- verified_count: 5
- last_verified: 2026-04-18

Always pin exact versions in requirements files.

Connect your AI assistant

How the command works

After pip install aidiary[mcp], the memory-server command is available in the active Python environment. There are three ways to invoke it:

# Option 1: entry point (if installed globally or via pipx)
memory-server --memories ./memories --output ./output

# Option 2: full venv path (for project-local venvs)
/path/to/project/.venv/bin/memory-server --memories ./memories --output ./output

# Option 3: python -m (always works)
/path/to/project/.venv/bin/python -m aidiary.server ./memories

Note: When using python -m aidiary.server, the first positional argument is the memories directory. The --memories / --output flags only work with the memory-server entry point.

MCP configuration by editor

VS Code (GitHub Copilot) — .vscode/mcp.json:

{
  "servers": {
    "copilot-memory": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/memory-server",
      "args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
    }
  }
}

Use the full venv path for command so VS Code finds the right Python environment. If installed globally via pipx, you can use just "command": "memory-server".

Claude Desktop — ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):

{
  "mcpServers": {
    "copilot-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/memory-server",
      "args": ["--memories", "/ABSOLUTE/PATH/TO/memories", "--output", "/ABSOLUTE/PATH/TO/output"]
    }
  }
}

Claude Desktop requires absolute paths — no ~ or relative paths. Use which memory-server to find the full path.

Cursor — .cursor/mcp.json:

{
  "mcpServers": {
    "copilot-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/memory-server",
      "args": ["--memories", "./memories", "--output", "./output"]
    }
  }
}

Agent prompt template

Add this to your project instructions (.github/copilot-instructions.md, CLAUDE.md, or equivalent). See Tell the agent about your memory system in the Knowledge graphs section below for the full template including graph query guidance.

What happens when the agent uses memory

┌─────────────────────────────────────────────────────────────────┐
│                    Agent Session Flow                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  1. Session starts                                              │
│     │                                                           │
│     ▼                                                           │
│  ┌──────────┐   MCP call    ┌─────────────────────────────┐     │
│  │  Agent   │──────────────▶│  briefing()                 │     │
│  └──────────┘               │  → top conventions          │     │
│     │                       │  → recent anti-patterns     │     │
│     │  ◀────────────────────│  → health warnings          │     │
│     │  (agent reads context)│  → auto-generates dashboard │     │
│     │                       └─────────────────────────────┘     │
│     ▼                                                           │
│  2. Agent works on a task                                       │
│     │                                                           │
│     ├──▶ recall("dependency management")                        │
│     │    → returns matching entries ranked by relevance         │
│     │                                                           │
│     ├──▶ remember(file="conventions", heading="Use uv", ...)    │
│     │    → validates via constitution → writes to markdown      │
│     │                                                           │
│     ├──▶ record_mistake(heading="Used wrong venv", ...)         │
│     │    → writes to anti-patterns.md with correction link      │
│     │                                                           │
│     ├──▶ stage(heading="New pattern observed", ...)             │
│     │    → overlap check → contradiction check → staging.md     │
│     │                                                           │
│     ▼                                                           │
│  3. Session ends                                                │
│     │                                                           │
│     ▼                                                           │
│  ┌──────────┐   MCP call    ┌─────────────────────────────┐     │
│  │  Agent   │──────────────▶│  reflect()                  │     │
│  └──────────┘               │  → entries recorded today   │     │
│                             │  → entries verified today   │     │
│                             │  → consolidation suggestions│     │
│                             │  → auto-generates dashboard │     │
│                             └─────────────────────────────┘     │
│                                        │                        │
│                                        ▼                        │
│                             output/memory-health.html           │
│                             (human opens in browser)            │
└─────────────────────────────────────────────────────────────────┘

Memory grows across sessions

Session 1          Session 2          Session 3
   │                  │                  │
   ▼                  ▼                  ▼
briefing (empty)   briefing (5)       briefing (12)
   │                  │                  │
   ├ learn 3 things   ├ recall + verify  ├ recall + verify
   ├ 1 mistake        ├ learn 2 more     ├ detect contradiction
   │                  ├ 1 mistake        ├ archive stale entry
   ▼                  ▼                  ▼
reflect            reflect             reflect
   │                  │                  │
   ▼                  ▼                  ▼
5 entries          9 entries           12 entries
0 verified         3 verified          6 verified
1 anti-pattern     2 anti-patterns     2 anti-patterns (1 archived)

Each session builds on the last. The agent starts faster, makes fewer repeated mistakes, and the knowledge base self-maintains through verification and consolidation.

Three layers of AI assistant customization

┌────────────────────────────────────────────────────────┐
│ Layer 1: Skills (proactive — zero always-on cost)      │
│ Auto-loaded when relevant. Decision principles,        │
│ domain expertise. No manual recall needed.             │
├────────────────────────────────────────────────────────┤
│ Layer 2: MCP Server (passive — on-demand tools)        │
│ recall, remember, stage, reflect, briefing, etc.       │
│ Agent calls tools when it needs memory access.         │
├────────────────────────────────────────────────────────┤
│ Layer 3: Instructions (static — always-on context)     │
│ copilot-instructions.md, CLAUDE.md, AGENTS.md          │
│ Tells the agent about the memory system.               │
└────────────────────────────────────────────────────────┘

Skills are the most efficient layer — loaded only when relevant, zero context cost when idle. The MCP server provides the tools. Instructions tell the agent to use them.

Knowledge graphs (optional)

aidiary can build knowledge graphs from your code, docs, and memory files — giving your AI assistant a concept network to query for architectural decisions, recall augmentation, and staging validation.

pip install aidiary[graphs]

Graphs are optional. The memory system (aidiary[mcp]) works without graphs. Install aidiary[graphs] only if you want knowledge graph features. The graphify backend is lazy-loaded — it's only imported when you run rebuild-graphs.

Graph building workflow

# 1. Create a graphs.toml config in your project root
cat > graphs.toml << 'EOF'
project = "my-project"
default_backend = "graphify"

[backends.graphify]
ignore_filename = ".graphifyignore"

[ignore]
patterns = ["**/__pycache__/", "**/__init__.py"]

[staleness]
warn_threshold = 50    # % of semantic graphs stale → ⚠ warning
error_threshold = 100  # % stale → 🔴 action required

[graphs.code]
input = "src/"
output = "output/graphs/code/"
method = "ast"

[graphs.docs]
input = "docs/"
output = "output/graphs/docs/"
method = "semantic"
EOF

# 2. Build graphs
rebuild-graphs                    # build all graphs
rebuild-graphs --code-only        # AST only (instant, no LLM)

# 3. Check status — includes staleness % per graph
rebuild-graphs --status
# Output:
#   Graph    Type       Nodes  Edges  Communities  Cache              Stale
#   code     instant      231    406           11  ✓ always fresh          —
#   docs     assisted     141    142           31  ⚠ stale              100%
#   Overall: 🔴 100% stale (2/2 semantic graphs)

# 4. Machine-readable for AI agents (includes staleness thresholds)
rebuild-graphs --status --json

# 5. Get fix guidance for stale graphs
rebuild-graphs --refresh

# 6. Views
rebuild-graphs --views-only       # regenerate views without rebuilding
rebuild-graphs --no-views         # build graphs only, skip views
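
The warn/error thresholds in [staleness] translate to the status levels shown above; a minimal sketch of that decision (assuming the thresholds are compared inclusively, which the docs don't specify):

```python
def staleness_status(stale_pct: float, warn_threshold: float = 50,
                     error_threshold: float = 100) -> str:
    """Map an overall staleness percentage to a status level,
    mirroring the warn/error thresholds from graphs.toml."""
    if stale_pct >= error_threshold:
        return "action required"  # shown as 🔴 in --status output
    if stale_pct >= warn_threshold:
        return "warning"          # shown as ⚠
    return "fresh"
```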

Multi-backend architecture

graphs.toml supports multiple backends. Today graphify is the default; future backends (tree-sitter, sentence-transformers, CLIP, node2vec) plug in via the same config:

# Future: same input, different backends
# [backends.text-embed]
# model = "all-MiniLM-L6-v2"
# dimensions = 384
#
# [graphs.doc-vectors]
# input = "docs/"
# output = "output/embeddings/docs/"
# backend = "text-embed"
# method = "sentence-transformer"

Module structure

aidiary/graphs/
├── config.py             # TOML loader (generic, no backend imports)
├── ignore.py             # pattern merge + write + match
├── pipeline.py           # orchestrator (zero backend imports)
├── backend_graphify.py   # graphify-specific (all graphify.* imports)
├── views.py              # view dispatcher
├── view_json.py          # JSON summary renderer
└── cli.py                # argparse

All graphify imports live in backend_graphify.py. The orchestrator, config, ignore, views, and CLI have zero backend imports — swapping backends means adding a new backend_<name>.py file and a [backends.<name>] section in graphs.toml.
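
That swap-in recipe amounts to a lazy, name-based import — a hypothetical illustration (make_backend_loader is not part of aidiary):

```python
import importlib
from typing import Callable

def make_backend_loader(package: str) -> Callable[[str], object]:
    """Return a loader that imports backend_<name> from a package only
    when that backend is actually selected, keeping optional backend
    dependencies out of the orchestrator's import graph."""
    def load(name: str):
        return importlib.import_module(f"{package}.backend_{name}")
    return load

# Hypothetical usage mirroring the layout above:
# load = make_backend_loader("aidiary.graphs")
# backend = load("graphify")  # imports aidiary.graphs.backend_graphify
```

Adding a backend then means dropping a backend_<name>.py file into the package and declaring a matching [backends.<name>] section — the orchestrator never imports it unless asked.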

Serve graphs via MCP

After building, expose your graphs to the AI assistant via graphify's MCP server. Add to .vscode/mcp.json:

{
  "servers": {
    "memory": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/memory-server",
      "args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
    },
    "code-graph": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/python",
      "args": ["-m", "graphify.serve", "${workspaceFolder}/output/graphs/code/graph.json"]
    },
    "docs-graph": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/python",
      "args": ["-m", "graphify.serve", "${workspaceFolder}/output/graphs/docs/graph.json"]
    }
  }
}

Each graph MCP server provides tools like god_nodes, get_neighbors, shortest_path, query_graph — the AI agent uses these for architectural analysis, concept discovery, and coupling checks.

After rebuilding graphs, restart MCP servers. They load graph.json once at startup. Use Cmd+Shift+P → "MCP: Restart All Servers" in VS Code.

Tell the agent about your memory system

Add this to your project instructions (.github/copilot-instructions.md for VS Code, CLAUDE.md for Claude Code):

## Memory System

Call `briefing` at the start of every conversation.

| Task complexity | Action |
|----------------|--------|
| Any conversation | **`briefing`** — always, as the first action |
| Quick question | `recall` with relevant keywords before acting |
| Multi-step work | `briefing` → then `recall` as needed |
| Decision-making | `recall` → rethink → present options |

### During work
- Before acting on a topic, call `recall` with relevant keywords
- After learning something new, call `remember`
- After making a mistake, call `record_mistake`

### Session end
- Call `reflect` to summarize what was learned

### Knowledge graphs (if configured)
Before any architectural decision or refactoring, query the graph:
- `god_nodes` — most-connected concepts (start here)
- `get_neighbors` — what connects to the concept you're changing
- `shortest_path` — how two concepts relate

Decision principles and custom skills

The bundled decision-principles skill teaches your AI assistant 5 core patterns:

| Principle | What it prevents |
|---|---|
| Empirical-first | Building features without data |
| Ensemble | Locking into one approach |
| Degrees of freedom | Popularity bubbles, feedback loops |
| Challenge-first | Greedy/local-optimum decisions |
| Docs-first | Jumping to implementation without design |

To create your own custom skills, follow the Agent Skills standard. Place a SKILL.md file with YAML frontmatter (name, description) in ~/.copilot/skills/<skill-name>/.
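
For example, a minimal SKILL.md (hypothetical name and wording) might look like:

```markdown
---
name: my-team-conventions
description: Applies our team's review conventions when the agent is
  about to recommend a code change.
---

# My team conventions

- Prefer small, reviewable diffs.
- Cite the convention entry you are applying.
```

This would live at ~/.copilot/skills/my-team-conventions/SKILL.md and be auto-discovered by compatible agents.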

Full setup walkthrough

1. pip install aidiary[mcp,graphs]     Install package
2. aidiary-init ./my-project           Scaffold memories + output
3. aidiary vscode install              Install decision skill
4. Create graphs.toml                  Configure graph builds
5. rebuild-graphs --code-only          Build code graph (instant)
6. Add MCP configs to mcp.json         Memory server + graph servers
7. Add instructions to copilot-instructions.md
8. Start coding — agent calls briefing, recall, remember, reflect

Tech stack

Python 3.12+ · MCP protocol · Markdown + YAML metadata · No external dependencies (base package). Optional: mcp library for the MCP server, graphifyy for knowledge graphs.

License

MIT

Author

Yingding Wang
