aidiary
Portable human-AI collaboration memory system with MCP server and knowledge graphs.
A portable memory system for AI coding assistants — remember conventions, track anti-patterns, and build institutional knowledge that persists across sessions.
What it does
aidiary gives your AI assistant a structured, persistent memory. Instead of starting every session cold, the assistant recalls your coding conventions, past mistakes, and project-specific knowledge. Entries are scored by confidence and verification count, stale knowledge gets flagged, and contradictions are detected automatically.
Works as an MCP server — any MCP-compatible tool (VS Code Copilot Chat, Claude Code, Cursor, etc.) can read and write memories through a standard protocol.
Install
Requires: Python 3.12+
# Core memory system + MCP server
pip install aidiary[mcp]
# Optional: knowledge graph pipeline (requires graphifyy)
pip install aidiary[graphs]
# Both
pip install aidiary[mcp,graphs]
Quick start
1. Scaffold a project
aidiary-init ./my-project
cd my-project
This creates memories/ with starter files (conventions, workflow, anti-patterns, tools, project-setup) and an empty output/ directory.
2. Install skills for your AI assistant
Skills teach your AI assistant decision-making principles — loaded automatically when the agent is about to make a recommendation. Zero always-on context cost.
aidiary vscode install # VS Code Copilot → ~/.copilot/skills/
aidiary claude install # Claude Code → ~/.claude/skills/
aidiary gemini install # Gemini CLI → ~/.gemini/skills/
aidiary copilot install # GitHub Copilot CLI
aidiary kiro install # Kiro IDE
aidiary hermes install # Hermes
What the skill contains: 5 core principles (empirical-first, ensemble, degrees-of-freedom, challenge-first, docs-first) + 5 real-world anti-patterns with corrections. See Agent Skills standard.
3. Start the MCP server
memory-server --memories ./memories --output ./output
Or add to your editor's MCP config (VS Code example — .vscode/mcp.json):
{
  "servers": {
    "memory": {
      "type": "stdio",
      "command": "memory-server",
      "args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
    }
  }
}
4. (Optional) Set up knowledge graphs
If you installed aidiary[graphs], create a graphs.toml in your project root and build:
rebuild-graphs --code-only # instant AST graph, no LLM needed
rebuild-graphs --status # check freshness + staleness %
See the Graph building workflow section below for the full graphs.toml config format, staleness monitoring, and CLI reference.
Note: `aidiary[graphs]` requires `graphifyy`. The memory system (`aidiary[mcp]`) works independently — graphs are optional.
5. Your first session
Tell your AI assistant to call briefing at session start. It returns top conventions, recent anti-patterns, and health warnings. After learning something, the agent calls remember. At session end, reflect summarizes what was learned.
| Session 1 (cold start) | Session 2 (warm start) |
|---|---|
| `briefing` → empty | `briefing` → 5 conventions, 1 anti-pattern |
| work on task | `recall("dependency management")` → 2 hits |
| `remember` 3 conventions | verify existing entry |
| record 1 mistake | learn 2 new things |
| `reflect` → 4 entries | `reflect` → 7 entries, 1 verified |
Directory layout
my-project/
├── graphs.toml # graph build config (optional)
├── memories/ # memory files (markdown only)
│ ├── conventions.md
│ ├── workflow.md
│ ├── anti-patterns.md
│ ├── tools.md
│ └── project-setup.md
└── output/ # all generated artifacts
├── memory-health.html
└── graphs/ # knowledge graph outputs (optional)
├── code/
│ ├── graph.json
│ └── graph.html
└── docs/
Warning: Do not place non-markdown files in `memories/`. This directory is designed to be used as input for semantic graph generation. All generated output goes to `output/`.
Backup your memories. Memory files use atomic writes (`tempfile` + `os.replace()`) to prevent corruption, but version control (git) provides the best protection for your `memories/` directory.
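The atomic-write pattern this refers to can be sketched in a few lines (a minimal illustration of `tempfile` + `os.replace()`, not aidiary's actual `_safe_write()` implementation):

```python
import os
import tempfile
from pathlib import Path

def atomic_write(path: Path, text: str) -> None:
    """Write text so that a crash mid-write leaves the previous file intact."""
    # Create the temp file in the destination directory so os.replace()
    # never crosses a filesystem boundary (rename would stop being atomic).
    fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=path.name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX: readers see old or new, never half
    except BaseException:
        os.unlink(tmp)
        raise

demo_dir = Path(tempfile.mkdtemp())
target = demo_dir / "conventions.md"
atomic_write(target, "## Pin exact versions\n")
print(target.read_text(encoding="utf-8"))
```

Because the temp file lives next to the target, a disk-full or crash can only leave a stray `.tmp` file, never a truncated memory file.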
CLI flags and environment variables
| Flag | Env var | Default | Purpose |
|---|---|---|---|
| `--memories`, `-m` | `COPILOT_MEMORY_DIR` | `./memories` | Markdown memory files |
| `--output`, `-o` | `COPILOT_MEMORY_OUTPUT_DIR` | `./output` | Dashboard HTML and generated artifacts |
What's new
0.2.0
Production hardening + condensation + unified observatory + modular graph pipeline
- Atomic file writes — `_safe_write()` now uses `tempfile.mkstemp()` + `os.replace()` (POSIX atomic). No data corruption on crash or disk-full.
- `safe_int()` helper — 9 crash sites across 6 modules protected from corrupted metadata. A corrupted `verified_count` no longer kills the MCP server.
- `condense` MCP tool (18th tool) — synthesize N related entries into 1 principle. 6 fidelity metrics (KCR/NKR/SR/ERR/TS/SMD), 7 safety guards (anti-pattern immunity, one-level-deep rule, circuit breaker, KCR reject/warn, NKR warn, SR warn, SMD pre-check). Archives originals with `condensed_into` lineage.
- Principle maturity classification — L1-L5 classifier (`classify_maturity()`). `remember`/`update` responses show maturity level. Soft warnings for bare rules (L2) missing reasoning.
- Memory governance dashboard — 4 new panels: maturity distribution (L1-L5 with targets), verification depth (4 buckets), action queue (drill-down with tool suggestions and pagination), graph health (optional).
- Unified Knowledge Observatory — multi-layer vis-network viewer: code (blue), docs (green), memory (purple) graphs + gold bridge edges. Layer toggles, search, minimap, zoom controls, click-to-filter. L1 keyword bridges auto-generated.
- Modular graph pipeline (Phase 2a + 2b) — `graphs.toml` config format with `[backends.*]` registry, `[ignore]` patterns, `[views.*]` declarative output. `backend_graphify.py` isolates all graphify imports. `pipeline.py` is a pure orchestrator. `rebuild-graphs` gains `--status --json`, `--refresh`, `--views-only`, `--no-views` flags. Cross-project `aidiary[graphs]` support.
- Action queue pagination — all tiers (core/mcp/graphs). Prev/Next/page-size controls for drill-down items. No more `[:5]` cap.
- Light mode contrast — graph background darkened (`#f0f2f5` → `#e0e3ea`) for better node visibility.
- Script injection protection — JSON in `<script>` tags escaped with `</` → `<\/` (OWASP).
- Traceback logging — MCP handler exceptions print the full traceback to stderr.
- Best-effort dashboard regen — `briefing`/`reflect` dashboard auto-regen wrapped in try/except. Main output is never lost.
- MANIFEST.in — skills `.md` files now included in the sdist.
- graphifyy version aligned — `>=0.5.0` in both optional-deps and dev.
- Tier isolation verified — 12 core modules import without mcp/graphifyy (smoke test #66). 6 graphs modules import without graphifyy (smoke test #67). `backend_graphify.py` is the single isolation boundary.
- 74 smoke tests + 40 Playwright e2e tests, all passing.
0.1.0
Core memory system + MCP server + skills
- 15 MCP tools — `recall`, `remember`, `update`, `stage`, `review_staged`, `record_mistake`, `archive`, `restore`, `consolidate`, `briefing`, `reflect`, `health_report`, `dashboard`, `transfer_report`, `export_universal`.
- Memory constitution — write-time validation: size limits, forbidden content, duplicate detection, dedup guard.
- Staging pipeline — `stage` + `review_staged` with constitution validation, overlap detection, contradiction checks.
- Memory health dashboard — single-file HTML with summary cards, sortable entry table, distribution charts, anti-pattern timeline. Auto-generated at session start and end.
- Session briefing — top conventions, recent anti-patterns, health warnings at session start.
- Session-end reflection — summarizes what was learned, verified, and corrected.
- Anti-pattern tracking — episodic memory with correction links and supersession chains.
- Cross-project transfer — `scope: universal | project` metadata, `transfer_report` + `export_universal` tools.
- Reflection hierarchy — `reflections.md` with `derived_from` metadata for synthesized principles.
- Contradiction detection — keyword heuristic with negation-context polarity (Option A).
- Self-critique in consolidation — stale entry detection, merge suggestions, confidence mismatch flags.
- Decision-principles skill — bundled VS Code / Claude / Gemini skill with 5 principles + 5 anti-patterns. `aidiary vscode install` CLI with 6 platform targets.
- `aidiary-init` — scaffold a new project with starter memory files + output directory.
- `rebuild-graphs` (Phase 1) — config-driven graph rebuild with `--code-only`, `--build-only`, `--fresh` flags.
- Production hardening — path traversal protection, auto-create directories, `MANIFEST.in`, `--memories`/`--output` CLI flags with env var fallbacks.
- 79 tests, all passing.
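The negation-context polarity heuristic behind contradiction detection can be pictured with a small sketch (an illustration of the general idea only; the cue words, thresholds, and scoring here are assumptions, not aidiary's actual implementation):

```python
import re

NEGATIONS = {"never", "not", "don't", "avoid", "no"}

def polarity(text: str) -> int:
    """Return -1 if the sentence contains a negation cue, +1 otherwise."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return -1 if words & NEGATIONS else 1

def keywords(text: str) -> set[str]:
    """Content words used for overlap checks (negation cues excluded)."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3} - NEGATIONS

def contradicts(a: str, b: str, min_overlap: int = 2) -> bool:
    """Flag two entries when they share keywords but differ in polarity."""
    return len(keywords(a) & keywords(b)) >= min_overlap and polarity(a) != polarity(b)

print(contradicts("Always pin exact versions in requirements files.",
                  "Never pin exact versions in requirements files."))  # True
print(contradicts("Always pin exact versions.",
                  "Use uv for dependency management."))  # False
```

Two entries about unrelated topics never trigger the check, because the keyword overlap gate fires before polarity is compared.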
What you get
18 MCP tools — recall, remember, update, stage, review_staged, record_mistake, archive, restore, consolidate, condense, briefing, reflect, health_report, dashboard, list_topics, get_memory, transfer_report, export_universal
Memory condensation — synthesize N related entries into 1 principle with fidelity metrics and safety guards. Anti-pattern immunity, one-level-deep rule, provisional status
Session briefing — top conventions, recent anti-patterns, health warnings, and quick stats at session start
Staging pipeline — new entries go through validation, overlap detection, and contradiction checks before committing
Memory health dashboard — single-file HTML with summary cards, sortable tables, and distribution charts. Auto-generated to output/memory-health.html at session start (briefing) and session end (reflect). Open it in any browser to review your memory health. Can also be triggered manually via the dashboard MCP tool. Includes governance panels (maturity distribution, verification depth, action queue with pagination) and a unified Knowledge Observatory (multi-layer graph viewer with search, minimap, and layer toggles).
Anti-pattern tracking — episodic memory for mistakes with correction links, so the same error isn't repeated
Cross-project transfer — entries tagged universal or project-scoped. Export universal knowledge for reuse across workspaces
Session-end reflection — summarizes what was learned, verified, and corrected during a session
Decision-principles skill — bundled Agent Skills standard skill that teaches your AI assistant 5 core decision-making principles (empirical-first, ensemble, degrees-of-freedom, challenge-first, docs-first). Auto-discovered by VS Code Copilot, Claude Code, Gemini CLI, and other compatible agents. Zero always-on context cost — loaded only when the agent is making a recommendation.
Why skills matter: MCP tools are passive — the agent must call them explicitly. Skills are proactive — auto-loaded when relevant. Without skills, conventions exist in memory but don't fire at decision time. In baseline measurement, 30% of AI recommendations needed user correction due to greedy/local-optimum thinking. Skills address this by injecting decision principles exactly when the agent is about to make a recommendation.
Install skills
Install decision-principles for all supported platforms — see Quick start § Step 2 above for the full command list and platform targets.
aidiary vscode install # VS Code → ~/.copilot/skills/
aidiary vscode uninstall # remove
Session lifecycle
Session start Session end
│ │
▼ ▼
briefing reflect
│ │
├─ top conventions ├─ entries recorded today
├─ recent anti-patterns ├─ entries verified today
├─ health warnings ├─ anti-patterns logged
└─ auto-generates dashboard └─ auto-generates dashboard
│
▼
output/memory-health.html
(open in browser to review)
The dashboard at output/memory-health.html is regenerated automatically at every session start and end. No manual action needed — just open the file to see the latest health status.
For library users (not using MCP), generate the dashboard explicitly:
from aidiary.dashboard import generate_dashboard

# memories, fm, and output_dir come from the MemoryStore setup shown
# in "Using aidiary as a library" below
html = generate_dashboard(memories, fm)
(output_dir / "memory-health.html").write_text(html)
How it works
Memories are plain markdown files with YAML metadata per entry (confidence, source, verified count, scope). No database, no embeddings, no external services — just files you can read, edit, and version-control.
The MCP server parses these files, provides keyword search with scoring, and enforces a constitution (write rules, size limits, forbidden content). A consolidation engine detects stale entries, confidence mismatches, and merge candidates.
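Keyword search with scoring can be pictured roughly like this (an illustrative sketch, not the actual `search_memories` implementation; the one-point-per-matched-term rule is an assumption):

```python
def search(query: str, entries: list[dict]) -> list[dict]:
    """Rank entries by how many query keywords appear in heading or body."""
    terms = [t for t in query.lower().split() if len(t) > 2]
    hits = []
    for e in entries:
        text = (e["heading"] + " " + e["body"]).lower()
        score = sum(text.count(t) > 0 for t in terms)  # 1 point per matched term
        if score:
            hits.append({**e, "score": score})
    return sorted(hits, key=lambda h: h["score"], reverse=True)

entries = [
    {"heading": "Pin exact versions", "body": "Always pin exact versions in requirements files."},
    {"heading": "Use uv", "body": "Prefer uv over pip for dependency management."},
]
for hit in search("dependency management versions", entries):
    print(hit["heading"], hit["score"])
```

Because memories are plain text, this kind of ranking needs no index or embedding model; scoring is recomputed from the files on every call.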
Using aidiary as a library
from pathlib import Path
from aidiary.store import MemoryStore
from aidiary.search import search_memories
from aidiary.scoring import health_report
from aidiary.consolidate import consolidate
from aidiary.briefing import briefing
from aidiary.dashboard import generate_dashboard
# Point to your directories
memories_dir = Path("./memories")
output_dir = Path("./output")
output_dir.mkdir(exist_ok=True)
# Read and search memories
store = MemoryStore(memories_dir)
memories = store.load_all()
results = search_memories("dependency management", memories)
# Write a new entry
store.append(
    "conventions", "Pin exact versions",
    "Always pin exact versions in requirements files.",
    {"confidence": "high", "source": "observation"},
)
# Generate health report and dashboard
report = health_report(memories)
fm = store.load_frontmatter_only()
html = generate_dashboard(memories, fm)
(output_dir / "memory-health.html").write_text(html)
# Session briefing
summary = briefing(memories, fm)
Memory lifecycle (library)
Beyond basic read/write, aidiary supports a full memory lifecycle:
from aidiary.store import MemoryStore
from aidiary.search import search_memories
from aidiary.staging import stage, review_staged, list_staged
from aidiary.transfer import transfer_report, export_universal
from aidiary.condense import condense
from aidiary.reflect import session_reflect
store = MemoryStore(memories_dir)
# Upsert — update an existing entry (matched by heading, case-insensitive)
store.append(
    "conventions", "Pin exact versions",
    "Pin exact versions in ALL requirements files. Never use floating ranges.",
    {"confidence": "high", "source": "correction"},
    upsert=True,  # updates if heading exists, creates if not
)
# Search — returns dicts with file, heading, score, body
results = search_memories("exact versions", store.load_all())
# [{"file": "conventions", "heading": "Pin exact versions", "score": 2, ...}]
# Archive — move stale entries to archive.md
store.archive_section("tools", "Deprecated tool")
# Restore — bring archived entries back
store.restore_section("Deprecated tool")
# Stage — queue entries for human review before committing
memories = store.load_all()
result = stage(
    "New pattern observed", "Always use dataclasses for data models.",
    "conventions",
    {"confidence": "medium", "source": "observation"},
    store=store, memories=memories,
)
# result = {"status": "staged", "annotations": [...]}
# Review staged entries
staged = list_staged(store=store)
review_staged("New pattern observed", "promote", store=store) # or "reject"
# Transfer — identify entries that can move to other projects
report = transfer_report(store.load_all()) # markdown summary
json_export = export_universal(store.load_all()) # JSON manifest
# Condense — synthesize N related entries into 1 principle
memories = store.load_all()
result = condense(
    entries=["Pin exact versions", "Never use floating ranges"],
    condensed_heading="Dependency version pinning",
    condensed_body="Always pin exact versions in all requirements files — because floating ranges break silently across environments.",
    target_file="conventions",
    metadata={"confidence": "high", "source": "reflection", "scope": "universal"},
    store=store, memories=memories,
)
# result = {"status": "condensed", "metrics": {"kcr": 0.85, ...}, "archived": [...]}
# Session-end reflection
memories = store.load_all()
fm = store.load_frontmatter_only()
reflection = session_reflect(memories, fm, store=store)
Memory file format
---
topic: conventions
entry_count: 3
last_updated: 2026-04-18
---
# Conventions
## Pin exact versions
- confidence: high
- source: observation
- scope: universal
- verified_count: 5
- last_verified: 2026-04-18
Always pin exact versions in requirements files.
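Because the format is plain markdown, an entry like the one above can be parsed with a few lines of standard-library code (a sketch of the format only, not aidiary's parser):

```python
import re

ENTRY = """\
## Pin exact versions
- confidence: high
- source: observation
- scope: universal
- verified_count: 5
- last_verified: 2026-04-18

Always pin exact versions in requirements files.
"""

def parse_entry(text: str) -> dict:
    """Split a '## heading' entry into heading, '- key: value' metadata, and body."""
    lines = text.strip().splitlines()
    heading = lines[0].lstrip("# ").strip()
    meta, body = {}, []
    for line in lines[1:]:
        m = re.match(r"-\s*(\w+):\s*(.+)", line)
        if m:
            meta[m.group(1)] = m.group(2).strip()
        else:
            body.append(line)
    return {"heading": heading, "metadata": meta, "body": "\n".join(body).strip()}

entry = parse_entry(ENTRY)
print(entry["heading"])                     # Pin exact versions
print(entry["metadata"]["verified_count"])  # 5
```

This is also why the files stay diff-friendly in git: every field is one line, and a verification bump touches exactly one line.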
Connect your AI assistant
How the command works
After pip install aidiary[mcp], the memory-server command is available in the active Python environment. There are three ways to invoke it:
# Option 1: entry point (if installed globally or via pipx)
memory-server --memories ./memories --output ./output
# Option 2: full venv path (for project-local venvs)
/path/to/project/.venv/bin/memory-server --memories ./memories --output ./output
# Option 3: python -m (always works)
/path/to/project/.venv/bin/python -m aidiary.server ./memories
Note: When using `python -m aidiary.server`, the first positional argument is the memories directory. The `--memories`/`--output` flags only work with the `memory-server` entry point.
MCP configuration by editor
VS Code (GitHub Copilot) — .vscode/mcp.json:
{
  "servers": {
    "copilot-memory": {
      "type": "stdio",
      "command": "${workspaceFolder}/.venv/bin/memory-server",
      "args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
    }
  }
}
Use the full venv path for `command` so VS Code finds the right Python environment. If installed globally via `pipx`, you can use just `"command": "memory-server"`.
Claude Desktop — ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
  "mcpServers": {
    "copilot-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/memory-server",
      "args": ["--memories", "/ABSOLUTE/PATH/TO/memories", "--output", "/ABSOLUTE/PATH/TO/output"]
    }
  }
}
Claude Desktop requires absolute paths — no `~` or relative paths. Use `which memory-server` to find the full path.
Cursor — .cursor/mcp.json:
{
  "mcpServers": {
    "copilot-memory": {
      "command": "/ABSOLUTE/PATH/TO/.venv/bin/memory-server",
      "args": ["--memories", "./memories", "--output", "./output"]
    }
  }
}
Agent prompt template
Add this to your project instructions (.github/copilot-instructions.md, CLAUDE.md, or equivalent). See Tell the agent about your memory system in the Knowledge graphs section below for the full template including graph query guidance.
What happens when the agent uses memory
┌─────────────────────────────────────────────────────────────────┐
│ Agent Session Flow │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Session starts │
│ │ │
│ ▼ │
│ ┌──────────┐ MCP call ┌─────────────────────────────┐ │
│ │ Agent │──────────────▶│ briefing() │ │
│ └──────────┘ │ → top conventions │ │
│ │ │ → recent anti-patterns │ │
│ │ ◀────────────────────│ → health warnings │ │
│ │ (agent reads context)│ → auto-generates dashboard │ │
│ │ └─────────────────────────────┘ │
│ ▼ │
│ 2. Agent works on a task │
│ │ │
│ ├──▶ recall("dependency management") │
│ │ → returns matching entries ranked by relevance │
│ │ │
│ ├──▶ remember(file="conventions", heading="Use uv", ...) │
│ │ → validates via constitution → writes to markdown │
│ │ │
│ ├──▶ record_mistake(heading="Used wrong venv", ...) │
│ │ → writes to anti-patterns.md with correction link │
│ │ │
│ ├──▶ stage(heading="New pattern observed", ...) │
│ │ → overlap check → contradiction check → staging.md │
│ │ │
│ ▼ │
│ 3. Session ends │
│ │ │
│ ▼ │
│ ┌──────────┐ MCP call ┌─────────────────────────────┐ │
│ │ Agent │──────────────▶│ reflect() │ │
│ └──────────┘ │ → entries recorded today │ │
│ │ → entries verified today │ │
│ │ → consolidation suggestions│ │
│ │ → auto-generates dashboard │ │
│ └─────────────────────────────┘ │
│ │ │
│ ▼ │
│ output/memory-health.html │
│ (human opens in browser) │
└─────────────────────────────────────────────────────────────────┘
Memory grows across sessions
Session 1 Session 2 Session 3
│ │ │
▼ ▼ ▼
briefing (empty) briefing (5) briefing (12)
│ │ │
├ learn 3 things ├ recall + verify ├ recall + verify
├ 1 mistake ├ learn 2 more ├ detect contradiction
│ ├ 1 mistake ├ archive stale entry
▼ ▼ ▼
reflect reflect reflect
│ │ │
▼ ▼ ▼
5 entries 9 entries 12 entries
0 verified 3 verified 6 verified
1 anti-pattern 2 anti-patterns 2 anti-patterns (1 archived)
Each session builds on the last. The agent starts faster, makes fewer repeated mistakes, and the knowledge base self-maintains through verification and consolidation.
Three layers of AI assistant customization
┌────────────────────────────────────────────────────────┐
│ Layer 1: Skills (proactive — zero always-on cost) │
│ Auto-loaded when relevant. Decision principles, │
│ domain expertise. No manual recall needed. │
├────────────────────────────────────────────────────────┤
│ Layer 2: MCP Server (passive — on-demand tools) │
│ recall, remember, stage, reflect, briefing, etc. │
│ Agent calls tools when it needs memory access. │
├────────────────────────────────────────────────────────┤
│ Layer 3: Instructions (static — always-on context) │
│ copilot-instructions.md, CLAUDE.md, AGENTS.md │
│ Tells the agent about the memory system. │
└────────────────────────────────────────────────────────┘
Skills are the most efficient layer — loaded only when relevant, zero context cost when idle. The MCP server provides the tools. Instructions tell the agent to use them.
Knowledge graphs (optional)
aidiary can build knowledge graphs from your code, docs, and memory files — giving your AI assistant a concept network to query for architectural decisions, recall augmentation, and staging validation.
pip install aidiary[graphs]
Graphs are optional. The memory system (`aidiary[mcp]`) works without graphs. Install `aidiary[graphs]` only if you want knowledge graph features. The graphify backend is lazy-loaded — it's only imported when you run `rebuild-graphs`.
Graph building workflow
# 1. Create a graphs.toml config in your project root
cat > graphs.toml << 'EOF'
project = "my-project"
default_backend = "graphify"
[backends.graphify]
ignore_filename = ".graphifyignore"
[ignore]
patterns = ["**/__pycache__/", "**/__init__.py"]
[staleness]
warn_threshold = 50 # % of semantic graphs stale → ⚠ warning
error_threshold = 100 # % stale → 🔴 action required
[graphs.code]
input = "src/"
output = "output/graphs/code/"
method = "ast"
[graphs.docs]
input = "docs/"
output = "output/graphs/docs/"
method = "semantic"
EOF
# 2. Build graphs
rebuild-graphs # build all graphs
rebuild-graphs --code-only # AST only (instant, no LLM)
# 3. Check status — includes staleness % per graph
rebuild-graphs --status
# Output:
# Graph Type Nodes Edges Communities Cache Stale
# code instant 231 406 11 ✓ always fresh —
# docs assisted 141 142 31 ⚠ stale 100%
# Overall: 🔴 100% stale (2/2 semantic graphs)
# 4. Machine-readable for AI agents (includes staleness thresholds)
rebuild-graphs --status --json
# 5. Get fix guidance for stale graphs
rebuild-graphs --refresh
# 6. Views
rebuild-graphs --views-only # regenerate views without rebuilding
rebuild-graphs --no-views # build graphs only, skip views
Multi-backend architecture
graphs.toml supports multiple backends. Today graphify is the default; future backends (tree-sitter, sentence-transformers, CLIP, node2vec) plug in via the same config:
# Future: same input, different backends
# [backends.text-embed]
# model = "all-MiniLM-L6-v2"
# dimensions = 384
#
# [graphs.doc-vectors]
# input = "docs/"
# output = "output/embeddings/docs/"
# backend = "text-embed"
# method = "sentence-transformer"
Module structure
aidiary/graphs/
├── config.py # TOML loader (generic, no backend imports)
├── ignore.py # pattern merge + write + match
├── pipeline.py # orchestrator (zero backend imports)
├── backend_graphify.py # graphify-specific (all graphify.* imports)
├── views.py # view dispatcher
├── view_json.py # JSON summary renderer
└── cli.py # argparse
All graphify imports live in backend_graphify.py. The orchestrator, config, ignore, views, and CLI have zero backend imports — swapping backends means adding a new backend_<name>.py file and a [backends.<name>] section in graphs.toml.
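With that boundary in place, backend selection reduces to a dynamic import keyed by config. This is an illustrative sketch of the pattern (the registry shape and error message are assumptions, not the actual `pipeline.py` code):

```python
import importlib
from collections.abc import Callable

# Hypothetical registry: maps a [backends.<name>] section to a lazy module import.
BACKENDS: dict[str, Callable[[], object]] = {
    "graphify": lambda: importlib.import_module("aidiary.graphs.backend_graphify"),
}

def get_backend(name: str):
    """Import a backend module only when a build actually needs it."""
    try:
        return BACKENDS[name]()  # nothing is imported until this call
    except KeyError:
        raise ValueError(
            f"unknown backend {name!r}: add [backends.{name}] and backend_{name}.py"
        ) from None

# Stand-in backend to show the mechanics without installing anything:
BACKENDS["demo"] = lambda: importlib.import_module("json")
print(get_backend("demo").__name__)  # json
```

Because the registry values are callables, merely listing a backend in config costs nothing; the heavy import happens on first use.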
Serve graphs via MCP
After building, expose your graphs to the AI assistant via graphify's MCP server. Add to .vscode/mcp.json:
{
"servers": {
"memory": {
"type": "stdio",
"command": "${workspaceFolder}/.venv/bin/memory-server",
"args": ["--memories", "${workspaceFolder}/memories", "--output", "${workspaceFolder}/output"]
},
"code-graph": {
"type": "stdio",
"command": "${workspaceFolder}/.venv/bin/python",
"args": ["-m", "graphify.serve", "${workspaceFolder}/output/graphs/code/graph.json"]
},
"docs-graph": {
"type": "stdio",
"command": "${workspaceFolder}/.venv/bin/python",
"args": ["-m", "graphify.serve", "${workspaceFolder}/output/graphs/docs/graph.json"]
}
}
}
Each graph MCP server provides tools like god_nodes, get_neighbors, shortest_path, query_graph — the AI agent uses these for architectural analysis, concept discovery, and coupling checks.
After rebuilding graphs, restart MCP servers. They load `graph.json` once at startup. Use Cmd+Shift+P → "MCP: Restart All Servers" in VS Code.
Tell the agent about your memory system
Add this to your project instructions (.github/copilot-instructions.md for VS Code, CLAUDE.md for Claude Code):
## Memory System
Call `briefing` at the start of every conversation.
| Task complexity | Action |
|----------------|--------|
| Any conversation | **`briefing`** — always, as the first action |
| Quick question | `recall` with relevant keywords before acting |
| Multi-step work | `briefing` → then `recall` as needed |
| Decision-making | `recall` → rethink → present options |
### During work
- Before acting on a topic, call `recall` with relevant keywords
- After learning something new, call `remember`
- After making a mistake, call `record_mistake`
### Session end
- Call `reflect` to summarize what was learned
### Knowledge graphs (if configured)
Before any architectural decision or refactoring, query the graph:
- `god_nodes` — most-connected concepts (start here)
- `get_neighbors` — what connects to the concept you're changing
- `shortest_path` — how two concepts relate
Decision principles and custom skills
The bundled decision-principles skill teaches your AI assistant 5 core patterns:
| Principle | What it prevents |
|---|---|
| Empirical-first | Building features without data |
| Ensemble | Locking into one approach |
| Degrees of freedom | Popularity bubbles, feedback loops |
| Challenge-first | Greedy/local-optimum decisions |
| Docs-first | Jumping to implementation without design |
To create your own custom skills, follow the Agent Skills standard. Place a SKILL.md file with YAML frontmatter (name, description) in ~/.copilot/skills/<skill-name>/.
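A minimal `SKILL.md` might look like the sketch below. The skill name, description, and body here are hypothetical, and any frontmatter fields beyond `name` and `description` should be checked against the Agent Skills standard:

```markdown
---
name: team-review-checklist
description: Checks recommendations against the team's review checklist before presenting them.
---

# Team review checklist

Before recommending a change:
1. Confirm the change is backed by a measurement or reproduction.
2. Present at least two alternatives with their trade-offs.
3. Link the relevant convention from `memories/conventions.md`.
```

Compatible agents discover the skill by its frontmatter and load the body only when the description matches the task at hand.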
Full setup walkthrough
1. `pip install aidiary[mcp,graphs]` — install the package
2. `aidiary-init ./my-project` — scaffold memories + output
3. `aidiary vscode install` — install the decision skill
4. Create `graphs.toml` — configure graph builds
5. `rebuild-graphs --code-only` — build the code graph (instant)
6. Add MCP configs to `mcp.json` — memory server + graph servers
7. Add instructions to `copilot-instructions.md`
8. Start coding — the agent calls `briefing`, `recall`, `remember`, `reflect`
Tech stack
Python 3.12+ · MCP protocol · Markdown + YAML metadata · No external dependencies (base package). Optional: mcp library for MCP server.
License
MIT