
Adaptive, file-based project knowledge for AI coding agents


agent-knowledge

Persistent, file-based project memory for AI coding agents.

One command gives any project a knowledge vault that agents read on startup, maintain during work, and carry across sessions -- no database, no server, just markdown files and a CLI.

Install

pip install agent-knowledge-cli

PyPI package name: agent-knowledge-cli. CLI command and all docs: agent-knowledge.

Quick Start

cd your-project
agent-knowledge init

Open Cursor — the agent picks up from there automatically.

init does everything in one shot:

  • infers the project slug from the directory name
  • creates an external knowledge vault at ~/agent-os/projects/<slug>/
  • symlinks ./agent-knowledge into the repo as the local handle
  • installs .cursor/rules/agent-knowledge.mdc — always-on memory contract
  • installs .cursor/hooks.json — session lifecycle (start, stop, compaction)
  • installs .cursor/commands/memory-update.md and system-update.md — slash commands
  • detects Claude and Codex and installs their bridge files if present
  • bootstraps the memory tree and marks onboarding as pending
  • imports repo history into Evidence/ automatically
  • backfills lightweight history from git

How It Works

Knowledge lives outside the repo at ~/agent-os/projects/<slug>/ so it persists across branches, tools, and clones. The symlink ./agent-knowledge gives every tool a stable local handle.
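The layout step can be pictured as a few lines of path manipulation. This is an illustrative sketch, not the real `init` implementation (which also seeds Memory/, rules, and hooks); the function name is hypothetical:

```python
from pathlib import Path

def link_vault(project_dir: str, home: str = "~/agent-os") -> Path:
    """Create an external vault for a project and symlink it into the repo.

    Sketch only: shows the slug-from-directory-name and symlink-handle idea.
    """
    project = Path(project_dir).resolve()
    slug = project.name                       # slug inferred from directory name
    vault = Path(home).expanduser() / "projects" / slug
    vault.mkdir(parents=True, exist_ok=True)  # vault lives outside the repo
    handle = project / "agent-knowledge"      # stable local handle
    if not handle.exists():
        handle.symlink_to(vault, target_is_directory=True)
    return vault
```

Because the handle is a symlink, every clone and branch of the repo resolves to the same vault on disk.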

Architecture boundaries

Folder      Role                                          Canonical?
----------  --------------------------------------------  -----------
Memory/     Curated, durable facts — source of truth      Yes
History/    What happened over time — lightweight diary   Yes (diary)
Evidence/   Imported/extracted material, event stream     No
Outputs/    Generated views, indexes, HTML export         No
Sessions/   Ephemeral session state, prune aggressively   No

Evidence is never auto-promoted into Memory. Outputs are never treated as truth. Only agents and humans deliberately write to Memory or History.

Obsidian-ready

The knowledge vault at ~/agent-os/projects/<slug>/ is a valid Obsidian vault. Open it directly for backlinks, graph view, and note navigation.

[Screenshot: Obsidian graph view of a project knowledge vault]

For a spatial canvas of the knowledge graph:

agent-knowledge export-canvas
# produces: agent-knowledge/Outputs/knowledge-export.canvas

The vault is designed to work well in Obsidian — good markdown, YAML frontmatter, branch-note convention, internal links. But everything works without it too.

Automatic capture

Every sync and update event is automatically recorded in Evidence/captures/ as a small structured YAML file. This gives a lightweight history of what changed and when -- without a database or background service.

Captures are evidence, not memory. They accumulate quietly and can be pruned with agent-knowledge compact.
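A capture can be as simple as a few key-value lines in a timestamped file. The sketch below illustrates the idea; the field names and filename pattern are assumptions, not the real schema:

```python
import datetime
from pathlib import Path

def record_capture(vault: Path, event: str, detail: str) -> Path:
    """Write one small structured capture under Evidence/captures/.

    Illustrative: real captures may carry more fields.
    """
    captures = vault / "Evidence" / "captures"
    captures.mkdir(parents=True, exist_ok=True)
    ts = datetime.datetime.now(datetime.timezone.utc)
    path = captures / f"{ts.strftime('%Y%m%dT%H%M%S')}-{event}.yaml"
    path.write_text(
        f"event: {event}\n"
        f"timestamp: {ts.isoformat()}\n"
        f"detail: {detail}\n"
        "canonical: false\n"       # captures are evidence, never memory
    )
    return path
```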

Progressive retrieval

The knowledge index (Outputs/knowledge-index.json and .md) is regenerated on every sync. It provides a compact catalog of all notes so agents can:

  1. Load the index first (cheap, a few KB)
  2. Identify relevant branches from the shortlist
  3. Load only the full note content they actually need

Use agent-knowledge search <query> to run a quick Layer 2 shortlist query from the command line or a hook.
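The index-first pattern can be sketched in a few lines. This is not the CLI's implementation; the entry keys (`path`, `summary`) and the scoring are illustrative assumptions:

```python
import json
from pathlib import Path

def shortlist(index_path: Path, query: str, limit: int = 5) -> list[Path]:
    """Layer-2 retrieval sketch: scan the compact index, not the full notes.

    Assumes each index entry has "path" and "summary" keys (hypothetical).
    """
    entries = json.loads(index_path.read_text())
    terms = query.lower().split()
    scored = []
    for entry in entries:
        text = (entry["path"] + " " + entry.get("summary", "")).lower()
        score = sum(text.count(t) for t in terms)
        if score:
            scored.append((score, entry["path"]))
    scored.sort(reverse=True)           # best matches first
    return [Path(p) for _, p in scored[:limit]]
```

The agent then reads only the shortlisted notes in full, keeping context cost proportional to what it actually uses.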

Cursor-first runtime

Cursor is the primary supported runtime path. The project carries everything it needs — opening the repo in Cursor is enough to get automatic behavior:

What is installed                  What it does
---------------------------------  ----------------------------------------------------------------------------
.cursor/rules/agent-knowledge.mdc  Always-on rule: loads memory context on every session
.cursor/hooks.json                 Lifecycle hooks: sync on start, update on write, sync on stop and pre-compact
.cursor/commands/memory-update.md  /memory-update slash command
.cursor/commands/system-update.md  /system-update slash command

Session lifecycle

When you open the project in Cursor, the hooks fire automatically:

  • session-start — runs agent-knowledge sync to load fresh vault state
  • post-write — runs agent-knowledge update after each file save
  • stop — runs agent-knowledge sync at end of each task
  • preCompact — runs agent-knowledge sync before context compaction

The rule ensures the agent reads STATUS.md and Memory/MEMORY.md at the start of every session, with no manual prompting required.
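The event-to-command mapping above amounts to a small dispatch table. The sketch below shows the shape, not Cursor's actual hooks.json schema or the CLI's internals; the `fire` helper is hypothetical:

```python
import subprocess

# Lifecycle events mapped to CLI invocations, mirroring the list above.
HOOKS = {
    "session-start": ["agent-knowledge", "sync"],
    "post-write":    ["agent-knowledge", "update"],
    "stop":          ["agent-knowledge", "sync"],
    "preCompact":    ["agent-knowledge", "sync"],
}

def fire(event: str, runner=subprocess.run):
    """Run the command registered for a lifecycle event, if any."""
    cmd = HOOKS.get(event)
    if cmd:
        return runner(cmd, check=False)
    return None
```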

Slash commands

Inside any Cursor session in this project:

  • /memory-update — sync, review session work, write stable facts to Memory/, summarize
  • /system-update — refresh integration files to the latest framework version

These are project-local. They work because init installed them in .cursor/commands/.

Integration health

agent-knowledge doctor

Reports whether rules, hooks, and commands are all installed and current. If any file is stale or missing, doctor suggests agent-knowledge refresh-system.

Commands

Command             What it does
------------------  --------------------------------------------------------------
init                Set up a project — one command, no arguments needed
sync                Full sync: memory, history, git evidence, index
doctor              Validate setup, integration health, version staleness
ship                Validate + sync + commit + push
search <query>      Search the knowledge index (Memory-first)
export-html         Build a polished static site from the vault
view                Build site and open in browser
clean-import <url>  Import a URL as cleaned, non-canonical evidence
refresh-system      Refresh all integration files to the current framework version
backfill-history    Rebuild lightweight project history from git
compact             Prune stale captures and old session state

All write commands support --dry-run and --json. Run agent-knowledge --help for the full command list.

Static site export with graph

Build a polished standalone site from your knowledge vault — no Obsidian required:

agent-knowledge export-html
# produces: agent-knowledge/Outputs/site/index.html
#           agent-knowledge/Outputs/site/data/knowledge.json
#           agent-knowledge/Outputs/site/data/graph.json

Or generate and open immediately:

agent-knowledge view
# or
agent-knowledge export-html --open

The generated site includes:

  • Overview page — project summary, branch cards, recent changes, key decisions, open questions
  • Branch tree — sidebar navigation across all Memory/ branches with leaf drill-down
  • Note detail view — rendered markdown with metadata panel and related notes
  • Evidence view — all imported material, clearly marked non-canonical
  • Graph view — interactive force-directed graph of all knowledge nodes and relationships
  • Structured data — knowledge.json and graph.json, machine-readable models of the vault

Graph view is a secondary exploration aid, not the primary navigation. The tree explorer and note detail view are the main interfaces. The graph shows:

  • Branches, leaf notes, decisions, evidence, and outputs as distinct node types
  • Structural edges (solid) and inferred relationships (dashed)
  • Color-coded node types with visual distinction between canonical (Memory) and non-canonical (Evidence/Outputs) content
  • Interactive zoom/pan, click-to-select with info panel, filter by node type and canonical status, and text search

The graph is built from graph.json, which is derived from knowledge.json. Neither file is canonical truth.
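The derivation from knowledge.json to graph.json can be pictured as a simple projection. The schemas below (`notes`, `id`, `type`, `canonical`, `links`) are illustrative assumptions, not the real field names:

```python
def build_graph(knowledge: dict) -> dict:
    """Derive a graph model from a knowledge model (hypothetical schemas).

    Every note becomes a node; every note-to-note link becomes an edge.
    """
    nodes = [
        {"id": n["id"], "type": n["type"], "canonical": n["canonical"]}
        for n in knowledge["notes"]
    ]
    edges = [
        {"source": n["id"], "target": t, "kind": "link"}
        for n in knowledge["notes"]
        for t in n.get("links", [])
    ]
    return {"nodes": nodes, "edges": edges}
```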

Memory/ notes are always primary. Evidence and Outputs items are clearly marked non-canonical. The site is a generated presentation layer — the vault remains the source of truth.

The site is a single index.html with all data embedded as JS variables, so it opens correctly via file:// without any server.
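Embedding data as JS variables instead of separate fetched files is what makes file:// work, since browsers block fetch() from local pages. A minimal sketch of the technique (the placeholder name and variable names are assumptions, not the real template's):

```python
import json

def embed_site(template: str, knowledge: dict, graph: dict) -> str:
    """Inline both data models into the HTML so no server is needed."""
    payload = (
        f"<script>const KNOWLEDGE = {json.dumps(knowledge)};"
        f"const GRAPH = {json.dumps(graph)};</script>"
    )
    return template.replace("<!--DATA-->", payload)
```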

Skills

agent-knowledge ships a set of focused, composable agent skills. Install them globally:

agent-knowledge setup

Skills installed to ~/.cursor/skills/:

Skill                        Purpose
---------------------------  -------------------------------------------------
memory-management            Session-start: tree structure, reading, writeback
project-memory-writing       How to write high-quality memory notes
branch-note-convention       Naming and structure convention
ontology-inference           Infer project ontology from the repo
decision-recording           Record architectural decisions as ADRs
evidence-handling            Evidence rules and promotion process
clean-web-import             Import web content cleanly
obsidian-compatible-writing  Optional Obsidian-friendly authoring
session-management           Session tracking and handoffs
memory-compaction            Prune stale notes
project-ontology-bootstrap   Bootstrap a new memory tree

Skills are plain markdown files and work with any skill-compatible agent (Cursor, Claude Code, Codex). See assets/skills/SKILLS.md for details.

Clean web import

Import a web page as cleaned, non-canonical evidence:

agent-knowledge clean-import https://docs.example.com/api-reference
# produces: agent-knowledge/Evidence/imports/2025-01-15-api-reference.md

Strips navigation, ads, scripts, and boilerplate. Writes clean markdown with YAML frontmatter marking it as non-canonical. Verify facts before promoting any content to Memory/.
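The stripping step can be approximated with the standard library alone. This sketch is not the CLI's importer (which also converts structure to markdown); the class and function names are hypothetical:

```python
from html.parser import HTMLParser

class CleanText(HTMLParser):
    """Collect visible text, skipping script/style and navigation chrome."""
    SKIP = {"script", "style", "nav", "header", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0           # how many skipped elements we are inside
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if not self.depth and data.strip():
            self.chunks.append(data.strip())

def clean_import(html: str, source_url: str) -> str:
    """Return cleaned text with frontmatter marking it non-canonical."""
    parser = CleanText()
    parser.feed(html)
    body = "\n\n".join(parser.chunks)
    return f"---\nsource: {source_url}\ncanonical: false\n---\n\n{body}\n"
```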

Multi-Tool Support

init always installs Cursor integration. Claude and Codex are installed when detected:

Tool    Bridge files                                             When installed
------  -------------------------------------------------------  ------------------------------------
Cursor  .cursor/rules/ + .cursor/hooks.json + .cursor/commands/  Always
Claude  CLAUDE.md                                                When .claude/ directory is detected
Codex   .codex/AGENTS.md                                         When .codex/ directory is detected

Multiple tools in the same repo work together.

Custom Knowledge Home

Set AGENT_KNOWLEDGE_HOME before running init to store vaults somewhere other than the default ~/agent-os:

export AGENT_KNOWLEDGE_HOME=~/my-knowledge
agent-knowledge init

Project history

init automatically backfills a lightweight history layer when run on an existing repo. You can also run it explicitly at any time:

agent-knowledge backfill-history

This creates History/ inside the vault with:

  • events.ndjson — compact append-only event log (one JSON object per line)
  • history.md — human-readable entrypoint with recent milestones
  • timeline/ — sparse milestone notes for significant events (init, releases)

History records what happened over time — releases, detected integrations, sync events. It is not a git replacement and not a second source of truth. Current truth lives in Memory/.
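NDJSON keeps the event log append-only and trivially parseable: each line is one complete JSON object. A minimal sketch of that format (the event fields shown are illustrative, not the real schema):

```python
import json
from pathlib import Path

def append_event(history_dir: Path, kind: str, summary: str) -> None:
    """Append one compact event to events.ndjson (one JSON object per line)."""
    history_dir.mkdir(parents=True, exist_ok=True)
    event = {"kind": kind, "summary": summary}
    with (history_dir / "events.ndjson").open("a") as f:
        f.write(json.dumps(event) + "\n")

def read_events(history_dir: Path) -> list[dict]:
    """Read the full event stream back, oldest first."""
    path = history_dir / "events.ndjson"
    if not path.exists():
        return []
    return [json.loads(line) for line in path.read_text().splitlines() if line]
```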

Layer      Role
---------  -------------------------------------------
Memory/    What is true now (curated, authoritative)
History/   What happened over time (lightweight diary)
Evidence/  Imported/extracted material (non-canonical)
Outputs/   Generated helper artifacts
Sessions/  Temporary working state

History is idempotent. Run backfill-history --dry-run to preview without writing. doctor warns when History/ is missing.

Keeping up to date

When a new version of agent-knowledge is installed, refresh the project integration:

pip install -U agent-knowledge-cli
agent-knowledge refresh-system

refresh-system updates all integration bridge files — Cursor hooks, rules, commands, AGENTS.md header, CLAUDE.md, Codex config — and version markers in STATUS.md and .agent-project.yaml. It never touches Memory/, Evidence/, Sessions/, or any curated project knowledge.

Run --dry-run to preview changes without writing:

agent-knowledge refresh-system --dry-run

doctor also warns when the project integration is behind the installed version.

Troubleshooting

agent-knowledge doctor          # validate setup and report health
agent-knowledge doctor --json   # machine-readable health check
agent-knowledge validate        # check knowledge layout and links

Common issues:

  • ./agent-knowledge missing: run agent-knowledge init
  • Onboarding still pending: paste the init prompt into your agent
  • Stale index: run agent-knowledge index or agent-knowledge sync
  • Large notes: run agent-knowledge compact
  • Wrong binary: another tool (e.g. graphify) may install a Node.js agent-knowledge that shadows ours. Check with which -a agent-knowledge. Fix: add the Python bin to PATH before nvm — export PATH="$(python3 -c 'import sysconfig; print(sysconfig.get_path("scripts"))'):$PATH" — or invoke directly: python3 -m agent_knowledge

Platform Support

  • macOS and Linux are fully supported.
  • Windows is not currently supported (relies on bash and POSIX shell scripts).
  • Python 3.9+ is required.

Package naming

What           Value
-------------  -------------------
PyPI package   agent-knowledge-cli
CLI command    agent-knowledge
Python import  agent_knowledge

Install: pip install agent-knowledge-cli
Command: agent-knowledge --help

Development

git clone <repo-url>
cd agent-knowledge
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
python -m pytest tests/ -q
