
Shared memory foundation for AI development teams


project-bedrock: Shared Memory for AI Development Teams

Tell your AI developers how to work, and every session leaves the project smarter.

robotaitai

PyPI · Claude Code · Cursor · Codex · License: MIT

agent-knowledge tour

AI can write code fast.

What it does not do well by default is leave behind clear, shared project understanding.

Decisions disappear into chat history.
Architecture gets rediscovered.
New sessions start from zero.
And the next developer, human or AI, has to figure out again what changed, where, and why.

agent-knowledge gives every repo a shared memory layer for humans and AI developers.

It works like the operating discipline of a strong team lead:

  • every session starts with context
  • every important change leaves a trail
  • stable knowledge gets written down where the next developer can find it
  • the project becomes easier to understand over time, not harder

With one command, your project gets:

  • structured memory for architecture, decisions, conventions, and history
  • project-local integration for Claude Code and Cursor
  • lightweight git-friendly markdown that lives with the repo
  • HTML, graph, and Obsidian-ready views of what the project knows

Under the hood, it is just markdown files and a CLI.
No database. No server. No hosted backend. No black box.

The result: your AI developers stop behaving like disconnected sessions and start behaving more like a team.

📦 Install

pip install project-bedrock

PyPI: project-bedrock  ·  CLI: agent-knowledge


🚀 Quick Start

cd your-project
agent-knowledge init

That's it. Open the project in Claude Code or Cursor and the agent has persistent memory automatically -- no manual prompting, no config, no setup.

What `init` does in one shot

| Step | What happens |
|------|--------------|
| 1 | Creates `./agent-knowledge/` as a real directory inside the repo (git-tracked) |
| 2 | Registers the project in `~/agent-os/projects/<slug>/` so every project shows up in one place -- open it in Obsidian for a unified cross-project vault |
| 3 | Adds noisy subfolders (`Evidence/raw/`, `Outputs/site/`, ...) to `.gitignore` automatically |
| 4 | Installs project-local integration for Claude Code and Cursor |
| 5 | Detects Codex and installs its bridge files if present |
| 6 | Bootstraps the memory tree and marks onboarding as pending |
| 7 | Imports repo history into `Evidence/` and backfills lightweight history from git |
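The backfill in step 7 is possible because the history is already in git. Conceptually it is close to reformatting the commit log into a dated list; the snippet below is an illustrative approximation with plain git, not the tool's actual implementation:

```shell
# Turn the repo's commit log into a dated, markdown-friendly history list,
# oldest first. (Illustrative only -- backfill-history does this internally.)
git log --reverse --date=short --pretty=format:'- %ad  %s'
```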

💾 Storage Modes

By default, knowledge lives inside the repo (git-tracked). Curated knowledge is committed normally; noisy subfolders are gitignored.

# Default: in-repo (recommended)
agent-knowledge init

# External: knowledge outside the repo (not committed)
agent-knowledge init --external

# Convert external -> in-repo later
agent-knowledge migrate-to-local
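You can verify the curated/noisy split with plain git: the noisy paths that `init` adds to `.gitignore` should be ignored, while canon paths are not. A quick check, assuming the `Evidence/raw/` entry mentioned above (run inside a git repo):

```shell
# In a repo where init has added the noisy-folder pattern:
printf 'agent-knowledge/Evidence/raw/\n' >> .gitignore

# Noisy capture paths are ignored; curated Memory/ paths are tracked.
git check-ignore -q agent-knowledge/Evidence/raw/dump.txt && echo ignored   # → ignored
git check-ignore -q agent-knowledge/Memory/MEMORY.md || echo tracked        # → tracked
```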

🧠 How It Works

Think of the vault as your team's shared notebook. Casual scribbles don't get mistaken for confirmed facts, and you can always tell what is canon vs. chatter.

Knowledge Layers

| Folder | What goes here | Canon? |
|--------|----------------|--------|
| 📘 `Memory/` | Decisions, conventions, architecture, gotchas -- what you'd tell a new hire | Yes |
| 📅 `History/` | What happened and when -- releases, milestones, a dated trail | Yes |
| 📎 `Evidence/` | Raw imports: docs, ADRs, PRs, screenshots -- captured context | No |
| 📊 `Outputs/` | Generated views: HTML site, search index, knowledge map | No |

The rule: only Memory/ and History/ are truth. Nothing imported, captured, or generated is ever treated as canon on its own. A developer has to consciously promote something into Memory/ for it to count.
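Because canon is just markdown in the repo, promotion is an ordinary edit-and-commit rather than a special operation. An illustrative sketch (the file names and decision text are invented for the example):

```shell
# Promote one reviewed finding from Evidence/ into canon (illustrative content).
mkdir -p agent-knowledge/Memory
cat >> agent-knowledge/Memory/MEMORY.md <<'EOF'

## Decision: cache locally with SQLite
Promoted after reviewing Evidence/adr-0007-cache.md.
EOF
git add agent-knowledge/Memory/MEMORY.md
git commit -m "memory: promote SQLite cache decision"
```

The commit message leaves the trail the History/ layer expects: anyone reading `git log` can see when a fact became canon.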


🔌 Project-Local Integration

The project carries everything it needs. Both Claude Code and Cursor get full integration installed automatically -- hooks, runtime contracts, and slash commands. No global config.

Claude Code (`.claude/`)

| File | Purpose |
|------|---------|
| `settings.json` | Lifecycle hooks: sync on SessionStart, Stop, PreCompact |
| `CLAUDE.md` | Runtime contract: knowledge layers, session protocol, onboarding |
| `commands/memory-update.md` | `/memory-update` slash command |
| `commands/system-update.md` | `/system-update` slash command |
| `commands/absorb.md` | `/absorb <file/folder>` slash command |

Cursor (`.cursor/`)

| File | Purpose |
|------|---------|
| `rules/agent-knowledge.mdc` | Always-on rule: loads memory context on every session |
| `hooks.json` | Lifecycle hooks: sync on start, update on write, sync on stop/compact |
| `commands/memory-update.md` | `/memory-update` slash command |
| `commands/system-update.md` | `/system-update` slash command |
| `commands/absorb.md` | `/absorb <file/folder>` slash command |

Codex (`.codex/`, installed when detected)

| File | Purpose |
|------|---------|
| `AGENTS.md` | Agent contract with knowledge layer instructions |

⚡ Session Lifecycle

Hooks fire automatically -- zero manual intervention:

| Event | Claude Code | Cursor | What runs |
|-------|-------------|--------|-----------|
| Session start | `SessionStart` | `session-start` | `agent-knowledge sync` |
| File saved | -- | `post-write` | `agent-knowledge update` |
| Task complete | `Stop` | `stop` | `agent-knowledge sync` |
| Context compaction | `PreCompact` | `preCompact` | `agent-knowledge sync` |

The agent reads STATUS.md and Memory/MEMORY.md at the start of every session, with no prompting required.
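On the Claude Code side, this wiring is plain JSON in `.claude/settings.json`. A minimal sketch of what such a hooks block can look like, following Claude Code's documented hooks schema (the exact file `init` writes may differ):

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "agent-knowledge sync" }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "agent-knowledge sync" }] }
    ],
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "agent-knowledge sync" }] }
    ]
  }
}
```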

💬 Slash Commands

These are how the team writes to the logbook. Both work in Claude Code and Cursor -- init installed them.

| Command | When to use it |
|---------|----------------|
| `/memory-update` | End of session, before logging off. The agent reviews what happened, writes stable facts into `Memory/`, and summarizes changes. This is the team handoff -- the next developer (or session) gets it for free. |
| `/system-update` | After upgrading project-bedrock. Refreshes hooks, rules, and commands. Purely infrastructure -- never touches knowledge content. |

A developer should never finish a session without running /memory-update. It's the equivalent of a daily standup writeup -- short, factual, and always there for the next person.

🩺 Integration Health

agent-knowledge doctor

Reports whether all integration files are installed and current. If anything is stale or missing, doctor tells you exactly what to run.


🔮 Obsidian-Ready

Each project's ./agent-knowledge/ is a valid Obsidian vault on its own. But the real payoff is ~/agent-os/projects/: every project you've ever run init in is registered there. Open that folder in Obsidian and you have a unified vault across all your teams' projects -- backlinks, graph view, and full-text search spanning every codebase you manage.

One window. Every team.

agent-knowledge export-canvas
# produces: agent-knowledge/Outputs/knowledge-export.canvas

Works great with Obsidian. Works without it too.
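The `.canvas` output uses the open JSON Canvas format that Obsidian reads natively. A minimal hand-written example of that format (node contents here are invented for illustration; the real export will differ):

```json
{
  "nodes": [
    { "id": "mem",  "type": "text", "text": "Memory/MEMORY.md", "x": 0,   "y": 0, "width": 260, "height": 60 },
    { "id": "hist", "type": "text", "text": "History/",         "x": 320, "y": 0, "width": 260, "height": 60 }
  ],
  "edges": [
    { "id": "e1", "fromNode": "mem", "toNode": "hist" }
  ]
}
```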


🛠️ Commands

| Command | What it does |
|---------|--------------|
| `init` | Set up a project -- one command, no arguments |
| `sync` | Full sync: memory, history, git evidence, index |
| `ship` | Validate + sync + commit + push |
| `view` | Build site and open in browser |
| `doctor` | Validate setup, integration health, note staleness |
All commands

absorb · search · export-html · export-canvas · clean-import · refresh-system · backfill-history · compact · migrate-to-local · init --external

All write commands support --dry-run and --json. Run agent-knowledge --help for the full list.

