
Shared memory foundation for AI development teams


project-bedrock cover

project-bedrock: A team lead for your AI agents.

Every session starts with context.

Every important decision leaves a trail.

Every session leaves the project smarter.

robotaitai

PyPI Claude Code Cursor Codex License: MIT

project-bedrock demo

AI can write code fast.

What it does not do well by default is leave behind a clear, shared understanding of the project.

Decisions disappear into chat history.
Architecture gets rediscovered.
New sessions start from zero.
And the next developer, human or AI, has to figure out again what changed, where, and why.

Project Bedrock gives every repo a shared memory layer for humans and AI developers.

It works like the operating discipline of a strong team lead:

  • every session starts with context
  • every important change leaves a trail
  • stable knowledge gets written down where the next developer can find it
  • the project becomes easier to understand over time, not harder

With one command, your project gets:

  • structured memory for architecture, decisions, conventions, and history
  • project-local integration for Claude Code and Cursor
  • lightweight git-friendly markdown that lives with the repo
  • HTML, graph, and Obsidian-ready views of what the project knows

Under the hood, it is just markdown files and a CLI.
No database. No server. No hosted backend. No black box.

The result: your AI developers stop behaving like disconnected sessions, and start behaving more like a team.

📦 Install

pip install project-bedrock

PyPI: project-bedrock  ·  CLI: bedrock  ·  alias: agent-knowledge (deprecated)


🚀 Quick Start

cd your-project
bedrock init

That's it. Open the project in Claude Code or Cursor and the agent has persistent memory automatically -- no manual prompting, no config, no setup.

What init does in one shot

| Step | What happens |
| --- | --- |
| 1 | Creates ./agent-knowledge/ as a real directory inside the repo (git-tracked) |
| 2 | Registers the project in ~/agent-os/projects/&lt;slug&gt;/ so every project shows up in one place -- open it in Obsidian for a unified cross-project vault |
| 3 | Adds noisy subfolders (Evidence/raw/, Outputs/site/, ...) to .gitignore automatically |
| 4 | Installs project-local integration for Claude Code and Cursor |
| 5 | Detects Codex and installs its bridge files if present |
| 6 | Bootstraps the memory tree and marks onboarding as pending |
| 7 | Imports repo history into Evidence/ and backfills lightweight history from git |

💾 Storage Modes

By default, knowledge lives inside the repo (git-tracked). Curated knowledge is committed normally; noisy subfolders are gitignored.

# Default: in-repo (recommended)
bedrock init

# External: knowledge outside the repo (not committed)
bedrock init --external

# Convert external -> in-repo later
bedrock migrate-to-local

🧠 How It Works

Think of the vault as your team's shared notebook. Casual scribbles don't get mistaken for confirmed facts, and you can always tell what is canon vs. chatter.

Knowledge Layers

| Folder | What goes here | Canon? |
| --- | --- | --- |
| 📘 Memory/ | Decisions, conventions, architecture, gotchas -- what you'd tell a new hire | Yes |
| 📅 History/ | What happened and when -- releases, milestones, a dated trail | Yes |
| 📎 Evidence/ | Raw imports: docs, ADRs, PRs, screenshots -- captured context | No |
| 📊 Outputs/ | Generated views: HTML site, search index, knowledge map | No |

The rule: only Memory/ and History/ are truth. Nothing imported, captured, or generated is ever treated as canon on its own. A developer has to consciously promote something into Memory/ for it to count.
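The canon rule is simple enough to state in a few lines of code. A sketch of the check (the helper is illustrative, not part of bedrock's API -- the folder names come straight from the table above):

```python
from pathlib import Path

# Only these top-level vault folders count as canon.
CANON_DIRS = {"Memory", "History"}

def is_canon(vault_path: str) -> bool:
    """True if a vault-relative path lives under a canon folder."""
    parts = Path(vault_path).parts
    return bool(parts) and parts[0] in CANON_DIRS
```

Anything under Evidence/ or Outputs/ fails the check by construction; promotion into canon means physically moving content under Memory/ or History/.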


🔌 Project-Local Integration

The project carries everything it needs. Both Claude Code and Cursor get full integration installed automatically -- hooks, runtime contracts, and slash commands. No global config.

Claude Code -- .claude/

| File | Purpose |
| --- | --- |
| settings.json | Lifecycle hooks: sync on SessionStart, Stop, PreCompact |
| CLAUDE.md | Runtime contract: knowledge layers, session protocol, onboarding |
| commands/memory-update.md | /memory-update slash command |
| commands/system-update.md | /system-update slash command |
| commands/absorb.md | /absorb &lt;file/folder&gt; slash command |

Cursor -- .cursor/

| File | Purpose |
| --- | --- |
| rules/agent-knowledge.mdc | Always-on rule: loads memory context on every session |
| hooks.json | Lifecycle hooks: sync on start, update on write, sync on stop/compact |
| commands/memory-update.md | /memory-update slash command |
| commands/system-update.md | /system-update slash command |
| commands/absorb.md | /absorb &lt;file/folder&gt; slash command |

Codex -- .codex/ (installed when detected)

| File | Purpose |
| --- | --- |
| AGENTS.md | Agent contract with knowledge layer instructions |

⚡ Session Lifecycle

Hooks fire automatically -- zero manual intervention:

| Event | Claude Code | Cursor | What runs |
| --- | --- | --- | --- |
| Session start | SessionStart | session-start | bedrock sync |
| File saved | -- | post-write | bedrock update |
| Task complete | Stop | stop | bedrock sync |
| Context compaction | PreCompact | preCompact | bedrock sync |

The agent reads STATUS.md and Memory/MEMORY.md at the start of every session, with no prompting required.
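For a sense of what the Claude Code wiring looks like, here is a minimal sketch in Claude Code's settings.json hooks schema, mapping the events above to bedrock sync. This is an illustration of the shape, not the actual file init installs, which may carry more configuration:

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "bedrock sync" }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "bedrock sync" }] }
    ],
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "bedrock sync" }] }
    ]
  }
}
```

Because the hooks live in the project-local .claude/settings.json, they travel with the repo and fire for anyone who opens it.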

💬 Slash Commands

These are how the team writes to the logbook. Both work in Claude Code and Cursor -- init installed them.

| Command | When to use it |
| --- | --- |
| /memory-update | End of session, before logging off. The agent reviews what happened, writes stable facts into Memory/, and summarizes changes. This is the team handoff -- the next developer (or session) gets it for free. |
| /system-update | After upgrading project-bedrock. Refreshes hooks, rules, commands. Purely infrastructure -- never touches knowledge content. |

A developer should never finish a session without running /memory-update. It's the equivalent of a daily standup writeup -- short, factual, and always there for the next person.

🩺 Integration Health

bedrock doctor

Reports whether all integration files are installed and current. If anything is stale or missing, doctor tells you exactly what to run.


🔮 Obsidian-Ready

Each project's ./agent-knowledge/ is a valid Obsidian vault on its own. But the real payoff is ~/agent-os/projects/: every project you've ever run init in is registered there. Open that folder in Obsidian and you have a unified vault across all your teams' projects -- backlinks, graph view, and full-text search spanning every codebase you manage.

One window. Every team.

bedrock export-canvas
# produces: agent-knowledge/Outputs/knowledge-export.canvas

Obsidian is optional. Works without it too.
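Obsidian .canvas files use the open JSON Canvas format, which is plain JSON you can generate or inspect yourself. A minimal sketch of such a document with two nodes and one edge (illustrative only -- bedrock's actual export will differ in content and layout):

```python
import json
import tempfile
from pathlib import Path

# A minimal JSON Canvas document: text nodes plus an edge between them.
canvas = {
    "nodes": [
        {"id": "mem", "type": "text", "text": "Memory/",
         "x": 0, "y": 0, "width": 200, "height": 60},
        {"id": "hist", "type": "text", "text": "History/",
         "x": 300, "y": 0, "width": 200, "height": 60},
    ],
    "edges": [{"id": "e1", "fromNode": "mem", "toNode": "hist"}],
}

out = Path(tempfile.mkdtemp()) / "knowledge-export.canvas"
out.write_text(json.dumps(canvas, indent=2))
```

Opening such a file in Obsidian renders the nodes on an infinite canvas; because it is plain JSON, it also diffs cleanly in git.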


🛠️ Commands

| Command | What it does |
| --- | --- |
| init | Set up a project -- one command, no arguments |
| sync | Full sync: memory, history, git evidence, index |
| ship | Validate + sync + commit + push |
| view | Build site and open in browser |
| doctor | Validate setup, integration health, note staleness |
All commands

absorb · search · export-html · export-canvas · clean-import · refresh-system · backfill-history · compact · migrate-to-local · init --external

All write commands support --dry-run and --json. Run bedrock --help for the full list.


