
lore

Turn your Claude Code sessions into a LORE.md file your whole repo can read.

Every Claude Code session you run leaves behind a JSONL file in ~/.claude/projects/. Those files contain real knowledge — bug fixes you found, architecture decisions you made, gotchas the AI bumped into. That knowledge dies in your home directory.

lore distills those sessions into a docs/LORE.md file inside the repo itself, so:

  • Knowledge is git-tracked — versioned, diffable, mergeable
  • Knowledge moves with the project — clone the repo, get the context
  • Multiple devs on the same project all feed the same LORE.md
  • New people onboarding read LORE.md first and skip a week of confused poking
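
For a sense of what's in those session files, here's a minimal sketch of reading one. The field names are assumptions about the JSONL shape, not lore's actual parser:

```python
import json
from pathlib import Path

def read_session(path: Path) -> list[dict]:
    """Parse one Claude Code session file: one JSON object per line."""
    entries = []
    for line in path.read_text().splitlines():
        if line.strip():
            entries.append(json.loads(line))
    return entries

# Hypothetical usage — the directory name encodes the repo path:
# for entry in read_session(Path.home() / ".claude/projects/<encoded-repo>/<session-id>.jsonl"):
#     print(entry.get("type"))
```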

Quick start

# install
pipx install claude-lore

# from inside any git repo
lore extract        # reads relevant sessions, appends to docs/LORE.md

That's it. Run it again anytime — it remembers what it's already processed.
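
One plausible way the "remembers" part could work — an illustrative sketch, not lore's actual state format or file location — is a small state file holding the set of already-processed session IDs:

```python
import json
from pathlib import Path

STATE_FILE = Path(".lore-state.json")  # hypothetical location

def load_processed() -> set[str]:
    """Return the set of session IDs already extracted."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def mark_processed(session_ids: set[str]) -> None:
    """Persist the processed-session set."""
    STATE_FILE.write_text(json.dumps(sorted(session_ids)))

def pending(all_ids: list[str]) -> list[str]:
    """Filter down to sessions that still need extraction."""
    done = load_processed()
    return [sid for sid in all_ids if sid not in done]
```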

Auto mode (recommended)

lore install-hook

This adds a Claude Code Stop hook to ~/.claude/settings.json. From then on, every time a Claude Code session ends, lore extracts it in the background and appends to the relevant repo's LORE.md. Set it once, never think about it again.
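
For reference, a Stop hook entry in ~/.claude/settings.json looks roughly like this — the exact command lore registers is an assumption here:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "lore extract --quiet" }
        ]
      }
    ]
  }
}
```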

Uninstall any time with lore uninstall-hook.

How it talks to Claude

Two backends, auto-selected:

  • CLI (default) — shells out to your local claude --print. Free if you have a Claude Code subscription. No API key needed.
  • API — falls back to the Anthropic SDK if claude isn't installed. Set ANTHROPIC_API_KEY in your env.

Force one with --backend cli or --backend api.

What it captures

Not chat transcripts. Not summaries. Specifically:

  • Decisions — "We use X over Y because Z"
  • Gotchas — "If you change A, B will silently break"
  • Patterns — "This codebase tends to do X by Y convention"
  • Workarounds — "Library Q has a bug with R; we work around it via S"
  • Domain knowledge — "User accounts on this system have these states: …"

It deliberately doesn't capture: session blow-by-blow, debugging detours that didn't pan out, generic AI chitchat.
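
A distillation prompt along these lines would produce that shape of output — hypothetical, since lore's real prompt is internal:

```python
CATEGORIES = ["Decisions", "Gotchas", "Patterns", "Workarounds", "Domain knowledge"]

def build_prompt(transcript: str) -> str:
    """Assemble an extraction prompt restricting output to the five categories."""
    headings = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Distill this coding session into durable project knowledge.\n"
        f"Use only these categories:\n{headings}\n"
        "Skip blow-by-blow narration, dead-end debugging, and chitchat.\n\n"
        f"Session transcript:\n{transcript}"
    )
```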

Configuration

Drop a .lore.toml in your repo root:

[output]
path = "docs/LORE.md"          # where the lore file lives
style = "categorized"          # or "chronological"

[extraction]
model = "claude-sonnet-4-6"    # which Claude to use for distillation
since = "30d"                  # only process sessions in last 30 days
backend = "auto"               # "cli" | "api" | "auto"

[filter]
min_messages = 6               # skip tiny sessions
exclude_patterns = ["*test*"]  # skip sessions whose prompts match these

All fields are optional. Defaults are sensible.

Commands

lore extract              # extract from all unprocessed sessions for this repo
lore extract --since 7d   # only sessions in last 7 days
lore extract --session ID # extract just one session by ID
lore extract --quiet      # suppress progress output (writes still print)
lore preview ID           # extract one session, print to stdout, don't write (dry run)
lore status               # show which sessions are processed/pending
lore init                 # create a starter .lore.toml
lore install-hook         # auto-extract on every Claude Code session end
lore uninstall-hook       # remove the auto-extract hook

What real output looks like

Run lore preview against any of your past sessions to see what it'd write. Here's actual output from two of mine — short sessions where I'd debugged a TTS gotcha in a voice-assistant repo:

### Decisions

- `extract_spoken` in `~/axel/axel.py` strips fenced code blocks, inline backtick
  spans, and shell-prompt lines (`$`, `#`, `>`, paths) from the fallback text
  before TTS. Why: the fallback was reading raw terminal output and code aloud
  when no `SAY:` line was present.

### Gotchas

- The `SAY:` convention in Axel replies is load-bearing for TTS quality. If the
  assistant omits `SAY:` lines, `extract_spoken` falls back to the full reply —
  even with code fences. The stripping heuristic catches most cases but isn't a
  substitute for always tagging spoken lines.
- After patching `axel.py`, the daemon must be **reloaded** (not just signaled) —
  the old code stays in memory until the LaunchAgent restarts the process.
  Reload via `launchctl` against `io.tyler.axel.daemon`.

### Domain

  • `~/axel/axel.py` is the voice loop daemon. Pipeline per turn:
    mic → wake word → record → Whisper → agent → `extract_spoken` → `say` → speakers.
- The Axel daemon registers three LaunchAgent labels:
  `application.io.tyler.axel.<pid>.<pid>` (session),
  `io.tyler.axel.daemon` (main background loop),
  `io.tyler.axel.menubar` (menu-bar UI). Target `io.tyler.axel.daemon` for reload.

That's two short bug-fix sessions distilled — versioned in the repo, future-you (or a teammate) reads LORE.md and skips the rediscovery.

How does this differ from X?

| Tool | What it does | What lore does differently |
| --- | --- | --- |
| crune | Semantic graph of all your sessions, surfaces skills | Personal vault, not in-repo |
| obsidian-second-brain | Sync notes to Obsidian | Personal vault, not in-repo |
| claude-conversation-extractor | Dump JSONL → Markdown | Raw transcripts, not distilled knowledge |
| claude-history | Search/browse sessions in TUI | Read/search, doesn't write |

lore's wedge: in the repo, distilled to what matters, auto-maintained.

License

MIT

Status

v0.1 — works, ship it. Built by @inspirationnation635-sketch with Axel.
