
Incremental, crash-resilient re-mining wrapper for mempalace — mine only what's new since last run, or a single session, without losing history or creating duplicates.

Project description

mempalace-refresh

Incremental, crash-resilient re-mining wrapper for mempalace.

Mempalace's built-in mine command is skip-if-filed by design: once a file has any drawer in the palace, mine never revisits it. That's correct for static files, but wrong for live-appending Claude Code session logs that grow with every message, across compactions, for days.

mempalace-refresh makes mempalace properly incremental:

  • Only mines what's new since the last run (per-file mtime tracking)
  • Subprocess per file — a ChromaDB segfault on file N doesn't nuke the batch
  • Additive — upsert semantics + stable drawer IDs mean re-mining never deletes or duplicates; old chunks no-op, new chunks are appended
  • source_file metadata stays correct — points at the real .jsonl, not a tmp path
  • Targeted — mine just one session with ONLY <uuid>
  • Fail-loud on API drift — if a mempalace update renames what we monkey-patch, the script exits 99 with a clear message
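
The fail-loud guard in the last bullet can be sketched as follows. This is a minimal illustration, not the package's actual code; the module and attribute names are the ones this README says get monkey-patched, and the helper functions are hypothetical:

```python
# Sketch of a fail-loud API-drift guard (hypothetical helpers; the
# patch-target names come from this README's "How it works" section).
import importlib
import sys

PATCH_TARGETS = [
    ("mempalace.palace", "file_already_mined"),
    ("mempalace.convo_miner", "scan_convos"),
]

def check_api(targets):
    """Return the (module, attr) pairs that no longer exist."""
    missing = []
    for mod_name, attr in targets:
        try:
            mod = importlib.import_module(mod_name)
        except ImportError:
            missing.append((mod_name, attr))
            continue
        if not hasattr(mod, attr):
            missing.append((mod_name, attr))
    return missing

def ensure_api_or_die():
    """Exit 99 with a clear message if a patch target was renamed."""
    drifted = check_api(PATCH_TARGETS)
    if drifted:
        for mod_name, attr in drifted:
            print(f"mempalace API drift: {mod_name}.{attr} not found; "
                  f"update mempalace-refresh", file=sys.stderr)
        sys.exit(99)
```

Running a check like this before any patching keeps the failure mode obvious: a distinct exit code rather than silently mining nothing.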

Install

pip install mempalace-refresh

Requires mempalace 3.2.x to be installed.

Usage

mempalace-refresh              # catch up: mine everything new since last run
mempalace-refresh STATUS       # per-file change status
mempalace-refresh ONLY <uuid>  # mine a single session (substring match)
mempalace-refresh RESET        # wipe state (next run re-mines everything)
mempalace-refresh REPAIR       # delegates to `mempalace repair --yes`

State lives at ~/.cache/mempalace-refresh/state.json. Nothing else is stored — the palace itself is mempalace's.
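
Handling that state file might look like this. The location is documented above; the exact schema (a JSON map from file path to last-seen mtime) and the atomic-rewrite pattern are assumptions:

```python
# Sketch of the state file: a JSON map {jsonl_path: mtime}, rewritten
# atomically after each file so a crash never leaves it corrupted.
# The schema is an assumption; only the path is documented.
import json
import os
import tempfile
from pathlib import Path

STATE_PATH = Path.home() / ".cache" / "mempalace-refresh" / "state.json"

def load_state(path=STATE_PATH):
    try:
        return json.loads(Path(path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def commit_state(state, path=STATE_PATH):
    """Atomically replace state.json: write a temp file, then rename."""
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)  # atomic on both POSIX and Windows
```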

Environment

  • PROJECTS_DIR — override ~/.claude/projects/ (Claude Code session logs)
  • MEMPALACE_PALACE — override ~/.mempalace
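
Resolving those two overrides against their documented defaults is a one-liner each; a sketch (the function names are illustrative, not part of the package's API):

```python
# Sketch of resolving the two documented environment overrides.
import os
from pathlib import Path

def projects_dir() -> Path:
    """Where Claude Code session logs live (PROJECTS_DIR override)."""
    return Path(os.environ.get("PROJECTS_DIR", Path.home() / ".claude" / "projects"))

def palace_dir() -> Path:
    """Where the mempalace palace lives (MEMPALACE_PALACE override)."""
    return Path(os.environ.get("MEMPALACE_PALACE", Path.home() / ".mempalace"))
```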

How it works (short)

  1. Each .jsonl under PROJECTS_DIR is tracked by mtime in state.
  2. Changed files are mined one at a time via a fresh Python subprocess.
  3. The subprocess monkey-patches mempalace.palace.file_already_mined to bypass mempalace's skip-check, and mempalace.convo_miner.scan_convos to feed it exactly one file.
  4. Mempalace then scans the real .jsonl, runs its regex-based general_extractor, and upserts drawers. Because drawer IDs are hash(source_file + chunk_index):
    • Chunks that already existed → the upsert writes to the same ID, updating metadata in place (effectively a no-op)
    • Chunks for newly appended content → new IDs → genuinely new drawers
  5. State is committed per file so any later crash loses zero progress.
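
Steps 1, 2, and 5 can be sketched like this. The `_mine_one` module name and the helper functions are hypothetical; the point is the mtime comparison and one fresh interpreter per changed file:

```python
# Sketch of the mtime-tracking loop (steps 1-2 and 5 above).
# mempalace_refresh._mine_one is a hypothetical entry point.
import subprocess
import sys
from pathlib import Path

def changed_files(projects_dir, state):
    """Yield (path, mtime) for .jsonl files that are new or changed."""
    for path in sorted(Path(projects_dir).rglob("*.jsonl")):
        mtime = path.stat().st_mtime
        if state.get(str(path)) != mtime:
            yield path, mtime

def refresh(projects_dir, state, commit_state):
    for path, mtime in changed_files(projects_dir, state):
        # A ChromaDB segfault here kills only this subprocess, not the batch.
        ok = subprocess.run(
            [sys.executable, "-m", "mempalace_refresh._mine_one", str(path)]
        ).returncode == 0
        if ok:
            state[str(path)] = mtime
            commit_state(state)  # committed per file: a crash loses no progress
```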

Result: your palace reflects every session exactly as it would if you'd mined each one from the start, plus incremental additions for all subsequent growth.
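
The additivity claim in step 4 is easy to verify with a toy palace. The hashing scheme below is an illustration of the stable-ID idea, not mempalace's actual one:

```python
# Toy demonstration of why stable drawer IDs make re-mining additive.
# This hash scheme is illustrative, not mempalace's real implementation.
import hashlib

def drawer_id(source_file: str, chunk_index: int) -> str:
    return hashlib.sha256(f"{source_file}:{chunk_index}".encode()).hexdigest()[:16]

def upsert_chunks(palace: dict, source_file: str, chunks: list):
    for i, text in enumerate(chunks):
        # Existing ID -> overwrite in place; new ID -> genuinely new drawer.
        palace[drawer_id(source_file, i)] = text

palace = {}
upsert_chunks(palace, "session.jsonl", ["c0", "c1"])
upsert_chunks(palace, "session.jsonl", ["c0", "c1", "c2"])  # after log growth
assert len(palace) == 3  # no duplicates; exactly one new drawer
```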

License

MIT. See LICENSE.

Download files


Source Distribution

mempalace_refresh-0.2.4.tar.gz (11.5 kB, source)

Built Distribution


mempalace_refresh-0.2.4-py3-none-any.whl (13.5 kB, Python 3)

File details

Details for the file mempalace_refresh-0.2.4.tar.gz.

File metadata

  • Download URL: mempalace_refresh-0.2.4.tar.gz
  • Size: 11.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.13

File hashes

Hashes for mempalace_refresh-0.2.4.tar.gz
Algorithm Hash digest
SHA256 f30c5f79cf658f7191e040959d2bc387a7b9e8096a6cc1d41e1a107f40a9a851
MD5 cd3238ecd3be2b0dfb3d5d991d0a0b9e
BLAKE2b-256 464ea8036423451d34f320bfd879567de6c9ce2fca8c6bffc70c86fc72d7dc46


File details

Details for the file mempalace_refresh-0.2.4-py3-none-any.whl.

File hashes

Hashes for mempalace_refresh-0.2.4-py3-none-any.whl
Algorithm Hash digest
SHA256 bf89770394c6ed08a1e61c405229d5bfbfa202a908ac66cc13e9aa9afb1e18bd
MD5 4587125b798a9d0b5fc37e3993695c77
BLAKE2b-256 ee7ac4f93722a6102087faebc0e4e714b76e325cb892ee63f15b35e8f9d5c068

