
Lossless episodic memory for Claude Code and VS Code Copilot


wormlens

Kill pancake brain. Episodic memory handoff between agent sessions -- no compact required.

Pluggable chat history extraction for Claude Code and VS Code Copilot. Reads raw session logs and produces token-efficient, addressable extracts that agents can consume as context -- no more lossy compacts, no more 5-minute waits, no more drilling the wrong wall.

Has this ever happened to you? You're happily coding with your companion agent, lining 'em up and knocking 'em down. Then -- BAM! Blindsided by compact. Agent gets pancake brain. You get an aneurysm staring at a spinner for 5 minutes. And then, it all goes oh so very pear shaped. 🍐

Wormlens skips the compact entirely. Mechanically extract the prior session, hand it to the next one, keep going.

  • Extract, not compact. Compact is for garbage. Extract is for nectar.
  • Instant -- extracts in milliseconds, not minutes.
  • Lossless -- user/assistant text preserved verbatim by default; thinking, tool calls, and bash output opt-in via flags. Nothing is paraphrased or reduced by a model.
  • Addressable -- turn numbers map to source lines for random-access retrieval of any prior turn.
  • Historical -- chain recalls across sessions. Today's recall can include yesterday's, which includes the one before. Walk back as far as you need.
  • Agent-driven -- the agent decides whether to recall, what to recall, and when to hand off. Wormlens injects authoritative context_used_pct and time into every turn (~10 tokens) so the agent has the telemetry to make those calls.
  • Unified -- list, grep, search, summarize across providers (Claude Code, VS Code Copilot now; pluggable for others).

Why it's cheap

Native compact feeds the entire session through the model to generate a summary, paying full output-token rate at whichever tier the session is running on (Opus session compacts on Opus). Wormlens extraction is mechanical: zero model tokens.

Compact also reserves a chunk of the context window for the summary itself, leaving the active agent fewer tokens to actually work with. After a wormlens recall, the new session sits at ~6% of the window used. Compact sits at ~20% summary residue + ~25% reserved for the next auto-compact = ~45% committed before any work. Working room: ~94% (wormlens) vs ~55% (compact).
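The working-room arithmetic above, spelled out (same swag-grade figures, 200K window assumed):

```python
# Swag-grade working-room arithmetic from the paragraph above.
def working_room_pct(committed_pct: float) -> float:
    """Percent of the context window still free for real work."""
    return 100.0 - committed_pct

wormlens_room = working_room_pct(6)       # ~6% used after a wormlens recall
compact_room = working_room_pct(20 + 25)  # ~20% residue + ~25% reserved
print(wormlens_room, compact_room)        # 94.0 55.0
```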

There are five cost layers (inference, prefill, degradation laundering, waste tokens in the danger zone, and developer flow state). Wormlens wins all five. The flow-state layer alone runs ~60x cheaper -- a senior developer at $100/hour costs roughly $100/session in compact-induced block + recovery vs ~$1.67/session of clean handoff.

See docs/token-economics.md for the five-layer accounting with current Anthropic pricing, and docs/agent-agency.md for the design philosophy.

* Percentages and dollar figures are swag-grade pending formal measurement. Numbers come from observed behavior on a 200K Opus context window; your workflow may vary.

Installation

pip install .
wl --help

This installs the wl command via the entry point defined in pyproject.toml.

Usage

# Installed command
wl [INPUT...] [options]

# Module invocation
python -m wormlens [INPUT...] [options]

# Zipapp (single-file distributable)
python wormlens.pyz [INPUT...] [options]

Quick Start

wl --list-sessions                   # list CC sessions (start here)
wl --list-sessions --source vscode   # list VS Code sessions
wl --recall --session <UUID>         # extract one session for agent recall
wl --session <UUID>                  # extract specific CC session
wl --session abc-123,def-456         # extract multiple sessions
wl session.jsonl                     # extract from explicit file (auto-detect source)
wl --source vscode --session <UUID>  # explicit VS Code session
wl --full --session <UUID>           # full session (ignore compact boundaries)
wl -t 20 --session <UUID>            # last 20 messages of a session
wl --index 5-10 --session <UUID>     # extract turns 5 through 10
wl --index 42 --session <UUID>       # extract a single turn
wl --grep "pattern"                  # search across all sessions
wl --format jsonl --all --session <UUID> -o full.jsonl
wl *.jsonl --merge -o merged.md      # merge explicit JSONL files
wl --summary-stats                   # show session statistics

Bare wl (no args) prints help. For extraction, always pass --session <UUID> -- use --list-sessions to discover IDs.

Sources

| Source | Flag | S | Auto-detect | Session Location |
| --- | --- | --- | --- | --- |
| Claude Code | --source cc | C | type + sessionId + timestamp keys | $CLAUDE_CONFIG_DIR/projects/**/*.jsonl |
| VS Code Copilot | --source vscode | V | kind + v keys | %APPDATA%/Code/User/workspaceStorage/*/chatSessions/*.jsonl |
| WormLens extract | --source wl | W | <wormlens-extract> or <wl-recall-caveat> wrapper | File input only (no discovery) |

Auto-detection examines the first record in the file. --list-sessions scans all providers and shows a one-character source column (S). Timestamps are UTC.
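A minimal sketch of that first-record check, per the key sets in the table above (illustrative only; the real detection lives in providers/):

```python
import json

def detect_source(path: str) -> str:
    """Auto-detect the provider from the first JSONL record (sketch)."""
    with open(path, encoding="utf-8") as f:
        first = json.loads(f.readline())
    if {"type", "sessionId", "timestamp"} <= first.keys():
        return "cc"
    if {"kind", "v"} <= first.keys():
        return "vscode"
    raise ValueError("unrecognized session format")
```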

Filtering

By default, only user and assistant messages are included. Add flags to include more:

| Flag | Content |
| --- | --- |
| --thinking | Reasoning/thinking blocks |
| --tools | Tool calls and results |
| --code-edits | Code edit groups (VS Code) |
| --hooks | Hook events (CC) |
| --bash | Bash output (CC) |
| --teammates | Teammate messages (CC) |
| --refs | Inline references (VS Code) |
| --system-msgs | System-injected messages (CC: isMeta, local-command, etc.) |
| --all | Everything |

Output Formats

| Format | Flag | Notes |
| --- | --- | --- |
| Chat | --format chat (default) | Token-efficient XML-style turn wrappers, agent-optimized |
| Markdown | --format md | Structured with headers, turn numbers, metadata |
| Plain text | --format txt | Session/role markers, no formatting |
| JSONL | --format jsonl | One JSON record per message |

Chat format

The default. Designed for LLM context injection -- maximum signal, minimum chrome:

<session id="4a97ef42-beb2-41ba-81e1-fdc3b470b58b" source="vscode" date="2026-04-30" title="Parquet to CSV">
<!-- Sequential turn numbers. Source: C:\...\4a97ef42-....jsonl -->
<user turn=1>Write a python script to convert parquet files to CSV
<assistant turn=1>pyarrow is available. Script created at `parquet2csv.py`.
<user turn=2>Is there a way to do sql-like where clause?
<assistant turn=2>Both are doable. For (b) it's trivial with pyarrow column selection.
</session>

Turn numbering: CC uses JSONL line numbers (turn=80 -> line 80 of source file for full-fidelity retrieval). VS Code uses sequential numbers.

Escaping: Only at start-of-line -- \ -> \\, < -> \<. Mid-line < is untouched.
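As a sketch, the start-of-line escaping rule is just:

```python
def escape_line(line: str) -> str:
    """Escape only at start-of-line: \\ -> \\\\ and < -> \\<.
    Mid-line occurrences are left untouched (sketch of the rule above)."""
    if line.startswith("\\"):
        return "\\\\" + line[1:]
    if line.startswith("<"):
        return "\\<" + line[1:]
    return line
```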

Record Selection

| Flag | Effect |
| --- | --- |
| -n N | Limit to N output records |
| --rev | Reverse: take last N (requires -n) |
| -t N / --tail N | Last N records (shorthand for --rev -n N) |
| --newest-first | Reverse chronological order |
| --index SPEC | Subaddress retrieval: extract specific turns or ranges (e.g. 5, 5-10, 5,8,12) |
| --session ID[,ID] | Extract specific session(s) by UUID |
| --session-id ID | Filter to a specific sessionId within a file |
| --min-turns N | Minimum user+assistant turns (default: 2 for --list-sessions) |
| --min-size SIZE | Minimum file size, e.g. 10KB, 1MB |
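A minimal sketch of how an --index SPEC like 5, 5-10, or 5,8,12 can be parsed (illustrative, not the actual CLI code):

```python
def parse_index_spec(spec: str) -> list[int]:
    """Expand an --index SPEC into a list of turn numbers (sketch)."""
    out: list[int] = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-", 1)
            out.extend(range(int(lo), int(hi) + 1))   # inclusive range
        else:
            out.append(int(part))
    return out
```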

Session Noise Filtering

--list-sessions defaults to --min-turns 2, hiding throwaway sessions (someone starts Claude, checks something, exits). Override with --min-turns 0 to see everything, or increase the threshold:

wl --list-sessions --min-turns 5         # substantial sessions only
wl --list-sessions --min-size 100KB      # filter by file size
wl --list-sessions --min-turns 0          # show all including noise

System-Injected Messages

Claude Code sends certain messages as user role that are actually system-injected: local command output (<local-command-stdout>), command caveats, slash commands, etc. These are detected via the isMeta record flag and known XML tag patterns, and tagged as system_inject internally.

By default they are filtered out. Use --system-msgs (or --all) to include them.
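A sketch of that tagging rule; the record field names beyond isMeta and the tag list are assumptions for illustration (the real list lives in the Claude Code provider):

```python
import re

# Illustrative subset of known system-inject tag patterns.
_SYSTEM_TAG = re.compile(r"^<(local-command-stdout|command-name|command-message)\b")

def classify(record: dict) -> str:
    """Tag user-role records that are actually system-injected (sketch)."""
    if record.get("isMeta") or _SYSTEM_TAG.match(record.get("text", "")):
        return "system_inject"
    return record.get("role", "user")
```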

Recovery Mode (Claude Code)

wl --recall --session <UUID> operates in recovery mode:

  1. Finds the last compact_boundary marker in the session file
  2. Extracts only messages after that point
  3. Wraps the output in <wl-recall-caveat> tags so the consuming agent recognizes it as recovered episodic memory, not live conversation

Use --full to extract the whole session file regardless of compact boundaries.
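Steps 1 and 2 can be sketched as follows; the marker's exact record shape is an assumption for illustration:

```python
import json

def recall_slice(jsonl_lines: list[str]) -> list[str]:
    """Keep only the records after the last compact_boundary marker
    (sketch of recovery mode; marker field name assumed)."""
    last = -1
    for i, line in enumerate(jsonl_lines):
        if json.loads(line).get("subtype") == "compact_boundary":
            last = i
    return jsonl_lines[last + 1:]
```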

VS Code State Reconstruction

VS Code Copilot stores chat sessions as an incremental patch stream (kind 0=snapshot, 1=set, 2=splice). The backend replays the full patch sequence to reconstruct final session state before extracting messages.
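A minimal sketch of that replay, with record field names assumed for illustration:

```python
def replay(patches: list[dict]) -> dict:
    """Replay a patch stream (kind 0=snapshot, 1=set, 2=splice) into
    final session state. Sketch only; field names are assumptions."""
    state: dict = {}
    for p in patches:
        if p["kind"] == 0:        # snapshot: replace state wholesale
            state = dict(p["v"])
        elif p["kind"] == 1:      # set: assign a single key
            state[p["k"]] = p["v"]
        elif p["kind"] == 2:      # splice: insert/delete within a list
            lst = state[p["k"]]
            lst[p["i"]:p["i"] + p.get("d", 0)] = p.get("v", [])
    return state
```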

Searching Chat History

wl --grep "pattern"                      # search all sessions, all sources
wl --grep "pattern" -i                   # case-insensitive
wl --grep "pattern" -B 2 -A 2           # with context messages
wl --grep "pattern" --source cc          # search specific source

Building the Zipapp

python3 build_pyz.py
# Output: .copilot/wormlens.pyz

Produces a single-file wormlens.pyz that can be distributed and run with python wormlens.pyz. No dependencies beyond the standard library.
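For reference, the stdlib zipapp module produces this kind of single-file archive; a minimal sketch (build_pyz.py is the project's actual builder):

```python
import zipapp

def build_single_file(src_dir: str, out: str = "wormlens.pyz") -> None:
    """Pack a directory containing __main__.py into a runnable .pyz
    (stdlib-only analogue of what build_pyz.py produces)."""
    zipapp.create_archive(src_dir, target=out,
                          interpreter="/usr/bin/env python3")
```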

Architecture

The repo uses a flat layout: the project root is the wormlens package (via [tool.setuptools.package-dir] mapping "wormlens" = "."). Modules like cli.py, pipeline.py, etc. live at the project root, not in a nested wormlens/ subdirectory.

wormlens/                  (project root = python package)
  __init__.py              # Package version
  __main__.py              # python -m entry point
  cli.py                   # Argument parsing, orchestration
  models.py                # ChatMessage, ChatSession, FilterOpts
  pipeline.py              # discover -> parse -> filter -> sort
  formatters.py            # md/txt/jsonl output
  build_pyz.py             # Zipapp builder
  skill.md                 # Skill manifest (also bundled in package)
  pyproject.toml
  README.md
  LICENSE
  AGENTS.md                # Instructions for AI agents working in this repo
  CHANGELOG.md
  tests/                   # pytest suite (see "Running tests")
  harness/
    __init__.py
    wormlens.py            # Outer loop (wl launch)
    wl-hook.py             # StatusLine + context injection hook
  providers/
    __init__.py            # Auto-discovery registry
    _base.py               # Provider ABC
    claude_code/parser.py
    vscode_copilot/parser.py
    wl_extract/parser.py

Diagnostics

wl --doctor

Checks provider availability, session directory paths, file permissions, and configuration health. Run this first when something is not working.

Session Continuity (Outer Loop)

wl launch runs the wormlens harness -- an outer loop that manages CC's lifecycle for infinite session continuity. When the agent reaches context limits, the harness restarts CC with episodic recall from the prior session.

wl launch                                # interactive, no initial prompt
wl launch --prompt "build a redis server" # start with a task
wl launch --ctx-limit 85 --hard-kill 95  # tighter thresholds
wl launch --grace 30                     # shorter grace period before kill
wl launch --project-dir /path/to/repo    # explicit project dir

| Flag | Default | Effect |
| --- | --- | --- |
| --prompt | none | Initial task prompt for the CC session |
| --ctx-limit | 90 | Context % at which URGENT is injected |
| --hard-kill | 99 | Context % at which to force kill |
| --grace | 60 | Seconds after URGENT before forced handoff |
| --poll-interval | 2.0 | Poll interval for context/handoff checks |
| --project-dir | cwd | Project directory for trust dialog |

The harness requires the wormlens skill to be installed (wl --install-skill) so that context tracking hooks are active.

For debugging, the harness can also be run standalone:

python3 -m wormlens.harness.wormlens --prompt "echo hi"

Running tests

pip install -e .[dev]
pytest

The suite (tests/) covers CLI argparse, JSONL parser edge cases, formatter output shape, settings.json merge/unmerge, skill install/uninstall, recall and handoff gating, checkpoint extraction, and the .wl round-trip. All fixtures are synthetic ASCII files under tests/fixtures/ and tmp_path -- nothing touches your real ~/.claude tree.

Changelog

See CHANGELOG.md for release notes.

See also

  • Design notes:

    • docs/agent-agency.md -- why agent-driven memory wins; how telemetry + tools beat framework-curated context.
    • docs/token-economics.md -- five-layer cost analysis of compact vs. wormlens with current Anthropic pricing.
  • spad-mcp -- the autonomous, agent-controlled SSH harness we use for wormlens development. Two roles in the dev cycle:

    • Dev / test / debug: specs in, fully tested and ready-to-ship code out. An agent installs wormlens, verifies the skill loads and hooks fire, and exercises the outer-loop restart on handoff -- including the Claude-extension scaffolding (skill packaging, hook wiring, settings.json merge). Bugs kick back to a human; clean runs ship. Generalizes to other agent tools beyond CC.
    • Benchmarks: agent-as-proctor + agent-as-testee, fully autonomous across the comparison matrix (compact-only, wl+compact, wl-only, fresh-start). Real workloads, real numbers, no wetware.

    Despite the urgency to ship wormlens, the debug cadence was too slow with humans in the loop, and fair, consistent benchmarks were impractical without an autonomous runner. So we paused wormlens and pivoted to spad-mcp -- we needed it to properly test and finish wormlens at a reasonable pace. Dogfooding: spad runs long unattended sessions; wormlens keeps them coherent.

Known Limitations

  • VS Code splice reconstruction handles inserts and deletes, but the d (deleteCount) key format is inferred from VS Code's source -- edge cases may exist.
