
Persistent memory, self-evolving playbooks, and sandboxed REPL for Claude Code


CCR — Claude Context Reducer

Without CCR: "Can you remind me what we decided about the dataset preprocessing last week?"
With CCR: Claude already knows — months of decisions, experiments, and code reasoning recalled instantly.

CCR gives Claude Code persistent memory, self-evolving strategy playbooks, and a sandboxed Python REPL — no API keys needed. Works with Claude Max ($20/mo).

New to CCR? See the Student & Researcher Quickstart — setup in 3 minutes, before/after examples, PhD workflow guide.

Quick Start

# 1. Install
pip install ccr-memory  # or: pip install -e . (from source)

# 2. Configure hooks for automatic memory management
ccr install

# 3. Open Claude Code from your project directory — CCR handles the rest
cd /your/project && claude

That's it. Claude will automatically load your project memory on every session start and auto-commit progress when you finish.

What CCR Does

CCR is an MCP server that gives Claude Code three capabilities it doesn't have natively:

  1. Persistent Memory (GCC) — Git-style version-controlled memory that survives across sessions. Branch, merge, and search your project's decision history.
  2. Self-Evolving Playbooks (ACE) — Strategy bullets that track what works and what doesn't, with temporal decay and automatic pruning.
  3. Sandboxed REPL (RLM) — An isolated Python environment for iterative analysis, with repo search and structured output.

All tools run as pure logic with zero LLM calls. Claude Code itself provides the reasoning.

For Researchers and Students

On Claude Max ($20/mo), you're not paying per token — you're paying for continuity. CCR makes that continuity real: Claude carries your experiment history, design decisions, and open questions forward across every session.

A 3-month project means ~90 sessions. Without CCR, each starts from scratch. With CCR, each starts where the last left off.

Researcher-specific features:

  • gcc_commit(experiment={"metrics": {"val_loss": 0.23}}) — log ML runs with metrics and hypothesis
  • gcc_experiments(metric_filter={"val_loss": {"lt": 0.3}}) — find all runs meeting a metric threshold
  • gcc_discuss(topic=..., decision=..., rationale=...) — persistent decision log for architecture choices
  • gcc_search("preprocessing decision") — find any past decision across commits, discussions, and sessions
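The `metric_filter` semantics above can be sketched in plain Python. This is a hypothetical illustration of how a filter like `{"val_loss": {"lt": 0.3}}` could select runs — only `lt` appears in the docs; the other operators and the `matches` helper are assumptions, not CCR's internal implementation:

```python
# Hypothetical sketch of metric filtering as in gcc_experiments.
# Only "lt" is documented above; "gt"/"lte"/"gte" are assumed extensions.
OPS = {
    "lt": lambda a, b: a < b,
    "gt": lambda a, b: a > b,
    "lte": lambda a, b: a <= b,
    "gte": lambda a, b: a >= b,
}

def matches(metrics: dict, metric_filter: dict) -> bool:
    """Return True if every filtered metric satisfies all its conditions."""
    for name, conds in metric_filter.items():
        if name not in metrics:
            return False
        for op, threshold in conds.items():
            if not OPS[op](metrics[name], threshold):
                return False
    return True

runs = [
    {"id": "run-1", "metrics": {"val_loss": 0.23}},
    {"id": "run-2", "metrics": {"val_loss": 0.41}},
]
good = [r["id"] for r in runs if matches(r["metrics"], {"val_loss": {"lt": 0.3}})]
print(good)  # ['run-1']
```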

See the Student & Researcher Quickstart for a full PhD workflow guide.

Manual Setup (without ccr install)

Add to your project's .mcp.json:

{
  "mcpServers": {
    "ccr": {
      "command": ".venv/bin/python",
      "args": ["-m", "ccr.mcp_server", "--project", "."]
    }
  }
}

Then in your Claude Code session, call gcc_context(level=2) to load memory and gcc_commit after completing tasks.

Features

Persistent Memory (GCC)

  • Commits: Save what you did, why, files changed, and what's next
  • Branches: Isolate experiments with gcc_branch, merge when decided
  • Context levels: 5 levels of detail retrieval (summary → full history)
  • Pattern buffer: Transferable skills extracted from commits, with quality scoring
  • Cross-linking: Automatic bidirectional links between related commits
  • Semantic search: Find past work by meaning, not just keywords (ONNX embeddings)
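The bidirectional cross-linking above can be sketched with a tiny link store — linking commit A to commit B also records the back-link from B to A. The data structure here is an illustration, not CCR's on-disk format:

```python
from collections import defaultdict

# Hypothetical sketch of bidirectional commit cross-links:
# adding A -> B automatically records the back-link B -> A.
links: defaultdict[str, set] = defaultdict(set)

def link(commit_a: str, commit_b: str) -> None:
    links[commit_a].add(commit_b)
    links[commit_b].add(commit_a)

link("c1", "c3")
link("c2", "c3")
print(sorted(links["c3"]))  # ['c1', 'c2']
```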

Self-Evolving Playbooks (ACE)

  • Strategy bullets: "When X, do Y" rules with helpful/harmful counters
  • Temporal decay: Unused strategies fade (30 days → 21% weight, 90 days → 1%)
  • Two-tier scope: Global strategies (all projects) + project-specific strategies
  • Failure lessons: Structured analysis of what went wrong and prevention principles
  • Schema evolution: The playbook structure itself evolves based on usage metrics
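The decay figures above (30 days → 21% weight, 90 days → 1%) are consistent with simple exponential decay; the exact formula CCR uses is not stated here, but a sketch fitted to those two numbers looks like this:

```python
import math

# Exponential decay fitted to the documented figures:
# weight(30 days) ≈ 0.21 implies a rate of -ln(0.21)/30 per day,
# which also gives weight(90 days) = 0.21**3 ≈ 0.009 (~1%).
RATE = -math.log(0.21) / 30  # per day, derived from the 30-day figure

def strategy_weight(days_unused: float) -> float:
    return math.exp(-RATE * days_unused)

print(round(strategy_weight(30), 2))   # 0.21
print(round(strategy_weight(90), 4))   # 0.0093
```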

Sandboxed REPL (RLM)

  • Kernel-level isolation: macOS Seatbelt sandbox (deny-default + allowlist)
  • Module allowlist: Only safe standard library modules permitted
  • Repo tools: search_repo(), get_file(), estimate_tokens() available in REPL
  • Structured output: FINAL_VAR termination pattern for clean results
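A module allowlist like the one RLM describes reduces to a membership check on the top-level module name. The specific allowed set and the enforcement mechanism here are illustrative assumptions, not CCR's actual list:

```python
# Hypothetical sketch of an import allowlist check; the real allowlist
# and its kernel-level enforcement are CCR internals.
ALLOWED = {"math", "json", "re", "itertools", "statistics", "collections"}

def rejected_imports(names: list[str]) -> list[str]:
    """Return the imports that would be rejected (top-level name not allowed)."""
    return [n for n in names if n.split(".")[0] not in ALLOWED]

print(rejected_imports(["json", "os", "socket"]))  # ['os', 'socket']
```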

Repo Indexing

  • Hybrid search: Keyword + semantic + combined modes
  • Per-language parsing: Symbol extraction for Python, TypeScript, Rust, Go, and more
  • ONNX embeddings: Optional dense embeddings (all-MiniLM-L6-v2, 384-dim)
  • Zero-config: Works immediately; semantic search available with pip install ccr-memory[semantic]
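A hybrid mode like the one above typically blends a keyword score with a semantic-similarity score. The linear mix and the 0.5/0.5 weighting below are an illustration, not CCR's actual ranking function:

```python
# Hypothetical sketch of a hybrid score: a weighted mix of keyword
# and semantic scores, both assumed normalized to [0, 1].
def hybrid_score(keyword: float, semantic: float, alpha: float = 0.5) -> float:
    return alpha * keyword + (1 - alpha) * semantic

# (keyword score, semantic score) per candidate file
candidates = {"parser.py": (0.9, 0.3), "embed.py": (0.2, 0.8)}
ranked = sorted(candidates, key=lambda f: hybrid_score(*candidates[f]), reverse=True)
print(ranked)  # ['parser.py', 'embed.py']
```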

Session Logger

Every Q&A turn (user message + Claude's response) is persisted to .ccr/sessions.db (SQLite). Use it to replay any past session, debug unexpected Claude behaviour, or export conversation pairs for fine-tuning. Logging is automatic when hooks are active — Claude calls session_log_turn after each response. See docs/session-logger.md for the full reference.
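Since `session_search` is FTS5-backed, queries against the log are ordinary SQLite full-text queries. The schema below is a guess for illustration — the real layout of `.ccr/sessions.db` is documented in docs/session-logger.md, not here:

```python
import sqlite3

# Hypothetical sketch of the kind of FTS5 query session_search could run;
# the column names here are assumptions, not the real sessions.db schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE turns USING fts5(user_msg, assistant_msg)")
db.execute("INSERT INTO turns VALUES (?, ?)",
           ("Why did val_loss spike?", "The preprocessing change on branch exp-2 ..."))
db.execute("INSERT INTO turns VALUES (?, ?)",
           ("Rename the module", "Done, renamed ccr.utils to ccr.helpers."))
rows = db.execute(
    "SELECT user_msg FROM turns WHERE turns MATCH ?", ("preprocessing",)
).fetchall()
print(rows)  # [('Why did val_loss spike?',)]
```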

Architecture

Claude Code ──stdio──> CCR MCP Server
                         ├── GCC Memory    (.ccr/commits, branches, patterns)
                         ├── ACE Playbook  (.ccr/playbook.txt, failure_lessons.json)
                         ├── RLM Sandbox   (isolated Python subprocess)
                         └── Repo Index    (.ccr/index.json, embeddings)

CCR stores all data in a .ccr/ directory within your project (like .git/). Global strategies live in ~/.ccr/.

Tools

Core (used in every session)

| Tool | Purpose |
| --- | --- |
| gcc_commit | Save progress with what/why/files/next |
| gcc_context | Retrieve memory at 5 detail levels |
| gcc_status | Show current memory state |
| ace_get_playbook | View strategies with stats |
| ace_update_counters | Rate strategies helpful/harmful |
| ace_apply_delta | Add/update/merge/remove strategies |

Extended

| Tool | Purpose |
| --- | --- |
| gcc_branch / gcc_merge | Experiment isolation |
| gcc_links | Trace commit relationships |
| gcc_patterns | Query transferable patterns |
| gcc_scratchpad | Ephemeral working memory |
| gcc_consolidate | Generate hierarchical summaries |
| ace_find_similar | Find duplicate strategies |
| ace_prune | Remove harmful strategies |
| rlm_init / rlm_execute / rlm_finalize | Sandboxed REPL |
| index_build / index_search | Repo search |

Session Logger

| Tool | Purpose |
| --- | --- |
| session_log_turn | Log the current Q&A turn (called automatically after each response) |
| session_get_history | Retrieve recent turns for a session (defaults to current session) |
| session_search | Full-text search across all session turns (FTS5) |
| session_export | Export a session as json, jsonl (OpenAI fine-tune), or markdown |
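The jsonl export target implies the OpenAI fine-tune format: one `{"messages": [...]}` object per line. A sketch of that shape — the turn structure and field names here are assumptions about the export, not its actual code:

```python
import json

# Hypothetical sketch of an OpenAI fine-tune jsonl export:
# each logged Q&A turn becomes one JSON object per line.
turns = [("What does gcc_branch do?", "It isolates an experiment on its own branch.")]

lines = [
    json.dumps({"messages": [
        {"role": "user", "content": q},
        {"role": "assistant", "content": a},
    ]})
    for q, a in turns
]
print(lines[0])
```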

Research Foundation

CCR draws on 16 research papers across three tiers of implementation fidelity:

Implemented (>70% fidelity)

  • GCC (arXiv:2508.00031) — Git-style version-controlled agent memory
  • ACE (arXiv:2510.04618) — Evolving playbooks with structured bullets and delta operations
  • RLM (arXiv:2512.24601) — REPL-based execution with metadata-only stdout

Substantially Adapted (30-70% fidelity)

  • A-MAC (arXiv:2603.04549) — Admission control with 3 of 5 scoring factors
  • A-RAG (arXiv:2602.03442) — Hierarchical retrieval with keyword/semantic/hybrid modes
  • CER (arXiv:2506.06698) — Pattern buffer with dedup and quality scoring
  • MCE (arXiv:2601.21557) — Schema evolution with rule-based structural proposals
  • SkillRL (arXiv:2602.08234) — Failure-side skill distillation via structured lessons

Inspired By (<30% fidelity)

  • A-MEM/MAGMA — Commit cross-linking taxonomy
  • ERL — Trigger/action bullet structure
  • Memori — Semantic triple extraction
  • EverMemOS — Thematic commit clustering
  • EvolveR — Bayesian quality scoring for patterns
  • AgeMem — Working memory scratchpad
  • AgentEvolver — Contribution-weighted counters
  • ALMA — Meta-learned retrieval parameters

All implementations use mechanical heuristics (zero LLM calls). See CLAUDE.md for detailed limitation tables comparing CCR's implementation vs. each paper.

vs. Alternatives

| Feature | CCR | Mem0 | Letta/MemGPT | Graphiti |
| --- | --- | --- | --- | --- |
| Auto-manages memory | Yes (hooks) | Yes | Yes | Yes |
| Version control (branch/merge) | Yes | No | No | No |
| Self-evolving strategies | Yes | No | No | No |
| Sandboxed REPL | Yes | No | No | No |
| Zero LLM calls | Yes | No | No | No |
| Zero infrastructure | Yes | No | No (DB) | No (Neo4j) |
| Runs on Claude Max alone (no API key) | Yes | No | No | No |
| Open source | MIT | Yes | Apache 2.0 | Apache 2.0 |

Configuration

Optional Dependencies

pip install ccr-memory[semantic]  # ONNX embeddings for semantic search
pip install ccr-memory[vector]    # sqlite-vec for persistent vector store
pip install ccr-memory[full]      # Both of the above

Environment Variables

| Variable | Purpose |
| --- | --- |
| CCR_PROJECT_ROOT | Override project root detection |
| CCR_OLLAMA_MODEL | Enable Ollama sub-model (e.g., qwen2.5:7b) |
| ANTHROPIC_API_KEY_SUB | Enable Anthropic Haiku sub-model |

Sub-models are optional — they enable LLM-powered features like rolling summary synthesis and automatic bullet generation.

Diagnostics

ccr doctor   # Check CCR health (deps, config, hooks)
ccr status   # Show memory state
ccr context  # Print project context

Development

git clone https://github.com/qbit-glitch/ccr.git
cd ccr
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/unit/ tests/integration/ -x -q

License

MIT
