
Outcome-based persistent memory for AI coding tools (Claude Code, OpenCode)


Roampal — Outcome-Based Persistent Memory MCP Server


Two commands. Your AI coding assistant gets outcome-based memory.
Works with Claude Code and OpenCode.


Benchmarks

85.8% on LoCoMo (non-adversarial, end-to-end answer accuracy) — validated on 1,986 questions across 10 conversations with dual grading.

Result                                           Score
Conversational learning vs raw ingestion         +23 points (76.6% vs 53.0%, p<0.0001)
Architecture vs model effect                     Architecture ~10x larger contributor
Poison resilience (1,135 adversarial memories)   Only -2.6 to -4.2 points
TagCascade retrieval (tags-first + CE rerank)    +1.9 Hit@1 vs pure CE (p<0.0001)

Benchmark pipeline runs on a single GPU with no cloud dependencies. Roampal itself runs on CPU — no GPU required. Full methodology, data, and evaluation scripts: roampal-labs

Paper: "Beyond Ingestion: What Conversational Memory Learning Reveals on a Corrected LoCoMo Benchmark" (Logan Teague, April 2026)


Quick Start

pip install roampal
roampal init

Auto-detects installed tools. Restart your editor and start chatting.

Target a specific tool: roampal init --claude-code or roampal init --opencode


Platform Differences

The core loop is identical — both platforms inject context, capture exchanges, and score outcomes. The delivery mechanism differs:

                    Claude Code                          OpenCode
Context injection   Hooks (stdout)                       Plugin (system prompt)
Exchange capture    Stop hook                            Plugin session.idle event
Scoring             Main LLM via score_memories tool     Independent sidecar (your chosen model > Zen free)
Self-healing        Hooks auto-restart server on failure Plugin auto-restarts server on failure

Claude Code prompts the main LLM to score each exchange via the score_memories tool. OpenCode never self-scores: an independent sidecar (a separate API call) reviews each exchange as a third party, removing self-assessment bias. The score_memories tool is not registered on OpenCode. During roampal init or roampal sidecar setup, Roampal detects local models (Ollama, LM Studio, etc.) and lets you choose a scoring model. If a local model is configured, it takes priority and Zen is skipped for privacy. A cheap or local model works well; scoring does not need a powerful model. If you skip setup, scoring defaults to Zen free models (remote, best-effort).

How It Works

When you type a message, Roampal automatically injects relevant context before your AI sees it:

You type:

fix the auth bug

Your AI sees:

═══ KNOWN CONTEXT ═══
• JWT refresh pattern fixed auth loop [id:patterns_a1b2] (3d, 90% proven, patterns)
• User prefers: never stage git changes [id:mb_c3d4] (memory_bank)
═══ END CONTEXT ═══

fix the auth bug

No manual calls. No workflow changes. It just works.

The Loop

  1. You type a message
  2. Roampal injects relevant context automatically (hooks in Claude Code, plugin in OpenCode)
  3. AI responds with full awareness of your history, preferences, and what worked before
  4. Outcome scored — good advice gets promoted, bad advice gets demoted
  5. Repeat — the system gets smarter every exchange
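The loop above can be sketched in a few lines of Python. Everything here (MemoryStore, the keyword search, respond, judge) is an illustrative stand-in, not Roampal's internals; real retrieval is TagCascade with cross-encoder reranking, and real scoring runs through hooks or the sidecar.

```python
class MemoryStore:
    def __init__(self, memories):
        self.memories = memories  # [{"text": ..., "score": ...}, ...]

    def search(self, query):
        # stand-in for retrieval: naive keyword overlap
        words = query.lower().split()
        return [m for m in self.memories
                if any(w in m["text"].lower() for w in words)]

def handle_message(store, respond, judge, user_message):
    hits = store.search(user_message)                        # 2. inject context
    context = "\n".join(f"• {m['text']}" for m in hits)
    prompt = (f"═══ KNOWN CONTEXT ═══\n{context}\n"
              f"═══ END CONTEXT ═══\n\n{user_message}")
    reply = respond(prompt)                                  # 3. AI responds
    helpful = judge(user_message, reply)                     # 4. outcome scored
    for m in hits:
        m["score"] += 1 if helpful else -1                   # promote / demote
    return reply
```

Memories that keep earning positive outcomes rise in score; repeated failures push them down until they stop being injected.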

Five Memory Collections

Collection    Purpose                        Lifetime
working       Current session context        24h — promotes if useful, deleted otherwise
history       Past conversations             30 days, outcome-scored
patterns      Proven solutions               Persistent while useful, promoted from history
memory_bank   Identity, preferences, goals   Permanent
books         Uploaded reference docs        Permanent
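The lifetimes in the table can be sketched roughly as follows. The policy table and function are illustrative only; the actual promotion logic is internal to Roampal.

```python
from datetime import timedelta

# Hypothetical policy table mirroring the collection lifetimes above.
POLICIES = {
    "working":     {"ttl": timedelta(hours=24), "promote_to": "history"},
    "history":     {"ttl": timedelta(days=30),  "promote_to": "patterns"},
    "patterns":    {"ttl": None, "promote_to": None},  # persistent while useful
    "memory_bank": {"ttl": None, "promote_to": None},  # permanent
    "books":       {"ttl": None, "promote_to": None},  # permanent
}

def next_collection(collection, age, useful):
    """Where a memory goes when its TTL lapses: promoted if useful, else dropped."""
    policy = POLICIES[collection]
    if policy["ttl"] is not None and age > policy["ttl"]:
        return policy["promote_to"] if useful else None  # None == deleted
    return collection
```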

Commands

roampal init                # Auto-detect and configure installed tools
roampal init --claude-code  # Configure Claude Code explicitly
roampal init --opencode     # Configure OpenCode explicitly
roampal init --no-input     # Non-interactive setup (CI/scripts)
roampal start               # Start the HTTP server manually
roampal stop                # Stop the HTTP server
roampal status              # Check if server is running
roampal status --json       # Machine-readable status (for scripting)
roampal stats               # View memory statistics
roampal stats --json        # Machine-readable statistics (for scripting)
roampal doctor              # Diagnose installation issues
roampal summarize           # Summarize long memories (retroactive cleanup)
roampal score               # Score the last exchange (manual/testing)
roampal context             # Output recent exchange context
roampal ingest <file>       # Add documents to books collection
roampal books               # List all ingested books
roampal remove <title>      # Remove a book by title
roampal sidecar status      # Check scoring model configuration (OpenCode)
roampal sidecar setup       # Configure scoring model (OpenCode)
roampal sidecar test        # Test scoring model response format (OpenCode)
roampal retag               # Re-extract tags on memories using sidecar LLM
roampal sidecar disable     # Remove scoring model configuration (OpenCode)

# Named memory profiles (v0.5.1) — isolate memory per project, per client, etc.
roampal profile list                         # List registered profiles
roampal profile show                         # Show active profile and its path
roampal profile create <name>                # Create auto-located profile
roampal profile register <name> --path <dir> # Register an existing directory
roampal profile use <name>                   # Persist as user-global default
roampal profile unuse                        # Clear persistence
roampal profile switch <name>                # Persist + kill running server
roampal profile delete <name>                # Remove from registry
roampal start --profile <name>               # One-off launch on a profile

Named Memory Profiles (v0.5.1)

Run separate memory stores for different contexts — per project, per client (Claude Code vs OpenCode), work vs home. Profiles are managed entirely through the CLI; no config files to hand-edit.

roampal profile create work          # auto-located at <appdata>/Roampal/data/work/
roampal profile switch work          # persist + kill running server
# next MCP tool call spawns a fresh server on 'work'

Register an existing directory as a profile (no data migration):

roampal profile register project-a --path /existing/custom/path

Precedence (highest wins):

  1. --profile <name> flag
  2. ROAMPAL_PROFILE=<name> env var (set per-project in opencode.json or .claude.json env: {})
  3. roampal profile use <name> persisted default
  4. "default" fallback

MCP Tools

Your AI gets these memory tools:

Tool                 Description                                            Platforms
search_memory        Deep search across all collections                     Both
add_to_memory_bank   Store permanent facts (identity, preferences, goals)   Both
update_memory        Correct or update existing memories                    Both
delete_memory        Remove outdated info                                   Both
score_memories       Score previous exchange outcomes                       Claude Code
record_response      Store key takeaways from significant exchanges         Both

How scoring works: Claude Code's hooks prompt the main LLM to call score_memories every turn. OpenCode uses an independent sidecar that scores silently in the background — the model never sees a scoring prompt and score_memories is not registered as a tool. If the sidecar is unavailable, a warning prompts the user to run roampal sidecar setup. Choose your scoring model during roampal init or via roampal sidecar setup.

How Roampal Compares

Feature                  Roampal Core                                     Claude Code built-in (CLAUDE.md / auto memory)        OpenCode built-in
Learns from outcomes     Yes — bad advice demoted, good advice promoted   No                                                    No
Semantic retrieval       Yes — TagCascade + cross-encoder reranking       No — files loaded in full, no search                  No memory system
Context injection        Automatic — relevant memories per query          Full CLAUDE.md every session, auto memory on demand   None
Atomic fact extraction   Yes — summaries + facts, two-lane retrieval      No — saves what Claude decides is useful              No
Works across projects    Yes — shared memory across all projects          Per-project only (per git repo)                       No memory
Scales with history      Yes — 5 collections, promotion/demotion/decay    CLAUDE.md unbounded, auto memory first 200 lines      No memory
Fully local / private    Yes — ChromaDB on your machine                   Yes                                                   Yes

Architecture
┌─────────────────────────────────────────────────────────┐
│  pip install roampal && roampal init                    │
│    Claude Code: hooks + MCP → ~/.claude/                │
│    OpenCode:    plugin + MCP → ~/.config/opencode/      │
└─────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│  HTTP Hook Server (port 27182)                          │
│    Auto-started on first use, self-heals on failure     │
│    Manual control: roampal start / roampal stop         │
└─────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│  User types message                                     │
│    → Hook/plugin calls HTTP server for context          │
│    → AI sees relevant memories, responds                │
│    → Exchange stored, scored (hooks or sidecar)         │
└─────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│  Single-Writer Backend                                  │
│    FastAPI → UnifiedMemorySystem → ChromaDB             │
│    All clients share one server, isolated by session    │
└─────────────────────────────────────────────────────────┘
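Since the server is plain HTTP on port 27182, a script can probe it before relying on it. Only the /api/health URL comes from the troubleshooting section below; the helper name and the 200-means-healthy assumption are mine.

```python
import urllib.request

def roampal_healthy(base="http://127.0.0.1:27182", timeout=2.0):
    """Return True if the Roampal HTTP hook server answers its health check."""
    try:
        with urllib.request.urlopen(f"{base}/api/health", timeout=timeout) as r:
            return r.status == 200          # assumption: 200 means healthy
    except OSError:                         # refused, unreachable, or timed out
        return False
```

Useful in CI or editor startup scripts: if this returns False, fall back to `roampal start`.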

See dev/docs/ for full technical details.

Requirements

  • Python 3.10+
  • One of: Claude Code or OpenCode
  • Platforms: Windows, macOS, Linux (primarily developed and tested on Windows)
  • RAM: ~800MB available (cross-encoder reranker + embeddings + ChromaDB)
  • Disk: ~500MB for models (multilingual embedding + reranker, downloaded automatically on first use)
  • CPU: Any modern x86-64 processor with AVX2 (Intel Haswell 2013+ / AMD Excavator 2015+)
  • GPU: Not required — all inference runs on CPU via ONNX Runtime

Troubleshooting

Hooks not working? (Claude Code)
  • Restart Claude Code (hooks load on startup)
  • Check HTTP server: curl http://127.0.0.1:27182/api/health
MCP not connecting? (Claude Code)
  • Verify ~/.claude.json has the roampal-core MCP entry with correct Python path
  • Check Claude Code output panel for MCP errors
Context not appearing? (OpenCode)
  • Make sure you ran roampal init --opencode
  • Check that the server auto-started: curl http://127.0.0.1:27182/api/health
  • If not, start it manually: roampal start
Server crashes and recovers?

This is expected. Roampal is self-healing: if the HTTP server stops responding, it is automatically restarted and the request is retried.

Still stuck? Ask your AI for help — it can read logs and debug Roampal issues directly.

Support

Roampal Core is completely free and open source.

roampal-core MCP server

License

Apache 2.0
