# RLM Tools

MCP server providing a persistent Python sandbox for token-efficient codebase exploration across MCP clients.
Your AI coding agent spends most of its token budget just reading your code — not reasoning about it. Every grep, file read, and glob result gets dumped into the conversation. On a large codebase, that's 25-35% of your context (and cost) burned on raw data the model never needed to see.
RLM Tools gives your agent a persistent sandbox to explore code in. Data stays server-side. Only the conclusions come back.
```bash
# For Codex / generic MCP clients
uvx rlm-tools-setup --write-policy AGENTS.md

# For Claude Code
uvx rlm-tools-setup --write-policy CLAUDE.md
```
This auto-detects Claude/Codex on your PATH, registers the MCP server, and writes a policy block that nudges agents toward rlm-tools for repo exploration. Use CLAUDE.md for Claude Code (it reads this file automatically) or AGENTS.md for other clients.
If you prefer manual setup:
```bash
# Claude Code
claude mcp add rlm-tools -- uvx rlm-tools

# Codex
codex mcp add rlm-tools -- uvx rlm-tools
```
## What Changes
**Without RLM Tools** — the agent greps for `import UIKit` and gets 500 matches dumped into context. It reads 10 files and burns all their content as tokens. The context window fills up and the agent forgets what it was doing.

**With RLM Tools** — the agent runs the same exploration in a server-side Python sandbox. Data stays in sandbox memory; only the `print()` output enters context:
```python
matches = grep("import UIKit")
by_module = {}
for m in matches:
    module = m["file"].split("/")[0]
    by_module.setdefault(module, []).append(m)

for module, ms in sorted(by_module.items(), key=lambda x: -len(x[1]))[:5]:
    print(f"{module}: {len(ms)} files")
```
500 lines of grep results become 5 lines of summary. The agent sees what it needs, nothing more.
## Real-World Impact
In typical coding workflows: 25-35% context reduction. That means your agent can explore roughly 40-50% more code before hitting context limits.
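As a rough sanity check on that arithmetic (under the simplifying assumption that the eliminated tokens were pure exploration overhead), freeing a fraction `r` of the context window leaves room for `1 / (1 - r)` times as much material:

```python
# Capacity multiplier from freeing a fraction r of the context window.
for r in (0.25, 0.35):
    print(f"{r:.0%} reduction -> {1 / (1 - r):.2f}x effective capacity")
```

That works out to roughly 1.3x to 1.5x, in the same range as the 40-50% figure above.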
In heavy exploration tasks (reading many files, broad searches), savings go much further:
| Scenario | Standard Tools | RLM Tools | Saved |
|---|---|---|---|
| Grep across full app | 40,045 chars | 1,644 chars | 95.9% |
| Read 10 large files | 1,493,720 chars | 13,588 chars | 99.1% |
| Multi-step exploration | 136,102 chars | 5,285 chars | 96.1% |
| Grep then read matches | 340,408 chars | 6,022 chars | 98.2% |
| Find all usages of a pattern | 13,478 chars | 3,691 chars | 72.6% |
| Understand a module | 94,745 chars | 16,925 chars | 82.1% |
Full benchmark methodology and reproduction steps: docs/benchmarks.md
## How It Works
Three MCP tools. That's the entire API:
| Tool | Purpose |
|---|---|
| `rlm_start(path, query)` | Open a session on a directory |
| `rlm_execute(session_id, code)` | Run Python in the sandbox |
| `rlm_end(session_id)` | Close session, free resources |
The sandbox provides built-in helpers:
- `read_file(path)` / `read_files(paths)` — Read files into variables (cached across calls)
- `grep(pattern)` / `grep_summary(pattern)` / `grep_read(pattern)` — Search
- `glob_files(pattern)` — Find files by pattern
- `tree(path, max_depth)` — Directory structure
- `llm_query(prompt, context)` — Sub-LLM analysis (optional, requires API key)
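To make the shape of these helpers concrete, here is a local sketch of what a `grep()`-style helper can return. The dict keys are inferred from the earlier example and are an assumption about the real API:

```python
import re
from pathlib import Path

def grep(pattern, root=".", suffix="*.py"):
    """Sketch of a grep-style helper returning structured matches
    instead of raw text (keys assumed: file, line, text)."""
    rx = re.compile(pattern)
    matches = []
    for path in Path(root).rglob(suffix):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if rx.search(line):
                matches.append({"file": str(path), "line": lineno, "text": line.strip()})
    return matches
```

Because the result is structured data rather than a text dump, sandbox code can filter and aggregate it before anything is printed.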
Variables persist across rlm_execute calls within a session. The agent can build up understanding incrementally — search, filter, read, analyze — without any intermediate data touching the context window.
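The persistence model can be sketched in a few lines (an illustration of the general technique, not rlm-tools' actual implementation): each session keeps one namespace dict, every execute call runs against it, and only captured stdout is returned:

```python
import contextlib
import io

class Session:
    """Sketch of a persistent sandbox session: variables live server-side."""
    def __init__(self):
        self.namespace = {}          # survives across execute() calls

    def execute(self, code):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, self.namespace)
        return buf.getvalue()        # only printed output goes back to the agent

s = Session()
s.execute("matches = list(range(500))")   # big intermediate result, never returned
print(s.execute("print(len(matches))"))   # the agent sees just the count
```

The 500-element list stays in `s.namespace`; the only thing that would cross the MCP boundary is the short printed string.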
## Works With
RLM Tools is a standard MCP server. It works with any MCP-compatible client: Claude Code, Codex, Cursor, and others.
## Optional Strict Mode
If your MCP client supports tool permissions or hooks, disable/block default read/grep/glob tools for repository exploration and keep rlm-tools enabled. This is the strongest way to force the token-efficient path.
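For example, in Claude Code this can be approximated with a permissions deny list in `.claude/settings.json`. The tool names and schema below are assumptions; check your client's documentation for the exact format:

```json
{
  "permissions": {
    "deny": ["Read", "Grep", "Glob"]
  }
}
```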
## Other installation methods

### JSON MCP config (Cursor, Windsurf, etc.)
```json
{
  "mcpServers": {
    "rlm-tools": {
      "command": "uvx",
      "args": ["rlm-tools"]
    }
  }
}
```
### Setup helper command

```bash
# Auto-detect Claude/Codex CLIs and configure rlm-tools
uvx rlm-tools-setup

# Print JSON config snippet for other clients
uvx rlm-tools-setup --print-json

# Print policy text instead of writing it
uvx rlm-tools-setup --print-policy
```
### Direct run

```bash
uvx rlm-tools
```
### From source

```bash
git clone https://github.com/stefanoshea/rlm-tools.git
cd rlm-tools
uv sync
uv run rlm-tools
```
Then point your MCP client to `command: uv`, `args: ["--directory", "/path/to/rlm-tools", "run", "rlm-tools"]`.
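In JSON form, mirroring the config snippet above (the path is a placeholder for your checkout):

```json
{
  "mcpServers": {
    "rlm-tools": {
      "command": "uv",
      "args": ["--directory", "/path/to/rlm-tools", "run", "rlm-tools"]
    }
  }
}
```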
## Configuration
Copy .env.example to .env to customize. All settings are optional — RLM Tools works out of the box with zero config.
The core exploration features (read, grep, glob, tree) require no API key. The optional llm_query() helper calls the Anthropic API for semantic analysis within the sandbox — this is the only feature that requires a key.
| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | — | Required for `llm_query()` only. Uses Anthropic's API (Claude). |
| `RLM_SUB_MODEL` | `claude-haiku-4-5-20251001` | Claude model used for `llm_query()` |
| `RLM_MAX_SESSIONS` | `5` | Max concurrent sessions |
| `RLM_SESSION_TIMEOUT` | `10` | Session timeout in minutes |
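A sample `.env` using the variables above (the key is a placeholder; values are illustrative):

```bash
# Optional: only needed for llm_query()
ANTHROPIC_API_KEY=sk-ant-your-key-here
RLM_SUB_MODEL=claude-haiku-4-5-20251001
RLM_MAX_SESSIONS=5
RLM_SESSION_TIMEOUT=10
```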
## Security
The sandbox is read-only and restricted:
- Imports: Safe stdlib only (`re`, `json`, `collections`, `math`, etc.)
- Builtins: Blocks `exec`, `eval`, `compile`, `__import__`, `breakpoint`
- File access: Read-only, scoped to session directory, path traversal blocked
- Execution: Configurable per-call timeout (default 30s)
- Rate limits: Configurable max calls per session
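A common way to implement this kind of builtin restriction is to execute user code against a curated globals dict. This is a sketch of the general technique, not rlm-tools' actual code, and a real sandbox needs more layers than this:

```python
# Sketch: run untrusted code with a whitelist of builtins.
import builtins

SAFE_BUILTINS = {name: getattr(builtins, name)
                 for name in ("print", "len", "range", "sorted", "sum", "min", "max")}

def run_restricted(code, namespace=None):
    ns = namespace if namespace is not None else {}
    ns["__builtins__"] = SAFE_BUILTINS   # exec/eval/__import__ are simply absent
    exec(code, ns)
    return ns

ns = run_restricted("total = sum(range(10))")
print(ns["total"])  # → 45

try:
    run_restricted("import os")          # no __import__ available -> ImportError
except ImportError as e:
    print("blocked:", e)
```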
## Background
RLM Tools implements an RLM-style exploration loop: keep raw data in tool-side memory, send only compact outputs to the model. Built on the Model Context Protocol.
## Development

```bash
git clone https://github.com/stefanoshea/rlm-tools.git
cd rlm-tools
uv sync --dev
pytest tests
```
Run comparative benchmarks (requires a local project checkout):
```bash
RLM_EVAL_PROJECT_PATH=/path/to/project pytest evals -q -s
```
## License
MIT