Aleph
MCP server for recursive LLM reasoning: load context, iterate with search/code/think tools, converge on answers.
"What my eyes beheld was simultaneous, but what I shall now write down will be successive, because language is successive." — Jorge Luis Borges, "The Aleph" (1945)
Aleph is an MCP server that lets AI assistants work with documents too large to fit in their context window.
It implements the Recursive Language Model (RLM) paradigm from arXiv:2512.24601.
The problem
LLMs have a fundamental limitation: they can only "see" what fits in their context window. When you paste a large document into a prompt, models often miss important details buried in the middle—a phenomenon called "lost in the middle."
The usual approach:
- Collect all relevant content
- Paste it into the prompt
- Hope the model attends to the right parts
The RLM approach (what Aleph enables):
- Store content outside the model's context
- Let the model explore it with tools (search, peek, compute)
- Keep a trail of evidence linking outputs to source text
- When needed, recurse: spawn sub-agents for chunks, then synthesize
Think of Borges' Aleph: a point containing all points. You don't hold it all in attention at once—you move through it, zooming and searching, returning with what matters.
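The RLM loop above can be sketched in plain Python, making no assumptions about Aleph's internals: keep the document in an external store and pull in only the matching slices. `find_lines` and `peek` here are illustrative helpers, not Aleph's API.

```python
import re

# External store: the document lives here, not in the prompt.
DOCUMENT = "\n".join(f"line {i}: ok" for i in range(1, 1000)) + "\nline 1000: ERROR timeout"

def find_lines(pattern: str) -> list[int]:
    """Return 1-based line numbers whose text matches the regex."""
    rx = re.compile(pattern)
    return [i for i, line in enumerate(DOCUMENT.splitlines(), start=1) if rx.search(line)]

def peek(start: int, end: int) -> str:
    """Return the inclusive 1-based line range as text."""
    return "\n".join(DOCUMENT.splitlines()[start - 1:end])

# The model never sees DOCUMENT in full; it only sees these small slices.
hits = find_lines(r"ERROR")
evidence = peek(hits[0], hits[0])
```

The point of the sketch is the shape of the loop, not the helpers: attention is spent only on slices the search surfaced.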
What Aleph provides
Aleph is an MCP server—a standardized way for AI assistants to use external tools. It works with Claude Desktop, Cursor, Windsurf, VS Code, Claude Code, Codex CLI, and other MCP-compatible clients.
When you install Aleph, your AI assistant gains:
| Capability | What it means |
|---|---|
| External memory | Store documents outside the context window as searchable state |
| Navigation tools | Search by regex, view specific line ranges, jump to matches |
| Compute sandbox | Run Python code over the loaded content (parsing, stats, transforms) |
| Evidence tracking | Automatically cite which parts of the source informed each answer |
| Recursive agents | Spawn sub-agents to process chunks in parallel, then aggregate |
The content you load can be anything representable as text or JSON: code repositories, build logs, incident reports, database exports, API responses, research papers, legal documents, etc.
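To make "evidence tracking" concrete: each claim in the final answer can point back to a source span. The exact schema is Aleph's own; the field names below are illustrative assumptions, not its documented format.

```python
import json

# Illustrative evidence entry -- field names are assumptions, not Aleph's schema.
evidence_entry = {
    "context_id": "doc",
    "lines": [120, 124],  # 1-based inclusive range in the source
    "quote": "Connection timed out after 30 seconds",
    "claim": "The failure is a network timeout, not an auth error.",
}

serialized = json.dumps(evidence_entry, indent=2)
```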
Quick start
pip install aleph-rlm[mcp]
# Auto-configure popular MCP clients
aleph-rlm install
# Verify installation
aleph-rlm doctor
Manual MCP configuration
Add to your MCP client config (Claude Desktop, Cursor, etc.):
{
"mcpServers": {
"aleph": {
"command": "aleph-mcp-local",
"args": ["--enable-actions"]
}
}
}
Claude Code configuration
Claude Code auto-discovers MCP servers. Run aleph-rlm install claude-code or add to ~/.claude/settings.json:
{
"mcpServers": {
"aleph": {
"command": "aleph-mcp-local",
"args": ["--enable-actions"]
}
}
}
Install the /aleph skill for the RLM workflow prompt:
mkdir -p ~/.claude/commands
cp /path/to/aleph/docs/prompts/aleph.md ~/.claude/commands/aleph.md
Codex CLI configuration
Add to ~/.codex/config.toml:
[mcp_servers.aleph]
command = "aleph-mcp-local"
args = ["--enable-actions"]
Or run: aleph-rlm install codex
Install the /aleph skill for Codex:
mkdir -p ~/.codex/skills/aleph
cp /path/to/aleph/ALEPH.md ~/.codex/skills/aleph/SKILL.md
How it works in practice
Once installed, you interact with Aleph through your AI assistant. Here's the typical flow:
1. Load your content
load_context(context="<your large document>", context_id="doc")
The assistant stores this externally—it doesn't consume context window tokens.
2. Explore with tools
search_context(pattern="error|exception|fail", context_id="doc")
peek_context(start=120, end=150, unit="lines", context_id="doc")
The assistant searches and views only the relevant slices.
3. Compute when needed
# exec_python — runs in the sandbox with your content as `ctx`
matches = search(r"timeout.*\d+ seconds")
stats = {"total_matches": len(matches), "lines": [m["line_no"] for m in matches]}
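The `search` helper in the snippet above is provided by the sandbox with the loaded content bound as `ctx`. Its exact behavior isn't documented here, but a stand-in producing the shape the snippet implies (a list of dicts with a `line_no` key) might look like this; unlike the real helper, `ctx` is passed explicitly:

```python
import re

def search(pattern: str, text: str) -> list[dict]:
    """Illustrative stand-in for the sandbox's search helper:
    one dict per matching line, carrying its 1-based number."""
    rx = re.compile(pattern)
    return [
        {"line_no": i, "text": line}
        for i, line in enumerate(text.splitlines(), start=1)
        if rx.search(line)
    ]

ctx = "boot ok\nretry after timeout 30 seconds\ndone"
matches = search(r"timeout.*\d+ seconds", ctx)
stats = {"total_matches": len(matches), "lines": [m["line_no"] for m in matches]}
```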
4. Get cited answers
The assistant's final answer includes evidence trails back to specific source locations.
Using the /aleph command
If you've installed the skill, just use:
/aleph: Find the root cause of this test failure and propose a fix.
For AI assistants using Aleph, see ALEPH.md for the detailed workflow.
Recursion: handling very large inputs
When content is too large even for slice-based exploration, Aleph supports recursive decomposition:
- Chunk the content into manageable pieces
- Spawn sub-agents to analyze each chunk
- Synthesize findings into a final answer
# exec_python
chunks = chunk(100_000) # split into ~100K char pieces
results = [sub_query("Extract key findings.", context_slice=c) for c in chunks]
final = sub_query("Synthesize into a summary:", context_slice="\n\n".join(results))
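A naive stand-in for the `chunk` helper used above is fixed-size character slices; the real helper may split more carefully (e.g. on line boundaries), so treat this as a sketch of the contract, not the implementation:

```python
def chunk(text: str, size: int) -> list[str]:
    """Split text into consecutive pieces of at most `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

pieces = chunk("a" * 250_000, 100_000)
lengths = [len(p) for p in pieces]
```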
sub_query can use an API backend (OpenAI-compatible) or spawn a local CLI (Claude, Codex, Aider)—whichever is available.
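Backend auto-detection plausibly checks what's available in order; the precedence below is a guess for illustration, not Aleph's documented behavior:

```python
import os
import shutil

def pick_backend() -> str:
    """Choose a sub_query backend: an explicit env var wins, then an
    API key implies the API backend, then the first local CLI on PATH."""
    explicit = os.environ.get("ALEPH_SUB_QUERY_BACKEND", "auto")
    if explicit != "auto":
        return explicit
    if os.environ.get("OPENAI_API_KEY"):
        return "api"
    for cli in ("claude", "codex", "aider"):
        if shutil.which(cli):
            return cli
    raise RuntimeError("no sub_query backend available")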
Available tools
Core exploration:
| Tool | Purpose |
|---|---|
| load_context | Store text/JSON in external memory |
| search_context | Regex search with surrounding context |
| peek_context | View specific line or character ranges |
| exec_python | Run Python code over the content |
| chunk_context | Split content into navigable chunks |
Workflow management:
| Tool | Purpose |
|---|---|
| think | Structure reasoning for complex problems |
| get_evidence | Retrieve collected citations |
| summarize_so_far | Summarize progress on long tasks |
| finalize | Complete with answer and evidence |
Recursion:
| Tool | Purpose |
|---|---|
| sub_query | Spawn a sub-agent on a content slice |
Optional actions (disabled by default, enable with --enable-actions):
| Tool | Purpose |
|---|---|
| load_file | Load a workspace file into a context |
| read_file, write_file | File system access |
| run_command, run_tests | Shell execution |
| save_session, load_session | Persist/restore state |
Action tools that return JSON support output="object" for structured responses without double-encoding.
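Double-encoding is what happens when a tool returns JSON as a string and the transport serializes it again, forcing the client to decode twice; output="object" returns the structure directly so one decode suffices. The failure mode, illustrated:

```python
import json

result = {"passed": 12, "failed": 1}

# String-output style: the tool stringifies, then the transport stringifies again.
double_encoded = json.dumps(json.dumps(result))
first_pass = json.loads(double_encoded)  # still a string, not a dict
data = json.loads(first_pass)            # second decode needed

# output="object" style: the structure survives a single round-trip.
single = json.loads(json.dumps(result))
```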
Configuration
Environment variables for sub_query:
# Backend selection (auto-detects by default)
export ALEPH_SUB_QUERY_BACKEND=auto # or: api | claude | codex | aider
# API credentials (for API backend)
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.openai.com/v1
export ALEPH_SUB_QUERY_MODEL=gpt-4o-mini
Note: Some MCP clients don't reliably pass environment variables from their config to the server process. If sub_query reports "API key not found" despite your client's MCP settings, add the exports to your shell profile (~/.zshrc or ~/.bashrc) and restart your terminal/client.
See docs/CONFIGURATION.md for all options.
Changelog
Unreleased
- Added load_file and auto-created contexts for action tools when a context_id is provided
- Standardized line numbering to 1-based by default (configurable), clarified peek/search line ranges, and added include_raw for read_file
- Added output="object" for structured responses and consistent JSON error payloads
- Reduced evidence noise with search summary mode and record_evidence flags; cite now validates line ranges
- Hardened run_tests reporting (exit codes/errors) and sub_query backend validation; added sandbox import introspection helpers
Security
- The Python sandbox is best-effort, not hardened—don't run untrusted code
- Action tools (file/command access) are off by default and workspace-scoped when enabled
- For untrusted inputs, run Aleph in a container with resource limits
Development
git clone https://github.com/Hmbown/aleph.git
cd aleph
pip install -e '.[dev,mcp]'
pytest
See DEVELOPMENT.md for architecture details.
License
MIT