MCP server for recursive LLM reasoning—load context, iterate with search/code/think tools, converge on answers
Aleph
Your RAM is the new context window.
Aleph is an MCP server that gives any LLM access to gigabytes of local data without consuming context. Load massive files into a Python process—the model explores them via search, slicing, and sandboxed code execution. Only results enter the context window, never the raw content.
Based on the Recursive Language Model (RLM) architecture.
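The core idea can be illustrated with a few lines of plain Python (a conceptual sketch, not Aleph's actual implementation): the large payload stays in process memory, and only a small result ever crosses back to the model.

```python
import re

# A large payload held in process memory (stands in for a loaded context).
corpus = "\n".join(f"line {i}: ok" for i in range(100_000))
corpus += "\nline 100000: ERROR disk full"

# Only the matching lines (a few bytes) leave the process, never the corpus.
matches = [ln for ln in corpus.splitlines() if re.search(r"ERROR", ln)]
print(matches)
```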
Use Cases
| Scenario | What Aleph Does |
|---|---|
| Large log analysis | Load 500MB of logs, search for patterns, correlate across time ranges |
| Codebase navigation | Load entire repos, find definitions, trace call chains, extract architecture |
| Data exploration | JSON exports, CSV files, API responses—explore interactively with Python |
| Mixed document ingestion | Load PDFs, Word docs, HTML, and logs as plain text |
| Semantic search | Find relevant sections by meaning, then zoom in with peek |
| Research sessions | Save/resume sessions, track evidence with citations, spawn sub-queries |
Requirements
- Python 3.10+
- An MCP-compatible client: Claude Code, Cursor, VS Code, Windsurf, Codex CLI, or Claude Desktop
Quickstart
1. Install
pip install "aleph-rlm[mcp]"
2. Configure your MCP client
Automatic (recommended):
aleph-rlm install
This auto-detects your installed clients and configures them.
Manual (any MCP client):
{
"mcpServers": {
"aleph": {
"command": "aleph",
"args": ["--enable-actions", "--workspace-mode", "any"]
}
}
}
Config file locations
| Client | macOS/Linux | Windows |
|---|---|---|
| Claude Code | ~/.claude/settings.json | %USERPROFILE%\.claude\settings.json |
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json | %APPDATA%\Claude\claude_desktop_config.json |
| Cursor | ~/.cursor/mcp.json | %USERPROFILE%\.cursor\mcp.json |
| VS Code | ~/.vscode/mcp.json | %USERPROFILE%\.vscode\mcp.json |
| Codex CLI | ~/.codex/config.toml | %USERPROFILE%\.codex\config.toml |
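Codex CLI reads TOML rather than JSON, so its entry is shaped differently. A sketch of the equivalent server entry for ~/.codex/config.toml (key names may differ between Codex CLI versions; see MCP_SETUP.md for the authoritative format):

```toml
[mcp_servers.aleph]
command = "aleph"
args = ["--enable-actions", "--workspace-mode", "any"]
```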
See MCP_SETUP.md for detailed instructions.
3. Verify
In your assistant, run:
get_status()
If using Claude Code, tools are prefixed: mcp__aleph__get_status.
AI Assistant Setup (MCP + /aleph Skill) — Copy/Paste
Paste this into any AI coding assistant or agentic coder to add Aleph (MCP server + /aleph skill):
You are an AI coding assistant. Please set up Aleph (Model Context Protocol / MCP).
1) Add the Aleph MCP server config:
{
"mcpServers": {
"aleph": {
"command": "aleph",
"args": ["--enable-actions", "--workspace-mode", "any"]
}
}
}
2) Install the /aleph skill prompt:
- Claude Code: copy docs/prompts/aleph.md -> ~/.claude/commands/aleph.md
- Codex CLI: copy docs/prompts/aleph.md -> ~/.codex/skills/aleph/SKILL.md
- Gemini CLI: copy docs/prompts/aleph.md -> ~/.gemini/skills/aleph/SKILL.md
Ensure ~/.gemini/settings.json has "experimental": { "skills": true } and restart.
If this client uses a different skill/command folder, ask me where to place it.
3) Verify: run get_status() or list_contexts().
If tools are namespaced, use mcp__aleph__get_status or mcp__aleph__list_contexts.
4) (Optional) Enable sub_query (recursive sub-agent):
- Quick: just say "use claude backend" — the LLM will run set_backend("claude")
- Env var: set ALEPH_SUB_QUERY_BACKEND=claude|codex|gemini|api
- API backend: set ALEPH_SUB_QUERY_API_KEY + ALEPH_SUB_QUERY_MODEL
Runtime switching: the LLM can call set_backend() or configure() anytime—no restart needed.
5) Use the skill: /aleph (Claude Code) or $aleph (Codex CLI).
Gemini CLI: /skills list (use /skills enable aleph if disabled).
The /aleph Skill
The /aleph skill is a prompt that teaches your LLM how to use Aleph effectively. It provides workflow patterns, tool guidance, and troubleshooting tips.
Note: Aleph works best when the skill prompt and the MCP server are used together.
What it does
- Loads files into searchable in-memory contexts
- Tracks evidence with citations as you reason
- Supports semantic search and fast rg-based codebase search
- Enables recursive sub-queries for deep analysis
- Persists sessions for later resumption (memory packs)
Simplest Use Case
Just point at a file:
/aleph path/to/huge_log.txt
The LLM will load it into Aleph's external memory and immediately start analyzing using RLM patterns—no extra setup needed.
How to invoke
| Client | Command |
|---|---|
| Claude Code | /aleph |
| Codex CLI | $aleph |
For other clients, copy docs/prompts/aleph.md and paste it at session start.
Installing the skill
Option 1: Direct download (simplest)
Download docs/prompts/aleph.md and save it to:
- Claude Code: ~/.claude/commands/aleph.md (macOS/Linux) or %USERPROFILE%\.claude\commands\aleph.md (Windows)
- Codex CLI: ~/.codex/skills/aleph/SKILL.md (macOS/Linux) or %USERPROFILE%\.codex\skills\aleph\SKILL.md (Windows)
Option 2: From installed package
macOS/Linux
# Claude Code
mkdir -p ~/.claude/commands
cp "$(python -c "import aleph; print(aleph.__path__[0])")/../docs/prompts/aleph.md" ~/.claude/commands/aleph.md
# Codex CLI
mkdir -p ~/.codex/skills/aleph
cp "$(python -c "import aleph; print(aleph.__path__[0])")/../docs/prompts/aleph.md" ~/.codex/skills/aleph/SKILL.md
Windows (PowerShell)
# Claude Code
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.claude\commands"
$alephPath = python -c "import aleph; print(aleph.__path__[0])"
Copy-Item "$alephPath\..\docs\prompts\aleph.md" "$env:USERPROFILE\.claude\commands\aleph.md"
# Codex CLI
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.codex\skills\aleph"
Copy-Item "$alephPath\..\docs\prompts\aleph.md" "$env:USERPROFILE\.codex\skills\aleph\SKILL.md"
How It Works
┌───────────────┐ tool calls ┌────────────────────────┐
│ LLM client │ ────────────────► │ Aleph (Python, RAM) │
│ (limited ctx) │ ◄──────────────── │ search/peek/exec │
└───────────────┘ small results └────────────────────────┘
- Load — load_context (paste text) or load_file (from disk)
- Explore — search_context, semantic_search, peek_context
- Compute — exec_python with 100+ built-in helpers
- Reason — think, evaluate_progress, get_evidence
- Persist — save_session to resume later
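Strung together, a session follows that loop end to end. A tool-call sketch in the same style as the Quick Example below (parameter names like path, thought, and name are illustrative; context_id, pattern, and confirm appear throughout this README):

```
load_file(path="app.log", context_id="logs", confirm=true)
search_context(pattern="timeout", context_id="logs")
think(thought="Timeouts cluster around 10:24; check db pool settings")
save_session(name="timeout-investigation", confirm=true)
```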
Quick Example
# Load log data
load_context(content=logs, context_id="logs")
# → "Context loaded 'logs': 445 chars, 7 lines, ~111 tokens"
# Search for errors
search_context(pattern="ERROR", context_id="logs")
# → Found 2 match(es):
# Line 1: 2026-01-15 10:23:45 ERROR [auth] Failed login...
# Line 4: 2026-01-15 10:24:15 ERROR [db] Connection timeout...
# Extract structured data
exec_python(code="emails = extract_emails(); print(emails)", context_id="logs")
# → [{'value': 'user@example.com', 'line_num': 0, 'start': 50, 'end': 66}, ...]
Advanced Workflows
Multi-Context Workflow (code + docs + diffs)
Load multiple sources, then compare or reconcile them:
# Load a design doc and a repo snapshot (or any two sources)
load_context(content=design_doc_text, context_id="spec")
rg_search(pattern="AuthService|JWT|token", paths=["."], load_context_id="repo_hits", confirm=true)
# Compare or reconcile
diff_contexts(a="spec", b="repo_hits")
search_context(pattern="missing|TODO|mismatch", context_id="repo_hits")
Advanced Querying with exec_python
Treat exec_python as a reasoning tool, not just code execution:
# Example: extract class names or key sections programmatically
exec_python(code="print(extract_classes())", context_id="repo_hits")
Tools
Core (always available):
- load_context, list_contexts, diff_contexts — manage in-memory data
- search_context, semantic_search, peek_context, chunk_context — explore data; use semantic_search for concepts/fuzzy queries, search_context for precise regex
- exec_python, get_variable — compute in sandbox (100+ built-in helpers)
- think, evaluate_progress, summarize_so_far, get_evidence, finalize — structured reasoning
- tasks — lightweight task tracking per context
- get_status — session state
- sub_query — spawn recursive sub-agents (CLI or API backend)
exec_python helpers
The sandbox includes 100+ helpers that operate on the loaded context:
| Category | Examples |
|---|---|
| Extractors (25) | extract_emails(), extract_urls(), extract_dates(), extract_ips(), extract_functions() |
| Statistics (8) | word_count(), line_count(), word_frequency(), ngrams() |
| Line operations (12) | head(), tail(), grep(), sort_lines(), columns() |
| Text manipulation (15) | replace_all(), between(), truncate(), slugify() |
| Validation (7) | is_email(), is_url(), is_json(), is_numeric() |
| Core | peek(), lines(), search(), chunk(), cite(), sub_query(), sub_query_map(), sub_query_batch(), sub_query_strict() |
Extractors return list[dict] with keys: value, line_num, start, end.
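Because every extractor returns the same record shape, results compose with ordinary Python inside the sandbox. A sketch of post-processing (the records below are hypothetical; real ones would come from a helper such as extract_emails() run via exec_python):

```python
# Hypothetical extractor output: list[dict] with value/line_num/start/end.
hits = [
    {"value": "user@example.com", "line_num": 0, "start": 50, "end": 66},
    {"value": "admin@example.com", "line_num": 4, "start": 12, "end": 29},
    {"value": "user@example.com", "line_num": 9, "start": 3, "end": 19},
]

# Group extracted values by the line they were found on.
by_line: dict[int, list[str]] = {}
for h in hits:
    by_line.setdefault(h["line_num"], []).append(h["value"])

# Distinct values across the whole context.
distinct = sorted({h["value"] for h in hits})
print(by_line)
print(distinct)
```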
Action tools (requires --enable-actions):
- load_file, read_file, write_file — filesystem (PDFs, Word, HTML, .gz supported)
- run_command, run_tests, rg_search — shell + fast repo search
- save_session, load_session — persist state (memory packs)
- add_remote_server, list_remote_tools, call_remote_tool — MCP orchestration
Configuration
Workspace controls:
- --workspace-root <path> — root for relative paths (default: git root from invocation cwd)
- --workspace-mode <fixed|git|any> — path restrictions
- --require-confirmation — require confirm=true on action calls
- ALEPH_WORKSPACE_ROOT — override workspace root via environment
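For example, the environment override can pin the workspace root before the MCP client launches the server, bypassing git-root auto-detection (the path here is illustrative):

```shell
# Pin Aleph's workspace root for this shell session; the MCP client
# inherits it when spawning the server.
export ALEPH_WORKSPACE_ROOT="$HOME/projects/myrepo"
echo "$ALEPH_WORKSPACE_ROOT"
```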
Limits:
- --max-file-size — max file read (default: 1GB)
- --max-write-bytes — max file write (default: 100MB)
- --timeout — sandbox/command timeout (default: 60s)
- --max-output — max command output (default: 50,000 chars)
See docs/CONFIGURATION.md for all options.
Documentation
- MCP_SETUP.md — client configuration
- docs/CONFIGURATION.md — CLI flags and environment variables
- docs/prompts/aleph.md — skill prompt and tool reference
- CHANGELOG.md — release history
- DEVELOPMENT.md — contributing guide
Development
git clone https://github.com/Hmbown/aleph.git
cd aleph
pip install -e ".[dev,mcp]"
pytest
References
Recursive Language Models
Zhang, A. L., Kraska, T., & Khattab, O. (2025)
arXiv:2512.24601
License
MIT