
MCP server for recursive LLM reasoning—load context, iterate with search/code/think tools, converge on answers

Project description

Aleph

License: MIT | Python 3.10+ | PyPI version

Your RAM is the new context window.

Aleph is an MCP server that gives any LLM access to gigabytes of local data without consuming context. Load massive files into a Python process—the model explores them via search, slicing, and sandboxed code execution. Only results enter the context window, never the raw content.

Based on the Recursive Language Model (RLM) architecture.

Use Cases

  • Large log analysis: load 500MB of logs, search for patterns, correlate across time ranges
  • Codebase navigation: load entire repos, find definitions, trace call chains, extract architecture
  • Data exploration: explore JSON exports, CSV files, and API responses interactively with Python
  • Mixed document ingestion: load PDFs, Word docs, HTML, and logs like plain text
  • Semantic search: find relevant sections by meaning, then zoom in with peek
  • Research sessions: save/resume sessions, track evidence with citations, spawn sub-queries

Requirements

Python 3.10+ and an MCP-compatible client (e.g. Claude Code, Claude Desktop, Cursor, VS Code, or Codex CLI).

Quickstart

1. Install

pip install "aleph-rlm[mcp]"

2. Configure your MCP client

Automatic (recommended):

aleph-rlm install

This auto-detects your installed clients and configures them.

Manual (any MCP client):

{
  "mcpServers": {
    "aleph": {
      "command": "aleph",
      "args": ["--enable-actions", "--workspace-mode", "any"]
    }
  }
}
Config file locations

  • Claude Code: ~/.claude/settings.json (macOS/Linux) or %USERPROFILE%\.claude\settings.json (Windows)
  • Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows)
  • Cursor: ~/.cursor/mcp.json (macOS/Linux) or %USERPROFILE%\.cursor\mcp.json (Windows)
  • VS Code: ~/.vscode/mcp.json (macOS/Linux) or %USERPROFILE%\.vscode\mcp.json (Windows)
  • Codex CLI: ~/.codex/config.toml (macOS/Linux) or %USERPROFILE%\.codex\config.toml (Windows)

See MCP_SETUP.md for detailed instructions.

3. Verify

In your assistant, run:

get_status()

If using Claude Code, tools are prefixed: mcp__aleph__get_status.

AI Assistant Setup (MCP + /aleph Skill) — Copy/Paste

Paste this into any AI coding assistant or agentic coder to add Aleph (MCP server + /aleph skill):

You are an AI coding assistant. Please set up Aleph (Model Context Protocol / MCP).

1) Add the Aleph MCP server config:
{
  "mcpServers": {
    "aleph": {
      "command": "aleph",
      "args": ["--enable-actions", "--workspace-mode", "any"]
    }
  }
}

2) Install the /aleph skill prompt:
- Claude Code: copy docs/prompts/aleph.md -> ~/.claude/commands/aleph.md
- Codex CLI: copy docs/prompts/aleph.md -> ~/.codex/skills/aleph/SKILL.md
- Gemini CLI: copy docs/prompts/aleph.md -> ~/.gemini/skills/aleph/SKILL.md
  Ensure ~/.gemini/settings.json has "experimental": { "skills": true } and restart.
If this client uses a different skill/command folder, ask me where to place it.

3) Verify: run get_status() or list_contexts().
If tools are namespaced, use mcp__aleph__get_status or mcp__aleph__list_contexts.

4) (Optional) Enable sub_query (recursive sub-agent):
- CLI backend (no API key): set ALEPH_SUB_QUERY_BACKEND=claude|codex|gemini
- API backend: set ALEPH_SUB_QUERY_API_KEY + ALEPH_SUB_QUERY_MODEL (+ optional ALEPH_SUB_QUERY_URL)
If env vars can't be set in the MCP config, add them to your shell profile and restart.

5) Use the skill: /aleph (Claude Code) or $aleph (Codex CLI).
Gemini CLI: /skills list (use /skills enable aleph if disabled).
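
If your client's MCP config accepts an env block alongside the server entry (most JSON-based configs do), the sub_query variables from step 4 can live there instead of in your shell profile. A minimal sketch, assuming the CLI backend; the value is a placeholder:

{
  "mcpServers": {
    "aleph": {
      "command": "aleph",
      "args": ["--enable-actions", "--workspace-mode", "any"],
      "env": {
        "ALEPH_SUB_QUERY_BACKEND": "claude"
      }
    }
  }
}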

The /aleph Skill

The /aleph skill is a prompt that teaches your LLM how to use Aleph effectively. It provides workflow patterns, tool guidance, and troubleshooting tips.

Note: Aleph works best when the skill prompt and the MCP server are used together.

What it does

  • Loads files into searchable in-memory contexts
  • Tracks evidence with citations as you reason
  • Supports semantic search and fast rg-based codebase search
  • Enables recursive sub-queries for deep analysis
  • Persists sessions for later resumption (memory packs)

How to invoke

  • Claude Code: /aleph
  • Codex CLI: $aleph

For other clients, copy docs/prompts/aleph.md and paste it at session start.

Installing the skill

Option 1: Direct download (simplest)

Download docs/prompts/aleph.md and save it to:

  • Claude Code: ~/.claude/commands/aleph.md (macOS/Linux) or %USERPROFILE%\.claude\commands\aleph.md (Windows)
  • Codex CLI: ~/.codex/skills/aleph/SKILL.md (macOS/Linux) or %USERPROFILE%\.codex\skills\aleph\SKILL.md (Windows)

Option 2: From installed package

macOS/Linux
# Claude Code
mkdir -p ~/.claude/commands
cp "$(python -c "import aleph; print(aleph.__path__[0])")/../docs/prompts/aleph.md" ~/.claude/commands/aleph.md

# Codex CLI
mkdir -p ~/.codex/skills/aleph
cp "$(python -c "import aleph; print(aleph.__path__[0])")/../docs/prompts/aleph.md" ~/.codex/skills/aleph/SKILL.md
Windows (PowerShell)
# Claude Code
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.claude\commands"
$alephPath = python -c "import aleph; print(aleph.__path__[0])"
Copy-Item "$alephPath\..\docs\prompts\aleph.md" "$env:USERPROFILE\.claude\commands\aleph.md"

# Codex CLI  
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.codex\skills\aleph"
Copy-Item "$alephPath\..\docs\prompts\aleph.md" "$env:USERPROFILE\.codex\skills\aleph\SKILL.md"

How It Works

┌───────────────┐    tool calls     ┌────────────────────────┐
│   LLM client  │ ────────────────► │  Aleph (Python, RAM)   │
│ (limited ctx) │ ◄──────────────── │  search/peek/exec      │
└───────────────┘    small results  └────────────────────────┘
  1. Load: load_context (paste text) or load_file (from disk)
  2. Explore: search_context, semantic_search, peek_context
  3. Compute: exec_python with 100+ built-in helpers
  4. Reason: think, evaluate_progress, get_evidence
  5. Persist: save_session to resume later

Quick Example

# Load log data
load_context(content=logs, context_id="logs")
# → "Context loaded 'logs': 445 chars, 7 lines, ~111 tokens"

# Search for errors
search_context(pattern="ERROR", context_id="logs")
# → Found 2 match(es):
#   Line 1: 2026-01-15 10:23:45 ERROR [auth] Failed login...
#   Line 4: 2026-01-15 10:24:15 ERROR [db] Connection timeout...

# Extract structured data
exec_python(code="emails = extract_emails(); print(emails)", context_id="logs")
# → [{'value': 'user@example.com', 'line_num': 0, 'start': 50, 'end': 66}, ...]

Advanced Workflows

Multi-Context Workflow (code + docs + diffs)

Load multiple sources, then compare or reconcile them:

# Load a design doc and a repo snapshot (or any two sources)
load_context(content=design_doc_text, context_id="spec")
rg_search(pattern="AuthService|JWT|token", paths=["."], load_context_id="repo_hits", confirm=true)

# Compare or reconcile
diff_contexts(a="spec", b="repo_hits")
search_context(pattern="missing|TODO|mismatch", context_id="repo_hits")

Advanced Querying with exec_python

Treat exec_python as a reasoning tool, not just code execution:

# Example: extract class names or key sections programmatically
exec_python(code="print(extract_classes())", context_id="repo_hits")
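
A slightly fuller sketch in the same spirit, reusing the Quick Example's "logs" context and the grep() helper listed below (helper signatures are assumptions):

# Example: narrow to error lines, then inspect the first few
exec_python(
    code="errs = grep('ERROR'); print(len(errs), 'error lines'); print(errs[:3])",
    context_id="logs",
)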

Tools

Core (always available):

  • load_context, list_contexts, diff_contexts — manage in-memory data
  • search_context, semantic_search, peek_context, chunk_context — explore data; use semantic_search for concepts/fuzzy queries and search_context for precise regex (a short comparison follows this list)
  • exec_python, get_variable — compute in sandbox (100+ built-in helpers)
  • think, evaluate_progress, summarize_so_far, get_evidence, finalize — structured reasoning
  • tasks — lightweight task tracking per context
  • get_status — session state
  • sub_query — spawn recursive sub-agents (CLI or API backend)
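
To make that contrast concrete (the query strings are illustrative, and semantic_search's exact parameter names are an assumption):

# Meaning-based lookup for fuzzy questions
semantic_search(query="how auth tokens get refreshed", context_id="spec")

# Regex lookup for precise patterns
search_context(pattern="refresh_token", context_id="spec")
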
exec_python helpers

The sandbox includes 100+ helpers that operate on the loaded context:

Category Examples
Extractors (25) extract_emails(), extract_urls(), extract_dates(), extract_ips(), extract_functions()
Statistics (8) word_count(), line_count(), word_frequency(), ngrams()
Line operations (12) head(), tail(), grep(), sort_lines(), columns()
Text manipulation (15) replace_all(), between(), truncate(), slugify()
Validation (7) is_email(), is_url(), is_json(), is_numeric()
Core peek(), lines(), search(), chunk(), cite()

Extractors return list[dict] with keys: value, line_num, start, end.
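
Because every extractor returns list[dict], results can be post-processed directly in the sandbox. A small sketch reusing the Quick Example's "logs" context (the dedup logic is just an illustration):

# Deduplicate extracted emails and note where each first appears
exec_python(
    code="""
hits = extract_emails()
seen = {}
for h in hits:
    seen.setdefault(h['value'], h['line_num'])
for value, line in sorted(seen.items()):
    print(f"{value} (first seen on line {line})")
""",
    context_id="logs",
)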

Action tools (require --enable-actions; a usage sketch follows this list):

  • load_file, read_file, write_file — filesystem (PDFs, Word, HTML, .gz supported)
  • run_command, run_tests, rg_search — shell + fast repo search
  • save_session, load_session — persist state (memory packs)
  • add_remote_server, list_remote_tools, call_remote_tool — MCP orchestration
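
A typical action sequence, sketched with the tools above (the parameter names for load_file and save_session are assumptions; rg_search follows the call shape shown earlier, and confirm=true applies when --require-confirmation is set):

# Read a log file from disk, search the repo, then checkpoint the session
load_file(path="logs/app.log", context_id="app_logs", confirm=true)
rg_search(pattern="TimeoutError", paths=["src/"], load_context_id="timeouts", confirm=true)
save_session()  # exact arguments (e.g. a session name) may differ; see the docs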

Configuration

Workspace controls:

  • --workspace-root <path> — root for relative paths (default: git root from invocation cwd)
  • --workspace-mode <fixed|git|any> — path restrictions
  • --require-confirmation — require confirm=true on action calls
  • ALEPH_WORKSPACE_ROOT — override workspace root via environment

Limits:

  • --max-file-size — max file read (default: 1GB)
  • --max-write-bytes — max file write (default: 100MB)
  • --timeout — sandbox/command timeout (default: 60s)
  • --max-output — max command output (default: 50,000 chars)

See docs/CONFIGURATION.md for all options; a combined example is sketched below.
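
For instance, a locked-down server entry combining several of these flags might look like this (the workspace path is a placeholder, and the flag/value pairing mirrors the --workspace-mode example above):

{
  "mcpServers": {
    "aleph": {
      "command": "aleph",
      "args": [
        "--enable-actions",
        "--workspace-mode", "fixed",
        "--workspace-root", "/path/to/project",
        "--require-confirmation"
      ]
    }
  }
}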

Documentation

Development

git clone https://github.com/Hmbown/aleph.git
cd aleph
pip install -e ".[dev,mcp]"
pytest

References

Recursive Language Models
Zhang, A. L., Kraska, T., & Khattab, O. (2025)
arXiv:2512.24601

License

MIT

Download files

Download the file for your platform.

Source Distribution

aleph_rlm-0.6.0.tar.gz (220.3 kB)

Built Distribution

aleph_rlm-0.6.0-py3-none-any.whl (91.2 kB)

File details

Details for the file aleph_rlm-0.6.0.tar.gz.

File metadata

  • Download URL: aleph_rlm-0.6.0.tar.gz
  • Upload date:
  • Size: 220.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aleph_rlm-0.6.0.tar.gz
  • SHA256: fba504cf12f874a9898063416ef685c88e80774033d924b5ca570ec71ef9efe0
  • MD5: 2e06c2a1c3d160834f9073850a83da34
  • BLAKE2b-256: dae183cafc514da34e229d4b0e74f471427efa06d7ac5122ba01c82ab00524ab

Provenance

The following attestation bundles were made for aleph_rlm-0.6.0.tar.gz:

Publisher: publish.yml on Hmbown/aleph

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file aleph_rlm-0.6.0-py3-none-any.whl.

File metadata

  • Download URL: aleph_rlm-0.6.0-py3-none-any.whl
  • Upload date:
  • Size: 91.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aleph_rlm-0.6.0-py3-none-any.whl
  • SHA256: a3d5f112f142354b139fd309df746f94dc34c22365a2fbd557ec3a92abba1cd1
  • MD5: 871d7c889105ece2ffe6c8f7a0a66a73
  • BLAKE2b-256: 2f4204a25f70fdd95d62c2324211ed377c8e798e53537087ea365f7b7a3571a5

Provenance

The following attestation bundles were made for aleph_rlm-0.6.0-py3-none-any.whl:

Publisher: publish.yml on Hmbown/aleph

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
