
MCP server for recursive LLM reasoning—load context, iterate with search/code/think tools, converge on answers


Aleph

License: MIT · Python 3.10+ · PyPI

Your RAM is the new context window.

Aleph is an MCP server that gives any LLM access to gigabytes of local data without consuming context. Load massive files into a Python process—the model explores them via search, slicing, and sandboxed code execution. Only results enter the context window, never the raw content.

Based on the Recursive Language Model (RLM) architecture.

Use Cases

| Scenario | What Aleph Does |
| --- | --- |
| Large log analysis | Load 500MB of logs, search for patterns, correlate across time ranges |
| Codebase navigation | Load entire repos, find definitions, trace call chains, extract architecture |
| Data exploration | JSON exports, CSV files, API responses — explore interactively with Python |
| Research sessions | Save/resume sessions, track evidence with citations, spawn sub-queries |

Requirements

  • Python 3.10+
  • An MCP-capable client (e.g. Claude Code, Claude Desktop, Cursor, VS Code, Codex CLI)

Quickstart

1. Install

pip install "aleph-rlm[mcp]"

2. Configure your MCP client

Automatic (recommended):

aleph-rlm install

This auto-detects your installed clients and configures them.

Manual (any MCP client):

{
  "mcpServers": {
    "aleph": {
      "command": "aleph",
      "args": ["--enable-actions", "--workspace-mode", "any"]
    }
  }
}

Config file locations:

| Client | macOS/Linux | Windows |
| --- | --- | --- |
| Claude Code | ~/.claude/settings.json | %USERPROFILE%\.claude\settings.json |
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json | %APPDATA%\Claude\claude_desktop_config.json |
| Cursor | ~/.cursor/mcp.json | %USERPROFILE%\.cursor\mcp.json |
| VS Code | ~/.vscode/mcp.json | %USERPROFILE%\.vscode\mcp.json |
| Codex CLI | ~/.codex/config.toml | %USERPROFILE%\.codex\config.toml |

See MCP_SETUP.md for detailed instructions.

3. Verify

In your assistant, run:

get_status()

If using Claude Code, tools are prefixed: mcp__aleph__get_status.

The /aleph Skill

The /aleph skill is a prompt that teaches your LLM how to use Aleph effectively. It provides workflow patterns, tool guidance, and troubleshooting tips.

What it does

  • Loads files into searchable in-memory contexts
  • Tracks evidence with citations as you reason
  • Enables recursive sub-queries for deep analysis
  • Persists sessions for later resumption

How to invoke

| Client | Command |
| --- | --- |
| Claude Code | /aleph |
| Codex CLI | $aleph |

For other clients, copy docs/prompts/aleph.md and paste it at session start.

Installing the skill

Option 1: Direct download (simplest)

Download docs/prompts/aleph.md and save it to:

  • Claude Code: ~/.claude/commands/aleph.md (macOS/Linux) or %USERPROFILE%\.claude\commands\aleph.md (Windows)
  • Codex CLI: ~/.codex/skills/aleph/SKILL.md (macOS/Linux) or %USERPROFILE%\.codex\skills\aleph\SKILL.md (Windows)

Option 2: From installed package

macOS/Linux
# Claude Code
mkdir -p ~/.claude/commands
cp "$(python -c "import aleph; print(aleph.__path__[0])")/../docs/prompts/aleph.md" ~/.claude/commands/aleph.md

# Codex CLI
mkdir -p ~/.codex/skills/aleph
cp "$(python -c "import aleph; print(aleph.__path__[0])")/../docs/prompts/aleph.md" ~/.codex/skills/aleph/SKILL.md
Windows (PowerShell)
# Claude Code
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.claude\commands"
$alephPath = python -c "import aleph; print(aleph.__path__[0])"
Copy-Item "$alephPath\..\docs\prompts\aleph.md" "$env:USERPROFILE\.claude\commands\aleph.md"

# Codex CLI
New-Item -ItemType Directory -Force -Path "$env:USERPROFILE\.codex\skills\aleph"
$alephPath = python -c "import aleph; print(aleph.__path__[0])"
Copy-Item "$alephPath\..\docs\prompts\aleph.md" "$env:USERPROFILE\.codex\skills\aleph\SKILL.md"

How It Works

┌───────────────┐    tool calls     ┌────────────────────────┐
│   LLM client  │ ────────────────► │  Aleph (Python, RAM)   │
│ (limited ctx) │ ◄──────────────── │  search/peek/exec      │
└───────────────┘    small results  └────────────────────────┘
  1. Load data via load_file or load_context
  2. Explore with search_context, peek_context
  3. Compute with exec_python (sandboxed)
  4. Track reasoning with think, get_evidence
  5. Save progress with save_session
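
The loop above can be sketched in plain Python. This is a toy illustration of the idea, not Aleph's implementation: a store keeps large text in process memory and returns only small, bounded results to the caller, so the raw content never reaches the model's context.

```python
import re

class ContextStore:
    """Toy sketch: hold large text in RAM, hand back only small results."""

    def __init__(self):
        self.contexts = {}

    def load_context(self, content, context_id):
        # The full content stays here, in process memory.
        self.contexts[context_id] = content
        lines = content.count("\n") + 1
        return f"Context loaded '{context_id}': {len(content)} chars, {lines} lines"

    def search_context(self, pattern, context_id, max_results=5):
        # Only matching lines (truncated and capped) leave the store.
        hits = []
        for num, line in enumerate(self.contexts[context_id].splitlines(), 1):
            if re.search(pattern, line):
                hits.append(f"Line {num}: {line[:80]}")
            if len(hits) >= max_results:
                break
        return hits

store = ContextStore()
store.load_context("INFO boot\nERROR disk full\nINFO done", context_id="logs")
print(store.search_context("ERROR", "logs"))
# → ['Line 2: ERROR disk full']
```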

Quick Example

# Load log data
load_context(content=logs, context_id="logs")
# → "Context loaded 'logs': 445 chars, 7 lines, ~111 tokens"

# Search for errors
search_context(pattern="ERROR", context_id="logs")
# → Found 2 match(es):
#   Line 1: 2026-01-15 10:23:45 ERROR [auth] Failed login...
#   Line 4: 2026-01-15 10:24:15 ERROR [db] Connection timeout...

# Extract structured data
exec_python(code="emails = extract_emails(); print(emails)", context_id="logs")
# → [{'value': 'user@example.com', 'line_num': 0, 'start': 50, 'end': 66}, ...]

Tools

Core (always available):

  • load_context, list_contexts — manage in-memory data
  • search_context, peek_context, chunk_context — explore loaded data
  • exec_python, get_variable — compute in sandbox (100+ built-in helpers)
  • think, evaluate_progress, get_evidence, finalize — structured reasoning
  • sub_query — spawn recursive sub-agents
exec_python helpers

The sandbox includes 100+ helpers that operate on the loaded context:

| Category | Examples |
| --- | --- |
| Extractors (25) | extract_emails(), extract_urls(), extract_dates(), extract_ips(), extract_functions() |
| Statistics (8) | word_count(), line_count(), word_frequency(), ngrams() |
| Line operations (12) | head(), tail(), grep(), sort_lines(), columns() |
| Text manipulation (15) | replace_all(), between(), truncate(), slugify() |
| Validation (7) | is_email(), is_url(), is_json(), is_numeric() |
| Core | peek(), lines(), search(), chunk(), cite() |

Extractors return list[dict] with keys: value, line_num, start, end.
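
Given that shape, downstream code inside exec_python can post-process matches with ordinary Python. A minimal sketch — the sample data below is illustrative, not real extractor output:

```python
from collections import defaultdict

# Illustrative extractor-style output: list[dict] with value/line_num/start/end.
matches = [
    {"value": "user@example.com", "line_num": 0, "start": 50, "end": 66},
    {"value": "admin@example.com", "line_num": 4, "start": 12, "end": 29},
    {"value": "user@example.com", "line_num": 9, "start": 3, "end": 19},
]

# Group the line numbers each distinct value appears on.
lines_by_value = defaultdict(list)
for m in matches:
    lines_by_value[m["value"]].append(m["line_num"])

print(dict(lines_by_value))
# → {'user@example.com': [0, 9], 'admin@example.com': [4]}
```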

Action tools (requires --enable-actions):

  • load_file, read_file, write_file — filesystem access
  • run_command, run_tests — shell execution
  • save_session, load_session — persist state
  • Remote MCP orchestration tools

Configuration

Workspace controls:

  • --workspace-root <path> — root for relative paths (default: git root or cwd)
  • --workspace-mode <fixed|git|any> — path restrictions
  • --require-confirmation — require confirm=true on action calls

Limits:

  • --max-file-size — max file read (default: 1GB)
  • --max-write-bytes — max file write (default: 100MB)
  • --timeout — sandbox/command timeout (default: 30s)
  • --max-output — max command output (default: 10,000 chars)
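
For example, a stricter server entry might combine workspace and limit flags. The values below are illustrative assumptions; the exact accepted formats may differ:

```json
{
  "mcpServers": {
    "aleph": {
      "command": "aleph",
      "args": [
        "--enable-actions",
        "--workspace-mode", "git",
        "--require-confirmation",
        "--timeout", "60"
      ]
    }
  }
}
```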

See docs/CONFIGURATION.md for all options.

Documentation

Development

git clone https://github.com/Hmbown/aleph.git
cd aleph
pip install -e ".[dev,mcp]"
pytest

References

Recursive Language Models
Zhang, A. L., Kraska, T., & Khattab, O. (2025)
arXiv:2512.24601

License

MIT
