MCP server for recursive LLM reasoning: load context, iterate with search/code/think tools, converge on answers.
# Aleph
Aleph is an MCP server for working on large repos, logs, and documents without stuffing them into the model prompt. It keeps the working context in a Python process, then exposes tools so the model can search, peek, run code, recurse, and return small derived results.
Recommended default: install the Codex CLI, then run `aleph-rlm install`. Aleph also works with Claude Code, Cursor, VS Code, and other MCP clients, but Codex is currently the cleanest shared-session sub-query path.
## Why Aleph
- Load context once instead of pasting it over and over.
- Compute inside Aleph memory with `exec_python` instead of leaking raw data back through the prompt.
- Use recursive sub-queries and recipes when a single pass is not enough.
- Save sessions and resume long investigations later.
```text
+-----------------+   tool calls    +--------------------------+
|   LLM client    | --------------> |  Aleph (Python process)  |
| (context budget)| <-------------- |  search / peek / exec    |
+-----------------+  small results  +--------------------------+
```
## Quick Start

- Install Aleph:

  ```bash
  pip install "aleph-rlm[mcp]"
  ```

- Configure your MCP client:

  ```bash
  aleph-rlm install
  ```

- Verify Aleph is reachable in your assistant:

  ```python
  get_status()
  # or
  list_contexts()
  ```

- Run the skill flow on a real file:

  ```text
  /aleph /absolute/path/to/file.log
  # or in Codex CLI
  $aleph /absolute/path/to/file.log
  ```
The shortcut command is optional. If you want `/aleph` or `$aleph`, install `docs/prompts/aleph.md` in your client's command/skill folder. Exact paths are in MCP_SETUP.md.
## First Workflow
Aleph is best when you load data once, do the heavy work inside Aleph, and only pull back compact answers.
```python
load_file(path="/absolute/path/to/large_file.log", context_id="doc")
search_context(pattern="ERROR|WARN", context_id="doc")
peek_context(start=1, end=60, unit="lines", context_id="doc")
exec_python(code="""
errors = [line for line in ctx.splitlines() if "error" in line.lower()]
result = {
    "error_count": len(errors),
    "first_error": errors[0] if errors else None,
}
""", context_id="doc")
get_variable(name="result", context_id="doc")
save_session(context_id="doc", path=".aleph/doc.json")
```
The important habit is to compute server-side. Do not treat `get_variable("ctx")` as the default path. Search, filter, chunk, or summarize first, then retrieve a small result.
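As a concrete illustration of that habit, the body you pass to `exec_python` can aggregate before anything crosses the wire. A minimal sketch, assuming `ctx` is bound to the loaded text (the sample log lines and field positions here are invented):

```python
from collections import Counter

# Stand-in for the context Aleph binds as `ctx` inside exec_python.
ctx = """\
2024-05-01 10:00:01 auth ERROR token expired
2024-05-01 10:00:02 api INFO request ok
2024-05-01 10:00:03 auth ERROR token expired
2024-05-01 10:00:04 db WARN slow query
"""

# Aggregate server-side: count error lines per component.
errors = [line for line in ctx.splitlines() if "ERROR" in line]
by_component = Counter(line.split()[2] for line in errors)

# Only this small dict needs to travel back via get_variable("result").
result = {
    "error_count": len(errors),
    "top_component": by_component.most_common(1)[0][0] if errors else None,
}
print(result)
```

The full log never leaves the Python process; the model only sees the two-key summary.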
If you want terminal-only mode instead of MCP, use:

```bash
aleph run "Summarize this log" --provider cli --model codex --context-file app.log
```
## Common Workloads
| Scenario | What Aleph Is Good At |
|---|---|
| Large log analysis | Load big files, trace patterns, correlate events |
| Codebase navigation | Search symbols, inspect routes, trace behavior |
| Data exploration | Analyze JSON, CSV, and mixed text with Python helpers |
| Long document review | Load PDFs, Word docs, HTML, and compressed logs |
| Recursive investigations | Split work into sub-queries instead of one giant prompt |
| Long-running sessions | Save and resume memory packs across sessions |
## Core Tools
| Category | Primary tools | What they do |
|---|---|---|
| Load context | `load_context`, `load_file`, `list_contexts`, `diff_contexts` | Put data into Aleph memory and inspect what is loaded |
| Navigate | `search_context`, `semantic_search`, `peek_context`, `chunk_context`, `rg_search` | Find the relevant slice before asking for an answer |
| Compute | `exec_python`, `get_variable` | Run code over the full context and retrieve only the derived result |
| Reason | `think`, `evaluate_progress`, `get_evidence`, `finalize` | Structure progress and close out with evidence |
| Orchestrate | `configure`, `validate_recipe`, `estimate_recipe`, `run_recipe`, `run_recipe_code` | Switch backends and automate repeated reasoning patterns |
| Persist | `save_session`, `load_session` | Keep long investigations outside the prompt window |
Inside `exec_python`, Aleph also exposes helpers such as `search(...)`, `chunk(...)`, `lines(...)`, `sub_query(...)`, `sub_query_batch(...)`, and `sub_aleph(...)`. Recursive helpers live inside the REPL, not as top-level MCP tools.
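The chunk-then-recurse pattern those helpers enable can be sketched in plain Python. The `chunk` and `sub_query` functions below are hypothetical stand-ins, not Aleph's real helpers (which are provided inside the `exec_python` REPL and whose exact signatures may differ):

```python
# Hypothetical stand-ins for Aleph's in-REPL helpers.
def chunk(text, size):
    """Split text into fixed-size pieces (toy version)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def sub_query(prompt, piece):
    """Pretend sub-model call: here it just reports the piece's line count."""
    return f"{len(piece.splitlines())} lines"

ctx = "\n".join(f"line {i}" for i in range(10))

# Fan out one sub-query per chunk, keep only the small answers.
answers = [sub_query("Summarize this chunk", piece) for piece in chunk(ctx, 32)]
result = {"chunks": len(answers), "answers": answers}
print(result)
```

The point of the shape, not the stubs: each recursive call sees one bounded slice, and the parent session accumulates only the compact answers.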
## Safety Model
Aleph is built to keep raw context out of the model window unless you explicitly pull it back:
- Tool responses are capped and truncated.
- `get_variable("ctx")` is policy-aware and should not be your default path.
- `exec_python` stdout, stderr, and return values are bounded independently.
- `ALEPH_CONTEXT_POLICY=isolated` adds stricter session export/import rules and more defensive defaults.
The safest pattern is always:
- Load the large context into Aleph memory.
- Search or compute inside Aleph.
- Retrieve only the small result you need.
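The bounding idea behind those caps can be sketched with a tiny truncation helper. The name `cap`, the 200-character limit, and the marker string are illustrative only, not Aleph's actual configuration:

```python
def cap(text, limit=200, marker="…[truncated]"):
    """Return text unchanged if short, else a hard-capped prefix plus a marker."""
    if len(text) <= limit:
        return text
    return text[: limit - len(marker)] + marker

big_output = "x" * 10_000   # e.g. raw stdout from an exec_python call
small = cap(big_output)     # what actually reaches the model window
print(len(small))
```

Applying a cap like this at every tool boundary is what keeps a multi-megabyte context from ever landing in the prompt by accident.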
## Docs Map
- MCP_SETUP.md: client-by-client MCP and skill installation.
- docs/prompts/aleph.md: the `/aleph` and `$aleph` workflow plus tool patterns.
- docs/CONFIGURATION.md: flags, env vars, limits, and safety settings.
- docs/langgraph-rlm-default.md: LangGraph integration with Aleph-style tool usage.
- examples/langgraph_rlm_repo_improver.py: repo improvement example with optional LangSmith tracing.
- CHANGELOG.md: release history.
- DEVELOPMENT.md: contributor guide.
## Development
```bash
git clone https://github.com/Hmbown/aleph.git
cd aleph
pip install -e ".[dev,mcp]"
pytest tests/ -v
ruff check aleph/ tests/
```
## References

- Zhang, A. L., Kraska, T., Khattab, O. (2025). Recursive Language Models (arXiv:2512.24601).

## License

MIT