rlmflow
A Python library for building Recursive Language Models as inspectable execution graphs.
Recursive agents get messy fast: one parent spawns children, children spawn more children, some branches wait, some fail, and some resume later with partial results. A flat chat log hides that structure.
rlmflow turns the run into a graph instead, where every query, action, observation, child call, etc. is a typed node you can inspect, visualize, step, fork, and replay.
Install
pip install rlmflow # core
pip install rlmflow[openai] # + OpenAI client
pip install rlmflow[anthropic] # + Anthropic client
pip install rlmflow[viewer] # + Gradio viewer
pip install rlmflow[all] # all of the above
From source:
git clone https://github.com/shyamsn97/rlmflow && cd rlmflow
pip install -e .
Quick start
from rlmflow import OpenAIClient, RLMConfig, RLMFlow, Workspace
from rlmflow.runtime.local import LocalRuntime
from rlmflow.tools import FILE_TOOLS
from rlmflow.utils.trace import save_trace
from rlmflow.utils.viewer import open_viewer
workspace = Workspace.create("./myproject")
runtime = LocalRuntime(workspace=workspace)
# Sandbox agent code inside Docker instead: drop-in replacement,
# same interface. Build the image once with `docker build -t rlmflow:local .`
# from the repo root; see docs/runtimes.md and docs/security.md.
#
# from rlmflow.runtime.docker import DockerRuntime
# runtime = DockerRuntime("rlmflow:local", workspace=workspace)
runtime.register_tools(FILE_TOOLS)
agent = RLMFlow(
    llm_client=OpenAIClient("gpt-5"),
    runtime=runtime,
    workspace=workspace,
    config=RLMConfig(max_depth=3, max_iterations=15),
)
query = "Build a python text-based adventure game with combat and inventory."
states = [agent.start(query)]
while not states[-1].finished:
    states.append(agent.step(states[-1]))
print(states[-1].tree())
save_trace(states, "traces/run1")
open_viewer(states)
RLMFlow as a Graph Engine
RLMFlow implements LLMClient, so it is a drop-in replacement for any LLM client.
Call chat(messages) or run(query) and it runs a recursive agent loop
underneath:
def ask(llm: LLMClient, q: str) -> str:
    return llm.chat([{"role": "user", "content": q}])
ask(OpenAIClient("gpt-4o-mini"), "2+2?") # one LLM call
ask(RLMFlow(llm_client=..., runtime=...), "2+2?") # full agent, same return type
Nest agents by passing one RLMFlow as another's llm_client.
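The drop-in property depends only on the call signature. A standalone sketch of the idea, using structural typing (the `EchoClient` and `ToyAgent` classes here are illustrative stand-ins, not rlmflow's real classes):

```python
from typing import Protocol


class LLMClient(Protocol):
    """Anything with chat(messages) -> str qualifies."""

    def chat(self, messages: list[dict]) -> str: ...


class EchoClient:
    """Stand-in for a single-call model client."""

    def chat(self, messages: list[dict]) -> str:
        return "echo: " + messages[-1]["content"]


class ToyAgent:
    """Stand-in for an agent loop that wraps an inner client."""

    def __init__(self, llm_client: LLMClient) -> None:
        self.llm_client = llm_client

    def chat(self, messages: list[dict]) -> str:
        # A real agent would loop here; one delegated call keeps the sketch short.
        return self.llm_client.chat(messages)


def ask(llm: LLMClient, q: str) -> str:
    return llm.chat([{"role": "user", "content": q}])


print(ask(EchoClient(), "2+2?"))            # plain client
print(ask(ToyAgent(EchoClient()), "2+2?"))  # agent wrapping a client, same call site
```

Because both classes satisfy the same protocol, nesting is just construction: the outer agent never knows whether its `llm_client` is one model call or a whole sub-agent.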
The runtime model stays small:
- RLMFlow is the lightweight interpreter that advances nodes.
- Node objects are the source of truth for what happened.
- Workspace.session stores node/message history under session/.
- Workspace.context stores task payloads exposed in the REPL as CONTEXT.
- Runtime owns live REPL execution state.
Each transition returns a new immutable node. A live tree might look like:
root [supervising] {default}
├── root.scanner_auth [result] {fast:gpt-5-mini} -> Found SQL injection in login.py
├── root.scanner_api [supervising] {default}
│ ├── root.scanner_api.chunk_0 [result] {fast:gpt-5-mini} -> Clean
│ └── root.scanner_api.chunk_1 [result] {fast:gpt-5-mini} -> Payment flow is safe
└── root.scanner_db [result] {fast:gpt-5-mini} -> No issues found
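A display like the one above can be reconstructed from dotted node ids alone. A standalone sketch of that rendering (`render_tree` is hypothetical, not the library's actual `tree()`):

```python
def render_tree(labels: dict[str, str]) -> str:
    """Render dotted node ids like 'root.scanner_api.chunk_0' as an ASCII tree."""
    children: dict = {}
    for node_id in labels:
        parent = node_id.rsplit(".", 1)[0] if "." in node_id else None
        children.setdefault(parent, []).append(node_id)

    lines: list[str] = []

    def walk(node_id: str, prefix: str) -> None:
        kids = children.get(node_id, [])
        for i, kid in enumerate(kids):
            last = i == len(kids) - 1
            branch = "└── " if last else "├── "
            lines.append(prefix + branch + kid + " " + labels[kid])
            walk(kid, prefix + ("    " if last else "│   "))

    root = children[None][0]          # the one id with no parent
    lines.append(root + " " + labels[root])
    walk(root, "")
    return "\n".join(lines)


tree = render_tree({
    "root": "[supervising]",
    "root.a": "[result] -> Clean",
    "root.b": "[supervising]",
    "root.b.chunk_0": "[result] -> Safe",
})
print(tree)
```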
Each step(node) -> node' is one atomic graph transition:
ObservationNode -> LLM -> ActionNode -> Runtime -> ObservationNode
-> done() -> ResultNode
-> wait() -> SupervisingNode
SupervisingNode -> step child leaves -> ResumeNode -> LLM -> ...
- ObservationNode: input to the next LLM call (a query, REPL output, resume, error, or terminal result).
- ActionNode: raw LLM reply plus extracted REPL code.
- SupervisingNode: suspended action waiting on child agents.
- ResultNode: terminal answer from done(result).
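The "each transition returns a new immutable node" claim is easy to sketch with frozen dataclasses. This is a toy model of the pattern, not rlmflow's real node classes or interpreter:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    """Immutable node: step() returns a new node instead of mutating history."""
    kind: str                       # "observation", "action", or "result"
    payload: str
    parent: "Node | None" = None


def step(node: Node) -> Node:
    # One atomic transition; a real interpreter would call the LLM / runtime here.
    if node.kind == "observation":
        return Node("action", f"code for: {node.payload}", parent=node)
    if node.kind == "action":
        return Node("result", "done", parent=node)
    return node                     # result nodes are terminal


n0 = Node("observation", "2+2?")
n1 = step(n0)
n2 = step(n1)
print(n2.kind, "<-", n1.kind, "<-", n0.kind)
```

Because every node keeps a reference to its parent and nothing is mutated, any prefix of the chain is a valid checkpoint to fork or replay from.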
Delegation:
h1 = delegate("searcher", "Find all TODOs in src/")
h2 = delegate("searcher", "Find all FIXMEs in src/") # auto-suffixed
results = yield wait(h1, h2)
done(f"Found {len(results)} batches")
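The `yield wait(...)` shape above can be driven by a small generator scheduler. A toy version, in which everything (`run_agent`, the eager `child` callable) is illustrative rather than rlmflow's API, and children run synchronously instead of as real sub-agents:

```python
from typing import Any, Callable, Generator


def run_agent(body: Callable[..., Generator], child: Callable[[str], str]) -> Any:
    """Drive a generator agent: each `yield wait(...)` resolves child handles."""
    handles: dict[int, str] = {}
    next_id = 0

    def delegate(name: str, task: str) -> int:
        nonlocal next_id
        handle = next_id
        next_id += 1
        handles[handle] = child(task)   # run the "child agent" eagerly
        return handle

    def wait(*hs: int) -> list[str]:
        return [handles[h] for h in hs]

    gen = body(delegate, wait)
    try:
        pending = next(gen)             # run until the first `yield wait(...)`
        while True:
            pending = gen.send(pending)  # feed results back into the agent
    except StopIteration as stop:
        return stop.value               # the agent's return is its done() result


def searcher_body(delegate, wait):
    h1 = delegate("searcher", "Find all TODOs in src/")
    h2 = delegate("searcher", "Find all FIXMEs in src/")
    results = yield wait(h1, h2)
    return f"Found {len(results)} batches"


print(run_agent(searcher_body, child=lambda task: f"results for {task!r}"))
```

The generator suspends at `yield`, which is what lets a real scheduler park the parent as a SupervisingNode while children run.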
Examples
All examples share flags like --no-viz, --docker-image rlmflow:local,
--max-depth, and --max-iterations. See examples/README.md.
| Example | What it shows |
|---|---|
| showcase.py | Typed nodes, checkpoints, session persistence, intervention, gym-style stepping. |
| drop_in_llm.py | RLMFlow as an LLMClient. Nested agents. |
| coding-agent/agent.py | Interactive coding agent that writes and edits files. |
| needle_haystack.py | Needle-in-a-haystack across 500 files with custom tools and runtime_factory. |
| summarizer.py | Recursive map-reduce over a long document. |
| view_demo.py | Launch the Gradio viewer on a saved trace. |
CLI
rlmflow view traces/run1/
rlmflow render checkpoint.json -f mermaid
rlmflow render traces/run1/ -f gantt-html -o run1.html
rlmflow version
view and render accept a trace directory, trace.json, or checkpoint.
Formats: mermaid, dot, tree, gantt-html.
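At its core, a mermaid export of an execution graph reduces to one edge per parent/child pair. A standalone sketch of that reduction from dotted node ids (`to_mermaid` is a hypothetical helper, not the actual exporter behind `rlmflow render -f mermaid`):

```python
def to_mermaid(node_ids: list[str]) -> str:
    """Emit a Mermaid flowchart from dotted node ids like 'root.scanner_api'."""
    lines = ["graph TD"]
    for node_id in node_ids:
        if "." in node_id:
            parent = node_id.rsplit(".", 1)[0]
            # Mermaid node ids cannot contain dots, so replace them.
            lines.append(
                f"    {parent.replace('.', '_')} --> {node_id.replace('.', '_')}"
            )
    return "\n".join(lines)


print(to_mermaid([
    "root",
    "root.scanner_auth",
    "root.scanner_api",
    "root.scanner_api.chunk_0",
]))
```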
Docs
- RLMs Are Graphs: the design thesis (typed execution graphs, flow interpreter, session/context/runtime split).
- Positioning: why rlmflow treats RLMs as graphs.
- Observability: nodes, session/context, traces, visualizations, and the viewer.
- Control: step loop, checkpoint, rewind, intervention, custom prompts, runtimes, and tools.
- Runtimes: the Runtime protocol, Local / Subprocess / Docker / Modal, writing your own.
- Security: trust model, Docker isolation knobs, approval gates.
References
- Recursive Language Models: the original RLM paper and implementation.
- rlm-minimal: the single-file reference rlmflow grew from.
- Scaling Managed Agents: Decoupling the brain from the hands: Anthropic's writeup on separating harness, session, and sandbox interfaces for long-horizon agents.
- ypi: recursive coding agent built on Pi. Our session layout and much of the default prompt (size-up → delegate → combine, guardrails, aggressive delegation) come from ypi's SYSTEM_PROMPT.md.
License
See LICENSE.
Citation
@misc{sudhakaran2025rlmflow,
author = {Sudhakaran, Shyam},
title = {rlmflow},
year = {2025},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/shyamsn97/rlmflow}},
}