

Recursive Language Models (RLMs)

Full Paper · Blogpost · Documentation · RLM Minimal


Overview

Recursive Language Models (RLMs) are a task-agnostic inference paradigm that lets a language model (LM) handle near-infinite-length contexts by programmatically examining, decomposing, and recursively calling itself over its input. An RLM replaces the canonical llm.completion(prompt, model) call with rlm.completion(prompt, model), offloading the context into a variable in a REPL environment that the LM can interact with and launch sub-LM calls inside of.

This repository provides an extensible inference engine for running RLMs on top of standard API-based and local LLMs. The initial idea and experiments were proposed in a 2025 blogpost, with expanded results in an arXiv preprint.

[!NOTE] This repository contains inference code for RLMs with support for various sandbox environments. Open-source contributions are welcome. This repository is maintained by the authors of the paper from the MIT OASYS lab.

Quick Setup

You can try out RLMs quickly by installing from PyPI:

pip install rlms

The default RLM client uses a REPL environment that runs in the host process through Python exec calls. It shares the host process's virtual environment (i.e., it has access to the same dependencies), but with some limitations on its available global modules. As an example, we can call RLM completions using GPT-5-nano:

from rlm import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    verbose=True,  # For printing to console with rich, disabled by default.
)

print(rlm.completion("Print me the first 100 powers of two, each on a newline.").response)
Manual Setup

Set up the dependencies with uv (or your virtual environment of choice):

curl -LsSf https://astral.sh/uv/install.sh | sh
uv init && uv venv --python 3.12  # change version as needed
uv pip install -e .

This project includes a Makefile to simplify common tasks.

  • make install: Install base dependencies.
  • make check: Run linter, formatter, and tests.

For a quick test, the following runs an RLM query with the OpenAI client, using your OPENAI_API_KEY environment variable (swap in another client if you prefer). It generates console output as well as a log that you can open in the visualizer to explore the trajectory.

make quickstart

REPL Environments

We support two types of REPL environments: isolated and non-isolated. Non-isolated environments (the default) execute code on the same machine as the RLM (e.g. through exec), which is reasonable for local, low-risk tasks like simple benchmarking, but problematic if prompts or tool calls can be influenced by malicious users. Fully isolated environments use cloud-based sandboxes (e.g. Prime Sandboxes, Modal Sandboxes) to run code generated by the RLM, ensuring complete isolation from the host process. New environments can be added, but we natively support the following: local (default), docker, modal, prime, daytona, e2b.

rlm = RLM(
    environment="...", # "local", "docker", "modal", "prime", "daytona", "e2b"
    environment_kwargs={...},
)

Local Environments

The default local environment, LocalREPL, runs in the same process as the RLM itself, with specified global and local namespaces that provide only minimal security. This REPL is generally safe for experimentation, but should not be used in production settings. It also shares the same virtual environment (e.g. Conda or uv) as the host process.
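
As a conceptual sketch (not LocalREPL's actual implementation), an exec-based REPL step looks roughly like this: the generated code runs in the host process with its own global and local namespaces, so it sees the host's installed packages but only the variables placed in scope:

# Conceptual sketch only; LocalREPL's real namespaces and plumbing differ.
repl_globals = {"__builtins__": __builtins__}   # globals visible to generated code
repl_locals = {"context": "..." * 10_000}       # long prompt offloaded as a variable
exec("print(len(context))", repl_globals, repl_locals)  # run LM-generated code in-process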

Docker (requires Docker installed)

We also support a Docker-based environment called DockerREPL that launches the REPL environment inside a Docker container. By default, we use the python:3.11-slim image, but you can specify a custom image as well.
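
For example, a sketch of selecting a custom image (the image key in environment_kwargs is an assumption inferred from the default above; check DockerREPL for the exact kwarg name):

from rlm import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    environment="docker",
    environment_kwargs={"image": "python:3.12-slim"},  # assumed kwarg name
)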

Isolated Environments

We support several REPL environments that run on separate, cloud-based machines. Whenever a recursive sub-call is made inside one of these sandboxes, the request is routed back through the host process.

Modal Sandboxes

To use Modal Sandboxes as the REPL environment, install the Modal library and authenticate your account:

uv add modal  # add modal library
modal setup   # authenticate account
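
Once authenticated, switching to Modal is a one-argument change (a sketch reusing the Quick Setup arguments):

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    environment="modal",
)

The same pattern applies to the other isolated environments ("prime", "daytona", "e2b") once the corresponding SDK is installed and authenticated.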

Prime Intellect Sandboxes

[!NOTE] Prime Intellect Sandboxes are currently a beta feature; see the documentation for more information. We have noticed slow runtimes when using these sandboxes, which is currently an open issue.

To use Prime Sandboxes, install the SDK and set your API key:

uv pip install -e ".[prime]"
export PRIME_API_KEY=...

Model Providers

We currently support the major model clients (OpenAI, Anthropic), as well as router platforms (OpenRouter, Portkey, LiteLLM). For local models, we recommend vLLM, which exposes an OpenAI-compatible interface. To view or add support for more clients, start by looking at rlm/clients/.
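
For instance, pointing the OpenAI backend at a local vLLM server might look like the sketch below; the base_url and api_key keys are assumptions about what backend_kwargs forwards to the underlying client, so check rlm/clients/ for the exact parameter names:

# vLLM serves an OpenAI-compatible endpoint, e.g. started with:
#   vllm serve Qwen/Qwen2.5-7B-Instruct
rlm = RLM(
    backend="openai",
    backend_kwargs={
        "model_name": "Qwen/Qwen2.5-7B-Instruct",
        "base_url": "http://localhost:8000/v1",  # assumed kwarg; OpenAI-compatible server
        "api_key": "EMPTY",                      # assumed kwarg; vLLM ignores the key
    },
)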

Relevant Reading

If you use this code or repository in your research, please cite:

@misc{zhang2026recursivelanguagemodels,
      title={Recursive Language Models},
      author={Alex L. Zhang and Tim Kraska and Omar Khattab},
      year={2026},
      eprint={2512.24601},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2512.24601},
}

Optional: Trajectory metadata and logging

RLMChatCompletion has an optional metadata field (default None) that holds the full trajectory (run config + all iterations and sub-calls) so you can reconstruct the run. Pass an RLMLogger to capture it:

  • In-memory only (trajectory on completion.metadata): logger=RLMLogger() (no log_dir); see the sketch after this list.
  • Also save to disk (JSONL for the visualizer): logger=RLMLogger(log_dir="./logs").
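
For example, keeping the trajectory in memory only (a sketch; the exact shape of metadata is whatever the logger captured):

from rlm import RLM
from rlm.logger import RLMLogger

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    logger=RLMLogger(),  # no log_dir: trajectory stays in memory
)
completion = rlm.completion("Summarize the context variable.")
trajectory = completion.metadata  # run config + all iterations and sub-calls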

Optional Debugging: Visualizing RLM Trajectories

We provide a simple visualizer to inspect code, sub-LM, and root-LM calls. Use RLMLogger(log_dir="./logs") so each completion writes a .jsonl file:

from rlm.logger import RLMLogger
from rlm import RLM

logger = RLMLogger(log_dir="./logs")
rlm = RLM(..., logger=logger)

To run the visualizer locally, we use Node.js and shadcn/ui:

cd visualizer/
npm install        # install dependencies
npm run dev        # default localhost:3001

You'll then have the option to select saved .jsonl files from your log directory.

RLM Visualizer Example

Download files

Download the file for your platform.

Source Distribution

rlms-0.1.1.tar.gz (86.3 kB)

Built Distribution

rlms-0.1.1-py3-none-any.whl (84.4 kB)

File details

Details for the file rlms-0.1.1.tar.gz.

File metadata

  • Download URL: rlms-0.1.1.tar.gz
  • Size: 86.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for rlms-0.1.1.tar.gz:

  • SHA256: 926487e3512526f27505a8306bcc2e20648991f99b232d9be6c5b76e8fec8731
  • MD5: b94aeda97acfb06303715c9b70849211
  • BLAKE2b-256: 640913fd4abcf935fbbfe33c06f14d2c814513ee43d4ff1e40f6aee51f8b2039


Provenance

The following attestation bundles were made for rlms-0.1.1.tar.gz:

Publisher: publish.yml on alexzhang13/rlm


File details

Details for the file rlms-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: rlms-0.1.1-py3-none-any.whl
  • Size: 84.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for rlms-0.1.1-py3-none-any.whl:

  • SHA256: 3dff7d40013065ef62d57deb50e2578ba658045829ed340f94de9b59a49d5369
  • MD5: cffc09821dad59d2d57c76fd7bcdd91b
  • BLAKE2b-256: 5ca96538aec016b4e056c5553336a1b42134c108a9305db3e4ca8945dc00cebd


Provenance

The following attestation bundles were made for rlms-0.1.1-py3-none-any.whl:

Publisher: publish.yml on alexzhang13/rlm

