
groknroll - CLI coding agent with unlimited context via Recursive Language Models (RLMs). Local, autonomous.

Recursive Language Models (RLMs)

Full Paper | Blogpost | Documentation | RLM Minimal

Overview

Recursive Language Models (RLMs) are a task-agnostic inference paradigm that lets a language model (LM) handle near-infinite-length contexts by programmatically examining, decomposing, and recursively calling itself over its input. RLMs replace the canonical llm.completion(prompt, model) call with an rlm.completion(prompt, model) call, offloading the context into a variable in a REPL environment that the LM can interact with and launch sub-LM calls from.
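
To make the control flow concrete, here is a minimal, hypothetical sketch of the loop an RLM runs. This is illustrative only, not groknroll's actual implementation; names like rlm_completion, the `_out` convention, and the FINAL: stop signal are assumptions for the example.

# Toy sketch of the RLM control loop (illustrative, not groknroll's code).
def rlm_completion(prompt: str, context: str, llm_call) -> str:
    # The root LM never sees `context` directly: it lives as a variable
    # inside a REPL namespace, alongside an `llm` handle for sub-calls.
    namespace = {"context": context, "llm": llm_call}
    transcript = f"A variable `context` ({len(context)} chars) is in your REPL.\n"
    while True:
        action = llm_call(transcript + prompt)   # root LM proposes REPL code
        if action.startswith("FINAL:"):          # model signals it is finished
            return action[len("FINAL:"):]
        exec(action, namespace)                  # run code; may slice context
        transcript += f"\n>>> {action}\n{namespace.pop('_out', '')}"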

This repository provides an extensible inference engine for running RLMs on top of standard API-based and local LLMs. The idea and initial experiments were proposed in a 2025 blogpost, with expanded results in an arXiv preprint.

[!NOTE] This repository contains inference code for RLMs with support for various sandbox environments. Open-source contributions are welcome. This repository is maintained by the authors of the paper from the MIT OASYS lab.

Installation

groknroll is now available on PyPI! 🎉

pip install groknroll

To install the latest from main:

pip install git+https://github.com/tekcin/groknroll.git

⚠️ Installation Troubleshooting

If you get an error like Requires-Python >=3.11, you need Python 3.11 or higher:

# Option 1: Use a project virtual environment (recommended)
cd /path/to/your/project
source .venv/bin/activate
uv pip install groknroll

# Option 2: Install Python 3.11+ system-wide
brew install python@3.11
python3.11 -m venv groknroll-env
source groknroll-env/bin/activate
pip install groknroll

See QUICK_FIX.md and INSTALLATION_FIX_GUIDE.md for detailed troubleshooting.

Quick Setup

Set up the dependencies with uv (or your virtual environment of choice):

curl -LsSf https://astral.sh/uv/install.sh | sh
uv init && uv venv --python 3.12  # change version as needed
uv pip install -e .

This project includes a Makefile to simplify common tasks.

  • make install: Install base dependencies.
  • make check: Run linter, formatter, and tests.

For a quick test, the following runs an RLM query with the OpenAI client, using your OPENAI_API_KEY environment variable (swap in another client if you prefer). It prints console output and writes a log that you can explore in the visualizer.

make quickstart

The default RLM client uses a REPL environment that runs on the host process through Python exec calls. It uses the same virtual environment as the host process (i.e. it will have access to the same dependencies), but with some limitations in its available global modules. As an example, we can call RLM completions using GPT-5-nano:

from groknroll import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    verbose=True,  # For printing to console with rich, disabled by default.
)

print(rlm.completion("Print me the first 100 powers of two, each on a newline.").response)

Oracle Agent - Codebase Knowledge System 🔮

The Oracle Agent is an RLM-powered tool for codebase question answering. Because it builds on RLM's unlimited context, it can index an arbitrarily large codebase and answer questions about any part of it.

from groknroll import OracleAgent

# Initialize Oracle for your project
oracle = OracleAgent(
    project_path=".",
    backend="openai",
    model="gpt-4o-mini"
)

# Ask any question about your codebase
response = oracle.ask("Where is the RLM class defined?")
print(response.answer)

# Use convenience methods
oracle.find_class("RLM")
oracle.find_function("completion")
oracle.get_architecture_overview()
oracle.how_to_add_feature("support for Claude AI backend")

Features:

  • Unlimited Context: Handles arbitrarily large codebases via RLM
  • Automatic Indexing: Parses files, functions, classes, imports using AST
  • Semantic Understanding: Understands what your code does, not just keywords
  • Comprehensive Answers: Detailed explanations with code examples and sources

See ORACLE_AGENT.md for full documentation.

REPL Environments

We support two types of REPL environments -- isolated and non-isolated. Non-isolated environments (the default) execute code on the same machine as the RLM (e.g. through exec), which is reasonable for low-risk local tasks like simple benchmarking, but can be problematic if prompts or tool calls may be influenced by untrusted users. Fully isolated environments use cloud-based sandboxes (e.g. Prime Sandboxes, Modal Sandboxes) to run the code generated by the RLM, ensuring complete isolation from the host process. New environments can be added, but we natively support the following: local (default), docker, modal, prime.

rlm = RLM(
    environment="...", # "local", "docker", "modal", "prime"
    environment_kwargs={...},
)

Local Environments

The default local environment LocalREPL runs in the same process as the RLM itself, with restricted global and local namespaces as a minimal safety measure. Using this REPL is generally safe for experimentation, but it should not be used in production settings. It also shares the host process's virtual environment (e.g. Conda or uv).
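
As a rough mental model (a sketch, not LocalREPL's actual source), an exec-based REPL with a persistent namespace and trimmed-down globals looks something like the following; the specific allowed builtins here are an assumption for the example.

import contextlib
import io

# Deliberately tiny set of builtins; what LocalREPL actually exposes
# differs, this is only for illustration.
SAFE_GLOBALS = {"__builtins__": {"print": print, "len": len, "range": range}}

def run_cell(code: str, namespace: dict) -> str:
    """Execute one cell in a persistent namespace, capturing its stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, SAFE_GLOBALS, namespace)  # same process as the host
    return buf.getvalue()

ns = {}
print(run_cell("x = 2 ** 10\nprint(x)", ns))  # 1024; `x` persists in ns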

Docker (requires Docker installed)

We also support a Docker-based environment, DockerREPL, that runs the REPL inside a Docker container. By default we use the python:3.11-slim image, but you can specify a custom image as well.
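
For example, selecting the Docker environment might look like the sketch below; the environment_kwargs key for the image is an assumption, so check DockerREPL's signature for the real parameter names.

from groknroll import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    environment="docker",
    environment_kwargs={"image": "python:3.11-slim"},  # hypothetical key; default image shown
)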

Isolated Environments

We support several REPL environments that run on separate, cloud-based machines. Whenever a recursive sub-call is made in these environments, it is routed back through the host process.
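
Switching to an isolated environment is the same constructor call with a different environment string. A sketch, with any provider-specific environment_kwargs omitted:

from groknroll import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-5-nano"},
    environment="modal",  # or "prime"; code executes in a cloud sandbox,
)                         # while sub-LM calls go back through the host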

Modal Sandboxes

To use Modal Sandboxes as the REPL environment, install the modal library and authenticate your Modal account:

uv add modal  # add modal library
modal setup   # authenticate account

Prime Intellect Sandboxes

[!NOTE] Prime Intellect Sandboxes are currently a beta feature. See the documentation for more information. We noticed slow runtimes when using these sandboxes, which is currently an open issue.

To use Prime Sandboxes, install the SDK and set your API key:

uv pip install -e ".[prime]"
export PRIME_API_KEY=...

Model Providers

We currently support most major clients (OpenAI, Anthropic), as well as router platforms (OpenRouter, Portkey, LiteLLM). For local models, we recommend vLLM, which exposes an OpenAI-compatible interface. To view or add support for more clients, start by looking at rlm/clients/.
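
As an example of the vLLM route, the sketch below points the OpenAI backend at a local OpenAI-compatible vLLM server. The backend_kwargs keys ("base_url", "api_key") and the model name are assumptions for illustration; see rlm/clients/ for the actual parameter names.

from groknroll import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={
        "model_name": "meta-llama/Llama-3.1-8B-Instruct",  # hypothetical local model
        "base_url": "http://localhost:8000/v1",            # vLLM's default server address
        "api_key": "EMPTY",                                # vLLM does not check the key
    },
)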

Relevant Reading

If you use this code or repository in your research, please cite:

@misc{zhang2025recursivelanguagemodels,
      title={Recursive Language Models}, 
      author={Alex L. Zhang and Tim Kraska and Omar Khattab},
      year={2025},
      eprint={2512.24601},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2512.24601}, 
}

Optional Debugging: Visualizing RLM Trajectories

We additionally provide a simple visualizer tool for examining the code, sub-LM, and root-LM calls of an RLM trajectory. To save a log file (.jsonl) for every completion call, which can then be opened in the visualizer, initialize an RLMLogger object and pass it to the RLM at initialization:

from groknroll.logger import RLMLogger
from groknroll import RLM

logger = RLMLogger(log_dir="./logs")
rlm = RLM(
    ...
    logger=logger
)

To run the visualizer locally, we use Node.js and shadcn/ui:

cd visualizer/
npm install        # first run only
npm run dev        # default localhost:3001

You'll then be able to select saved .jsonl log files to view.

RLM Visualizer Example
