
agentic-codebase-navigator

Python 3.12+ · License: MIT

An agentic codebase navigator built on Recursive Language Model (RLM) patterns. RLM enables LLMs to execute Python code iteratively, inspect results, and refine their approach until reaching a final answer.

  • PyPI / distribution name: agentic-codebase-navigator
  • Python import package: rlm

Installation

Via pip

pip install agentic-codebase-navigator

Via uv (recommended)

uv pip install agentic-codebase-navigator

With optional LLM providers

# OpenAI (included by default)
pip install agentic-codebase-navigator

# Anthropic
pip install "agentic-codebase-navigator[llm-anthropic]"

# Google Gemini
pip install "agentic-codebase-navigator[llm-gemini]"

# Azure OpenAI
pip install "agentic-codebase-navigator[llm-azure-openai]"

# LiteLLM (unified provider)
pip install "agentic-codebase-navigator[llm-litellm]"

# Portkey
pip install "agentic-codebase-navigator[llm-portkey]"

Quick Start

Basic usage with OpenAI

from rlm import create_rlm
from rlm.adapters.llm import OpenAIAdapter

# Create RLM with OpenAI (requires OPENAI_API_KEY env var)
rlm = create_rlm(
    OpenAIAdapter(model="gpt-4o"),
    environment="local",
    max_iterations=10,
)

# Run a completion
result = rlm.completion("What is 2 + 2? Use Python to calculate.")
print(result.response)

Using MockLLM for testing (no API keys required)

from rlm import create_rlm
from rlm.adapters.llm import MockLLMAdapter

# Deterministic mock for testing
rlm = create_rlm(
    MockLLMAdapter(model="test", script=["```repl\nx = 42\n```\nFINAL_VAR('x')"]),
    environment="local",
    max_iterations=2,
)

result = rlm.completion("test prompt")
assert result.response == "42"

Execution Environments

RLM supports multiple execution environments for running Python code blocks:

Local Environment (default)

Executes code in-process with a persistent namespace. Fast and convenient for development.

rlm = create_rlm(
    llm,
    environment="local",
    environment_kwargs={
        "execute_timeout_s": 30.0,           # Execution timeout (SIGALRM-based)
        "broker_timeout_s": 60.0,            # Timeout for nested LLM calls
        "allowed_import_roots": {"json", "math", "collections"},  # Allowed imports
    },
)

Default allowed imports: collections, dataclasses, datetime, decimal, functools, itertools, json, math, pathlib, random, re, statistics, string, textwrap, typing, uuid
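The allowlist idea can be illustrated with a small standalone check using the standard-library `ast` module (a sketch of the concept, not the library's actual implementation):

```python
import ast

def imported_roots(code: str) -> set[str]:
    """Collect the top-level module roots imported by a code snippet."""
    roots: set[str] = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            roots.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            roots.add(node.module.split(".")[0])
    return roots

ALLOWED = {"json", "math", "collections"}

def check_imports(code: str) -> None:
    """Raise if the snippet imports anything outside the allowlist."""
    disallowed = imported_roots(code) - ALLOWED
    if disallowed:
        raise ImportError(f"disallowed imports: {sorted(disallowed)}")

check_imports("import math\nfrom json import loads")  # passes silently
```

Static inspection like this catches direct imports before execution; it is not a full sandbox on its own, which is why the Docker environment below is recommended for untrusted code.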

Docker Environment

Executes code in an isolated container. Recommended for untrusted code or production use.

rlm = create_rlm(
    llm,
    environment="docker",
    environment_kwargs={
        "image": "python:3.12-slim",         # Docker image
        "subprocess_timeout_s": 120.0,       # Container execution timeout
        "proxy_http_timeout_s": 60.0,        # HTTP proxy timeout for LLM calls
    },
)

Requirements:

  • Docker daemon running (docker info succeeds)
  • Docker 20.10+ (for --add-host host.docker.internal:host-gateway)

Multi-Backend Routing

RLM supports registering multiple LLM backends. Code blocks can route nested calls to specific models:

from rlm import create_rlm
from rlm.adapters.llm import MockLLMAdapter

# Root model generates code that calls a sub-model
root_script = """```repl\nresponse = llm_query("What is the capital of France?", model="sub")\n```\nFINAL_VAR('response')"""

rlm = create_rlm(
    MockLLMAdapter(model="root", script=[root_script]),
    other_llms=[MockLLMAdapter(model="sub", script=["Paris"])],
    environment="local",
    max_iterations=3,
)

result = rlm.completion("hello")
assert result.response == "Paris"

# Usage is aggregated across all models
print(result.usage_summary.model_usage_summaries["root"].total_calls)  # 1
print(result.usage_summary.model_usage_summaries["sub"].total_calls)   # 1

Batched LLM queries

For efficiency, code can batch multiple LLM calls:

# Inside a ```repl block:
responses = llm_query_batched([
    "Question 1",
    "Question 2",
    "Question 3",
], model="fast-model")
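The benefit of batching is that independent queries can run concurrently instead of serially. The underlying idea can be sketched in plain Python with a thread pool and a stand-in query function (`fake_llm_query` is hypothetical; `llm_query_batched` is the helper the library actually exposes inside repl blocks):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_llm_query(prompt: str) -> str:
    # Stand-in for a real LLM call; here it just echoes the prompt.
    return f"answer to: {prompt}"

def query_batched(prompts: list[str], max_workers: int = 4) -> list[str]:
    """Run independent queries concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Executor.map yields results in the order of the inputs.
        return list(pool.map(fake_llm_query, prompts))

print(query_batched(["Question 1", "Question 2", "Question 3"]))
```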

CLI Usage

# Show version
rlm --version

# Run a completion with mock backend (no API keys)
rlm completion "What is 2+2?" --backend mock --model-name test

# Run with OpenAI
rlm completion "Explain recursion" --backend openai --model-name gpt-4o

# Output full JSON response
rlm completion "Calculate pi" --backend mock --json

# Enable JSONL logging
rlm completion "Hello" --backend mock --jsonl-log-dir ./logs

Configuration-Driven Usage

For complex setups, use configuration objects:

from rlm import create_rlm_from_config, RLMConfig, LLMConfig, EnvironmentConfig

config = RLMConfig(
    llm=LLMConfig(backend="openai", model_name="gpt-4o"),
    other_llms=[
        LLMConfig(backend="anthropic", model_name="claude-3-5-sonnet-20241022"),
    ],
    env=EnvironmentConfig(environment="docker"),
    max_iterations=15,
    max_depth=1,
)

rlm = create_rlm_from_config(config)
result = rlm.completion("Solve this step by step...")

Async Support

import asyncio
from rlm import create_rlm
from rlm.adapters.llm import MockLLMAdapter

async def main():
    rlm = create_rlm(
        MockLLMAdapter(model="test", script=["FINAL('done')"]),
        environment="local",
    )
    result = await rlm.acompletion("async test")
    print(result.response)

asyncio.run(main())

Tool Calling (Agent Mode)

RLM supports native tool calling across all LLM providers, enabling true agentic workflows where the model can invoke functions and use their results.

Basic Tool Usage

from rlm import create_rlm
from rlm.adapters.llm import OpenAIAdapter
from rlm.adapters.tools import tool, ToolRegistry

# Define tools using the @tool decorator
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72°F"

@tool
def calculate(expression: str) -> float:
    """Evaluate a mathematical expression."""
    return eval(expression)  # demo only: never eval untrusted input in production

# Create a tool registry
registry = ToolRegistry()
registry.register(get_weather)
registry.register(calculate)

# Create RLM with tools
rlm = create_rlm(
    OpenAIAdapter(model="gpt-4o"),
    environment="local",
    tools=registry,
)

# The model can now call tools automatically
result = rlm.completion("What's the weather in Tokyo and what's 15 * 7?")

Tool Choice Control

Control how the model uses tools:

# Let model decide when to use tools (default)
result = rlm.completion("...", tool_choice="auto")

# Force tool usage
result = rlm.completion("...", tool_choice="required")

# Disable tools for this call
result = rlm.completion("...", tool_choice="none")

# Force a specific tool
result = rlm.completion("...", tool_choice="get_weather")

Structured Outputs with Pydantic

Use Pydantic models for type-safe structured outputs:

from pydantic import BaseModel
from rlm.adapters.tools import pydantic_to_schema

class WeatherReport(BaseModel):
    city: str
    temperature: float
    conditions: str
    humidity: int

# Pydantic models are automatically converted to JSON Schema
schema = pydantic_to_schema(WeatherReport)
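The conversion concept — mapping typed fields to JSON Schema properties — can be sketched without Pydantic at all. The helper below is purely illustrative (the real `pydantic_to_schema` delegates to Pydantic's own schema generation):

```python
# Illustrative sketch of the annotations -> JSON Schema idea.
PY_TO_JSON = {str: "string", float: "number", int: "integer", bool: "boolean"}

def annotations_to_schema(cls: type) -> dict:
    """Build a JSON-Schema-like dict from a class's type annotations."""
    props = {
        name: {"type": PY_TO_JSON[tp]}
        for name, tp in cls.__annotations__.items()
    }
    return {"type": "object", "properties": props, "required": list(props)}

class WeatherReport:
    city: str
    temperature: float
    conditions: str
    humidity: int

schema = annotations_to_schema(WeatherReport)
```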

Extension Protocols

Customize RLM's orchestrator behavior using duck-typed protocols:

from rlm import create_rlm
from rlm.domain import StoppingPolicy, ContextCompressor, NestedCallPolicy

# Custom stopping policy - stop after specific conditions
class TokenBudgetPolicy(StoppingPolicy):
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def should_stop(self, iteration: int, response: str, usage: dict) -> bool:
        self.used += usage.get("total_tokens", 0)
        return self.used >= self.max_tokens

# Use custom policy
rlm = create_rlm(
    llm,
    environment="local",
    stopping_policy=TokenBudgetPolicy(max_tokens=10000),
)

Available protocols:

  • StoppingPolicy: Control when the tool/iteration loop terminates
  • ContextCompressor: Compress conversation context between iterations
  • NestedCallPolicy: Configure handling of nested llm_query() calls

See docs/extending.md for detailed documentation.
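Because the protocols are duck-typed, a custom implementation is just a class with the right method. As an illustration, a context compressor that keeps the system message plus a tail window of recent messages might look like this (the `compress` signature here is an assumption for illustration; see `rlm.domain` for the actual protocol):

```python
# Hypothetical sketch: the compress() signature is assumed, not taken
# from the library's ContextCompressor protocol definition.
class TailWindowCompressor:
    """Keep the system message plus the most recent N non-system messages."""

    def __init__(self, window: int = 6):
        self.window = window

    def compress(self, messages: list[dict]) -> list[dict]:
        head = [m for m in messages if m.get("role") == "system"]
        tail = [m for m in messages if m.get("role") != "system"][-self.window:]
        return head + tail

msgs = [{"role": "system", "content": "be terse"}] + [
    {"role": "user", "content": f"turn {i}"} for i in range(10)
]
compact = TailWindowCompressor(window=3).compress(msgs)
```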

Relay Pipeline Library

Build type-safe, composable multi-step LLM workflows using a pipeline DSL:

from rlm.domain.relay import StateSpec, Pipeline, Baton
from rlm.adapters.relay.states import FunctionStateExecutor
from rlm.adapters.relay.executors import SyncPipelineExecutor

# Define states with typed inputs/outputs
# (analyze_fn and summarize_fn are your own step functions)
analyze = StateSpec[str, dict]("analyze", str, dict, FunctionStateExecutor(analyze_fn))
summarize = StateSpec[dict, str]("summarize", dict, str, FunctionStateExecutor(summarize_fn))

# Compose with operators: >> (sequence), | (parallel), .when() (conditional)
pipeline = Pipeline(analyze >> summarize)

# Execute with typed baton
executor = SyncPipelineExecutor(pipeline)
result = executor.run(Baton(payload="Analyze this codebase"))

Key capabilities:

  • Conditional routing: state.when(predicate) >> target with .otherwise() fallback
  • Parallel execution: (left | right).join(mode="all") with fan-out/fan-in
  • Nested pipelines: Use pipelines or full RLM agents as pipeline states
  • Token budgets: Track and enforce token consumption across pipeline runs
  • Compile-time validation: Type compatibility, reachability, and cycle detection

See docs/relay/overview.md for the full guide.
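The operator-composition mechanic behind `>>` can be demonstrated in a few lines of plain Python — this is a minimal illustration of how such a DSL is typically built, not the relay library's implementation:

```python
# Minimal sketch of operator-based state composition via __rshift__.
class State:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __rshift__(self, other: "State") -> "State":
        # a >> b produces a new state that pipes a's output into b.
        return State(
            f"{self.name}>>{other.name}",
            lambda payload: other.fn(self.fn(payload)),
        )

    def run(self, payload):
        return self.fn(payload)

analyze = State("analyze", lambda text: {"words": len(text.split())})
summarize = State("summarize", lambda d: f"{d['words']} words")

pipeline = analyze >> summarize
print(pipeline.run("Analyze this codebase"))  # 3 words
```

The real `StateSpec` adds what this sketch omits: typed input/output checking, `|` for parallelism, `.when()` routing, and compile-time graph validation.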

LLM Provider Configuration

Provider          Extra               Environment Variables
OpenAI            (default)           OPENAI_API_KEY
Anthropic         llm-anthropic       ANTHROPIC_API_KEY
Google Gemini     llm-gemini          GOOGLE_API_KEY
Azure OpenAI      llm-azure-openai    AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT
LiteLLM           llm-litellm         (varies by provider)
Portkey           llm-portkey         PORTKEY_API_KEY

Architecture

RLM uses a hexagonal (ports & adapters) architecture:

src/rlm/
├── domain/          # Pure business logic, ports (protocols), models
│   └── relay/       # Pipeline DSL: states, baton, validation, composition
├── application/     # Use cases, configuration
│   └── relay/       # Pipeline registry composer
├── infrastructure/  # Wire protocol, execution policies
├── adapters/
│   ├── llm/         # LLM providers (OpenAI, Anthropic, Gemini, etc.)
│   ├── environments/# Execution environments (local, docker)
│   ├── relay/       # Pipeline executors, state implementations
│   ├── tools/       # Tool calling infrastructure
│   ├── policies/    # Extension protocol implementations
│   ├── broker/      # TCP broker for nested LLM calls
│   └── loggers/     # Logging adapters (JSONL, console)
└── api/             # Public facade, factories, registries

Key design principles:

  • Domain layer has zero external dependencies
  • Adapters implement domain ports (protocols)
  • Dependencies flow inward (adapters -> application -> domain)
  • All LLM provider SDKs are lazy-imported (optional extras)
  • Extension protocols enable customization without modifying core code

Development

Setup

# Clone and setup
git clone https://github.com/Luiz-Frias/agentic-codebase-navigator.git
cd agentic-codebase-navigator

# Create venv with Python 3.12
uv python install 3.12
uv venv --python 3.12 .venv
source .venv/bin/activate

# Install with dev dependencies
uv sync --group dev --group test

Running Tests

# Unit tests (fast, hermetic)
uv run --group test pytest -m unit

# Integration tests (multi-component boundaries)
uv run --group test pytest -m integration

# End-to-end tests (public API flows). Docker-marked tests auto-skip if Docker isn't available.
uv run --group test pytest -m e2e

# Packaging smoke tests (build/install/import/CLI)
uv run --group test pytest -m packaging

# Performance/regression tests (opt-in)
uv run --group test pytest -m performance

# All tests
uv run --group test pytest

# With coverage
uv run --group test pytest --cov=rlm --cov-report=term-missing

Live provider smoke tests (opt-in)

These tests are skipped by default to avoid accidental spend. Enable with RLM_RUN_LIVE_LLM_TESTS=1 and the relevant API key:

RLM_RUN_LIVE_LLM_TESTS=1 OPENAI_API_KEY=... uv run --group test pytest -m "integration and live_llm"
RLM_RUN_LIVE_LLM_TESTS=1 ANTHROPIC_API_KEY=... uv run --group test pytest -m "integration and live_llm"

Code Quality

# Format
uv run --group dev ruff format src tests

# Lint
uv run --group dev ruff check src tests --fix

# Type check
uv run --group dev ty check src/rlm

API Reference

Core Classes

  • RLM - Main facade for running completions
  • ChatCompletion - Result object with response, usage, iterations
  • RLMConfig - Configuration dataclass for create_rlm_from_config

Factory Functions

  • create_rlm(llm, ...) - Create RLM with pre-built LLM adapter
  • create_rlm_from_config(config) - Create RLM from configuration object

Adapters

  • LLM: MockLLMAdapter, OpenAIAdapter, AnthropicAdapter, GeminiAdapter, AzureOpenAIAdapter, LiteLLMAdapter, PortkeyAdapter
  • Environment: LocalEnvironmentAdapter, DockerEnvironmentAdapter
  • Logger: JsonlLoggerAdapter, ConsoleLoggerAdapter, NoopLoggerAdapter
  • Tools: ToolRegistry, tool decorator, NativeToolAdapter

Relay Pipeline

  • StateSpec - Type-safe state descriptor with operators (>>, |, .when())
  • Pipeline - State graph builder with validation
  • Baton - Immutable request-response envelope
  • SyncPipelineExecutor / AsyncPipelineExecutor - Pipeline orchestrators
  • State Executors: FunctionStateExecutor, LLMStateExecutor, RLMStateExecutor, AsyncStateExecutor

Extension Protocols

  • StoppingPolicy - Control iteration termination
  • ContextCompressor - Compress context between iterations
  • NestedCallPolicy - Configure nested llm_query() handling

Acknowledgments

This project is built upon the excellent Recursive Language Models (RLM) research by Alex Zhang and colleagues from MIT OASYS Lab.

Resource             Link
Original Repository  github.com/alexzhang13/rlm
Research Paper       arXiv:2512.24601
Authors              Alex L. Zhang, Tim Kraska, Omar Khattab

This repository refactors the original RLM implementation into a hexagonal/modular monolith architecture while maintaining API compatibility. See ATTRIBUTION.md for full details.

Citation

@misc{zhang2025recursivelanguagemodels,
      title={Recursive Language Models},
      author={Alex L. Zhang and Tim Kraska and Omar Khattab},
      year={2025},
      eprint={2512.24601},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2512.24601},
}

License

MIT License - see LICENSE for details.

  • Original work: Copyright (c) 2025 Alex Zhang
  • Refactored work: Copyright (c) 2026 Luiz Frias
