
rlmgrep

Grep-shaped search powered by DSPy RLM. It accepts a natural-language query, scans the files you point at, and prints matching lines in a grep-like format.

Quickstart

uv tool install --python 3.11 rlmgrep
# or from GitHub:
# uv tool install --python 3.11 git+https://github.com/halfprice06/rlmgrep.git

export OPENAI_API_KEY=...  # or set keys in ~/.rlmgrep
rlmgrep "where are API keys read" rlmgrep/

Requirements

  • Python 3.11+
  • Deno runtime (DSPy RLM uses a Deno-based interpreter)
  • API key for your chosen provider (OpenAI, Anthropic, Gemini, etc.)

Install Deno

DSPy requires the Deno runtime. Install it with the official scripts:

macOS/Linux:

curl -fsSL https://deno.land/install.sh | sh

Windows PowerShell:

irm https://deno.land/install.ps1 | iex

Verify it is on your PATH:

deno --version

Usage

rlmgrep [options] "query" [paths...]

Common options:

  • -n show line numbers (default)
  • -C N context lines before/after (grep-style)
  • -A N context lines after
  • -B N context lines before
  • -m N max matching lines per file
  • -g GLOB include files matching glob (repeatable, comma-separated)
  • --type T include file types (repeatable, comma-separated)
  • --no-recursive do not recurse directories
  • -a, --text treat binary files as text
  • -y, --yes skip file count confirmation
  • --stdin-files treat stdin as newline-delimited file paths
  • --model, --sub-model override model names
  • --api-key, --api-base, --model-type override provider settings
  • --max-iterations, --max-llm-calls cap RLM search effort
  • -v, --verbose show verbose RLM output

Examples:

# Natural-language query over a repo
rlmgrep -n -C 2 "token parsing" rlmgrep/

# Restrict to Python files
rlmgrep "where config is read" --type py rlmgrep/

# Glob filters (repeatable or comma-separated)
rlmgrep "error handling" -g "**/*.py" -g "**/*.md" .

# Read from stdin (only when no paths are provided)
cat README.md | rlmgrep "install"

# Use rg/grep to find candidate files, then rlmgrep over that list
rg -l "token" . | rlmgrep --stdin-files --answer "what does this token control?"

Input selection

  • Directories are searched recursively by default. Use --no-recursive to stop recursion.
  • --type uses built-in type mappings (e.g., py, js, md); unknown values are treated as file extensions.
  • -g/--glob matches path globs against normalized paths (forward slashes).
  • Paths are printed relative to the current working directory when possible.
  • If no paths are provided, rlmgrep reads from stdin and uses the synthetic path <stdin>; if stdin is empty, it exits with code 2.
  • rlmgrep asks for confirmation when more than 200 files would be loaded (use -y/--yes to skip), and aborts when more than 1000 files would be loaded.
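
The selection rules above can be sketched in a few lines of Python. This is an illustrative sketch, not rlmgrep's actual internals; function names are invented, and fnmatch is a simplification of real ** glob handling:

```python
from fnmatch import fnmatch

WARN_THRESHOLD = 200   # prompt for confirmation above this (skipped with -y/--yes)
HARD_MAX = 1000        # abort above this

def matches_globs(path: str, globs: list[str]) -> bool:
    # Globs are matched against forward-slash paths, so Windows-style
    # separators are normalized first. No globs means "include everything".
    normalized = path.replace("\\", "/")
    return not globs or any(fnmatch(normalized, g) for g in globs)

def check_file_count(count: int, assume_yes: bool = False) -> str:
    # Decide whether to proceed, ask the user, or refuse outright.
    if count > HARD_MAX:
        return "abort"
    if count > WARN_THRESHOLD and not assume_yes:
        return "confirm"
    return "proceed"
```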

Output contract (stable for agents)

  • Matches are written to stdout; warnings go to stderr.
  • Output uses rg-style headings by default:
    • A file header line like ./path/to/file
    • Then line:\ttext for match lines and line-\ttext for context lines (a tab separates the line number from the text)
  • Line numbers are 1-based.
  • When context ranges are disjoint, a -- line separates groups.
  • Exit codes:
    • 0 = at least one match
    • 1 = no matches
    • 2 = usage/config/error

Agent tip: use -n and no context for parse-friendly output, then key off exit codes.
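
Because the format is stable, an agent can parse it with a short Python sketch. This assumes -n output with no context lines (so every non-header line is number:TAB:text); the function name is illustrative:

```python
import re

# A match line is "NUM:<TAB>text"; anything else (other than blank lines
# and "--" separators) is treated as a file header like ./path/to/file.
MATCH_RE = re.compile(r"^(\d+):\t(.*)$")

def parse_matches(output: str) -> dict[str, list[tuple[int, str]]]:
    results: dict[str, list[tuple[int, str]]] = {}
    current: str | None = None
    for line in output.splitlines():
        if not line or line == "--":
            continue
        m = MATCH_RE.match(line)
        if m and current is not None:
            results.setdefault(current, []).append((int(m.group(1)), m.group(2)))
        elif not m:
            current = line
    return results
```

Pair this with the exit codes: 0 means the dict is non-empty, 1 means it is empty, 2 means the output should not be parsed at all.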

Regex-style queries (best effort)

rlmgrep can interpret traditional regex-style patterns inside a natural-language prompt. The RLM may use Python (including re) in its internal REPL to approximate regex logic, but it is not guaranteed to behave exactly like grep/rg.

Example (best-effort regex semantics + extra context):

rlmgrep -n "Find Python functions that look like `def test_\\w+` and are marked as slow or flaky in nearby comments." .

If you need strict, deterministic regex behavior, use rg/grep.

Configuration

rlmgrep creates a default config automatically if missing. The config path is:

  • ~/.rlmgrep/config.toml

Default config values (from rlmgrep/config.py):

model = "openai/gpt-5.2"
sub_model = "openai/gpt-5-mini"
api_base = "https://api.openai.com/v1"
model_type = "responses"
temperature = 1.0
max_tokens = 64000
max_iterations = 10
max_llm_calls = 20
file_warn_threshold = 200
file_hard_max = 1000
# markitdown_enable_images = false
# markitdown_image_llm_model = "gpt-5-mini"
# markitdown_image_llm_provider = "openai"
# markitdown_image_llm_api_key = ""
# markitdown_image_llm_api_base = ""
# markitdown_image_llm_prompt = ""
# markitdown_enable_audio = false
# markitdown_audio_model = "gpt-4o-mini-transcribe-2025-12-15"
# markitdown_audio_provider = "openai"
# markitdown_audio_api_key = ""
# markitdown_audio_api_base = ""

CLI flags override config values. Model keys are resolved as:

  1. CLI flags (--api-key, --sub-api-key)
  2. Config values (api_key, sub_api_key)
  3. Provider env vars inferred from the model name:
    • OPENAI_API_KEY
    • ANTHROPIC_API_KEY
    • GEMINI_API_KEY

If more than one provider key is set and the model does not make the provider obvious, rlmgrep emits a warning and requires an explicit --api-key.
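
That resolution chain can be sketched as follows (a simplified illustration; the provider is inferred from the model-name prefix, and the ambiguity check mirrors the warning described above):

```python
import os

ENV_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def resolve_api_key(cli_key, config_key, model, env=os.environ):
    # 1. Explicit CLI flag wins.
    if cli_key:
        return cli_key
    # 2. Then the config file.
    if config_key:
        return config_key
    # 3. Then an env var inferred from the model name, e.g.
    #    "openai/gpt-5.2" -> provider "openai" -> OPENAI_API_KEY.
    provider = model.split("/", 1)[0]
    env_var = ENV_KEYS.get(provider)
    if env_var and env.get(env_var):
        return env[env_var]
    # Fall back only if exactly one provider key is set; otherwise the
    # choice is ambiguous and an explicit --api-key is required.
    set_keys = [v for v in ENV_KEYS.values() if env.get(v)]
    if len(set_keys) == 1:
        return env[set_keys[0]]
    raise ValueError("ambiguous or missing provider keys; pass --api-key explicitly")
```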

Non-text files (PDF, images, audio)

  • PDF files are parsed with pypdf. Each page gets a marker line ===== Page N =====, and output lines include a page=N suffix.
  • Images and audio are converted via markitdown when enabled in config. Image conversion supports openai, anthropic, and gemini providers; audio conversion currently supports openai only.
  • Converted image/audio text is cached in sidecar files named <original>.<ext>.md next to the original file and reused on subsequent runs.
  • Use -a/--text to force binary files to be read as text (UTF-8 with replacement).
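
The sidecar caching scheme can be sketched like this (function names are illustrative; the point is that photo.png caches to photo.png.md and conversion runs only on a cache miss):

```python
from pathlib import Path

def sidecar_path(original: Path) -> Path:
    # <original>.<ext>.md next to the original, e.g. photo.png -> photo.png.md
    return original.with_name(original.name + ".md")

def cached_or_convert(original: Path, convert) -> str:
    # Reuse the sidecar if it exists; otherwise convert once and cache.
    cache = sidecar_path(original)
    if cache.exists():
        return cache.read_text(encoding="utf-8")
    text = convert(original)
    cache.write_text(text, encoding="utf-8")
    return text
```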

Agent usage notes

  • Prefer narrow corpora (globs/types) to reduce token usage.
  • Use --max-llm-calls to cap costs; combine with small --max-iterations for safety.
  • For reproducible parsing, use -n and avoid context (-C/-A/-B).
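
Putting those notes together, an agent-side wrapper might build invocations like this (a hypothetical helper, not part of rlmgrep; it only constructs the argv and maps the documented exit codes):

```python
def build_command(query: str, paths: list[str], max_llm_calls: int = 10) -> list[str]:
    # Parse-friendly defaults: line numbers, no context, skip confirmation,
    # and a cost cap on LLM calls.
    return ["rlmgrep", "-n", "-y",
            "--max-llm-calls", str(max_llm_calls),
            query, *paths]

EXIT_MEANINGS = {0: "matches", 1: "no matches", 2: "error"}

def interpret_exit(code: int) -> str:
    return EXIT_MEANINGS.get(code, "error")
```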

Development

  • Install locally: pip install -e . or uv tool install .
  • Run: rlmgrep "query" .
  • No test suite is configured yet.

Security

Do not commit API keys. Use environment variables or ~/.rlmgrep/config.toml.
