A Python 2.7+ REPL for interacting with LLMs through an OpenAI Chat Completions-compatible API.

chatrepl

A minimal single-file coding agent and chat REPL for OpenAI-compatible Chat Completions APIs.

chatrepl.py is no longer just a plain chat interface. It now exposes a small tool-enabled agent loop with four built-in tools:

  • read
  • write
  • edit
  • bash

The model can call these tools automatically, receive their results, and continue for multiple steps until it finishes.
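
Each tool is advertised to the model in the Chat Completions "tools" format. As an illustration only (the exact descriptions and parameter names that chatrepl.py registers may differ), the schema for the `read` tool might look like:

```python
import json

# Hypothetical schema for the built-in `read` tool, in the
# Chat Completions tools format. Field names follow the API spec;
# the concrete descriptions here are illustrative.
READ_TOOL = {
    "type": "function",
    "function": {
        "name": "read",
        "description": "Read a text file, optionally from an offset.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "offset": {"type": "integer"},
                "limit": {"type": "integer"},
            },
            "required": ["path"],
        },
    },
}

# The schema must serialize cleanly, since it is sent with every request.
json.dumps(READ_TOOL)
```

A list of such schemas is passed as the `tools` field of each request, and the model responds with `tool_calls` entries naming one of them.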

Features

  • Minimal coding agent with automatic tool calling
  • Four local tools: read, write, edit, bash
  • Streaming assistant responses
  • Interactive Python-powered REPL
  • Non-interactive CLI mode for piped input or one-shot prompts
  • Conversation save/load as JSON
  • Markdown export
  • Editable multiline input through your editor
  • Optional discovery of AGENTS.md and CLAUDE.md from the working directory up to /
  • Works with OpenAI-compatible /chat/completions endpoints
  • Python 2.7+ and Python 3 compatible

Tool model

The agent is intentionally small and constrained.

read

Reads a text file with optional offset and limit arguments.

  • Truncates output to 2000 lines or 50 KB
  • Detects binary files and refuses to display them
  • Resolves paths relative to the current working directory
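
The bounds above can be sketched in a few lines. This is a simplified model of such a read tool, not the script's actual implementation; the NUL-byte binary check in particular is an assumed heuristic:

```python
import os

MAX_LINES = 2000
MAX_BYTES = 50 * 1024

def read_file(path, offset=0, limit=MAX_LINES):
    """Bounded file read: truncate to MAX_LINES / MAX_BYTES and
    refuse binary content (sketch, not chatrepl.py's own code)."""
    # Resolve relative paths against the current working directory.
    path = os.path.join(os.getcwd(), path)
    with open(path, "rb") as f:
        data = f.read(MAX_BYTES + 1)  # read one extra byte to detect overflow
    # Assumed heuristic: a NUL byte means the file is binary.
    if b"\x00" in data:
        return "[binary file, not displayed]"
    lines = data[:MAX_BYTES].decode("utf-8", "replace").splitlines()
    out = "\n".join(lines[offset:offset + limit])
    if len(data) > MAX_BYTES or offset + limit < len(lines):
        out += "\n[truncated]"
    return out
```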

write

Writes full file contents.

  • Creates parent directories automatically
  • Rewrites the destination file completely

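Both behaviors amount to a few lines of stdlib code. A minimal sketch (function name and return message are hypothetical, not chatrepl.py's own):

```python
import os

def write_file(path, content):
    """Create parent directories if needed, then replace the file's
    contents entirely (sketch of the write tool's behavior)."""
    parent = os.path.dirname(os.path.abspath(path))
    if parent and not os.path.isdir(parent):
        os.makedirs(parent)  # create intermediate directories
    with open(path, "w") as f:
        f.write(content)     # full rewrite, no appending
    return "wrote %d bytes to %s" % (len(content), path)
```
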
edit

Applies exact text replacements to an existing file.

Rules:

  • each oldText must match exactly once
  • edits must not overlap
  • all edits are matched against the original file

Returns a unified diff after a successful edit.
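
The exact-match rule and the diff output can be modeled with `str.count` and the stdlib `difflib` module. This is a simplified sketch of the contract, not the script's implementation:

```python
import difflib

def apply_edits(original, edits):
    """Apply (old_text, new_text) pairs. Each old_text must occur
    exactly once in the original text; returns the new text plus a
    unified diff (sketch of the edit tool's contract)."""
    new = original
    for old_text, new_text in edits:
        # Matched against the original file, per the rules above.
        if original.count(old_text) != 1:
            raise ValueError("oldText must match exactly once: %r" % old_text)
        new = new.replace(old_text, new_text)
    diff = "".join(difflib.unified_diff(
        original.splitlines(True), new.splitlines(True),
        "a/file", "b/file"))
    return new, diff
```

Validating every `oldText` against the original (rather than the partially edited text) is what makes the non-overlap rule necessary: two overlapping edits would both match the original but conflict when applied.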

bash

Runs a shell command in the current working directory.

  • live output is streamed to the terminal
  • output returned to the model is truncated to the last 2000 lines or 50 KB
  • default timeout is 30 seconds
  • long outputs are saved to a temporary log file
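
Leaving aside live streaming and the log file, the core of such a tool is a timeout-bounded subprocess call that keeps only the tail of the output. A Python 3 sketch (the script itself also supports 2.7, where the API differs):

```python
import subprocess

MAX_OUTPUT_LINES = 2000

def run_bash(command, timeout=30):
    """Run a shell command in the CWD; return the last
    MAX_OUTPUT_LINES lines of combined output (sketch only --
    live streaming and the temp log file are omitted)."""
    try:
        proc = subprocess.run(
            command, shell=True, capture_output=True,
            text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "[command timed out after %ds]" % timeout
    lines = (proc.stdout + proc.stderr).splitlines()
    if len(lines) > MAX_OUTPUT_LINES:
        lines = lines[-MAX_OUTPUT_LINES:]  # keep the tail, drop the head
    return "\n".join(lines)
```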

Installation

From source

git clone <repo-url>
cd chatrepl
pip install -r requirements.txt

Or run the script directly if its dependencies are already available.

Usage

Interactive REPL

python chatrepl.py \
  --api-key "your-api-key" \
  --base-url "https://api.openai.com/v1" \
  --model "gpt-4o"

You enter a Python interactive console with helper functions preloaded.

Available commands:

send(text='', image_path=None)
    Send a message and let the agent complete tool calls, optionally with a local image
append(text)
    Append a user message without sending
multiline()
    Append multiline input from your editor
txt(path)
    Append a UTF-8 text file as a user message
img(path)
    Append a local image as a user message; files are embedded as data URLs
load(path)
    Load a conversation from JSON
save(path)
    Save the current conversation to JSON
export(path)
    Export the conversation to Markdown
correct()
    Edit the last assistant response
show()
    Print all conversation messages
reset()
    Reset to only the system prompt
model(name=None)
    Show or change the model ID
base_url(url=None)
    Show or change the API base URL
stream(enabled=None)
    Show or change streaming mode
context_files(enabled=None)
    Enable or disable AGENTS.md / CLAUDE.md discovery
context_paths()
    Show discovered context file paths

Exit with exit() or EOF.

One-shot prompt

python chatrepl.py \
  --api-key "your-api-key" \
  --base-url "https://api.openai.com/v1" \
  --model "gpt-4o" \
  "Inspect this repository and summarize the build system"

Piped input

cat prompt.txt | python chatrepl.py \
  --api-key "your-api-key" \
  --base-url "https://api.openai.com/v1" \
  --model "gpt-4o"

CLI options

-k, --api-key           API key for the OpenAI-compatible endpoint
-u, --base-url          Base URL, e.g. http://localhost:11434/v1
-m, --model             Model ID
-l, --load              Load a conversation JSON file
--no-stream             Disable streaming
--no-context-files      Disable AGENTS.md and CLAUDE.md discovery

Agent behavior

The core agent loop is implemented in AgentConversation.send().

For each turn it:

  1. sends the current messages and tool schemas to the model
  2. receives assistant content and optional tool calls
  3. executes tool calls locally
  4. appends tool results as tool messages
  5. asks the model again
  6. repeats until there are no more tool calls or MAX_AGENT_STEPS is reached

Current limit:

  • MAX_AGENT_STEPS = 32
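
The steps above can be sketched against any Chat Completions-style client. Here `call_model` and `tools` are stand-ins for the real API call and tool registry, and tool arguments are passed as a dict (the real API delivers them as a JSON string to parse):

```python
MAX_AGENT_STEPS = 32

def agent_send(messages, call_model, tools):
    """Sketch of the agent loop: call the model, execute any tool
    calls locally, append results, repeat until the model stops
    requesting tools or the step budget runs out."""
    for _ in range(MAX_AGENT_STEPS):
        reply = call_model(messages)           # assistant message dict
        messages.append(reply)
        tool_calls = reply.get("tool_calls") or []
        if not tool_calls:
            return reply                       # no more tools: done
        for call in tool_calls:
            # Arguments shown as a dict for brevity; the real API
            # sends a JSON string that must be parsed first.
            result = tools[call["name"]](**call["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": result,
            })
    return messages[-1]  # step budget exhausted
```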

Context files

When enabled, the script searches from the current working directory up to the filesystem root for:

  • AGENTS.md
  • CLAUDE.md

Their contents are appended to the base system prompt.

This lets you keep project-local instructions outside the script itself.
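
The upward search is a simple walk from the working directory to the root. A sketch of the discovery step (function name hypothetical; how the script merges the contents into the system prompt is not shown):

```python
import os

CONTEXT_NAMES = ("AGENTS.md", "CLAUDE.md")

def find_context_files(start=None):
    """Walk from start (default: CWD) up to the filesystem root,
    collecting any AGENTS.md / CLAUDE.md files found on the way."""
    d = os.path.abspath(start or os.getcwd())
    found = []
    while True:
        for name in CONTEXT_NAMES:
            path = os.path.join(d, name)
            if os.path.isfile(path):
                found.append(path)
        parent = os.path.dirname(d)
        if parent == d:   # dirname of the root is the root itself
            return found
        d = parent
```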

Notes and limitations

  • This is a minimal agent, not a full task planner framework.
  • Tool execution is local and unsandboxed.
  • bash can modify the current machine and repository.
  • Tool support depends on your model and server correctly implementing OpenAI-compatible tool calling.
  • write overwrites full files; use edit for precise changes.
  • Output truncation is intentional to keep context bounded.

Example session

$ python chatrepl.py --api-key ... --base-url http://localhost:11434/v1 --model your-model
Welcome to pi-single-chatrepl. Use one of the following commands to interact with your-model:

>>> send('Inspect chatrepl.py and tell me whether it is agentic now.')
The script now has agentic behavior.

[tool read]
#!/usr/bin/env python
...

It defines tools, sends them to the model, executes tool calls, and loops until completion.
[tokens in=... out=... total_in=... total_out=...]

License

This project is licensed under the MIT License.

