# AgentTester

Run a prompt against multiple coding agents in parallel and compare results.

> ⚠️ Experimental — This project is under active development. APIs, config format, and CLI flags may change without notice.
Send a single prompt to multiple coding agents running in parallel and compare the results. Each agent works in its own git worktree on a separate branch so they never interfere with each other. Optionally, configure LLM evaluators to review each agent's diff and drive an iterative refinement loop.
## Install

```shell
uv pip install -e ".[dev]"
```
## Quick Start

```shell
# List built-in agents
agent-tester agents

# Run two agents on the same prompt
agent-tester run "Add unit tests for the auth module" --agents claude,aider

# Give the run a descriptive name (used in branch and report filenames)
agent-tester run "Refactor auth module" --agents claude,aider --name auth-refactor

# Use a prompt file
agent-tester run --prompt-file task.md --agents claude,codex,aider

# Keep worktrees for manual inspection
agent-tester run "Refactor logging" --agents claude,aider --keep-worktrees
```
## How It Works

- You provide a prompt and select agents
- AgentTester creates a git worktree + branch for each agent from the current HEAD
- All agents run concurrently, each in its own worktree
- Agent output streams to the terminal with colored prefixes
- A markdown comparison report is generated with diff stats and timing
- Worktrees are cleaned up (branches are preserved for `git diff`)

Branches are named `agenttester/<agent-name>/<run-name>` so you can compare results:

```shell
git diff agenttester/claude/auth-refactor agenttester/aider/auth-refactor
```

When no `--name` is given, a slug is derived from the first six words of the prompt plus a short hash (e.g. `add-unit-tests-for-the-auth-a3f2c1`).
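The slug derivation can be sketched as follows. This is a hypothetical reconstruction, not AgentTester's actual code: the word-splitting rules and the hash source are assumptions.

```python
import hashlib
import re


def derive_run_name(prompt: str) -> str:
    """Build a run-name slug: first six words of the prompt plus a short hash."""
    words = re.findall(r"[a-z0-9]+", prompt.lower())[:6]
    # Assumed hash input; the real tool may hash something else entirely.
    short_hash = hashlib.sha256(prompt.encode()).hexdigest()[:6]
    return "-".join(words + [short_hash])


print(derive_run_name("Add unit tests for the auth module"))
```

The hash suffix keeps two runs with similar prompts on distinct branches.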
## Configuration

Copy `config.example.yaml` to `agent-tester.yaml` (or `agent-tester.yml`) in your target repo to customize agents. Built-in presets are available for `claude`, `aider`, and `codex`.

### Config file discovery

Auto-detected local config files must use a `.yml` or `.yaml` extension. The following names are checked in order:

1. `agent-tester.yaml`
2. `agent-tester.yml`
3. `.agent-tester.yaml`
4. `.agent-tester.yml`

You can also pass a config file explicitly — no extension required:

```shell
agent-tester run "Fix the bug" --agents claude --config /path/to/myconfig
```

A global config at `~/.config/agenttester/config.yml` or `~/.config/agenttester/config.yaml` is merged automatically. Local project config takes precedence over global, which takes precedence over built-in presets.
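The three-way precedence can be pictured as a simple dict merge. This is a sketch only; whether AgentTester merges shallowly or per-key is not documented here, and the field values are illustrative:

```python
def resolve_agent_config(builtin: dict, global_cfg: dict, local_cfg: dict) -> dict:
    """Later sources win on conflicts: built-in < global < local."""
    merged = dict(builtin)
    merged.update(global_cfg)
    merged.update(local_cfg)
    return merged


cfg = resolve_agent_config(
    {"timeout": 600, "commit_style": "auto"},  # built-in preset
    {"timeout": 900},                          # ~/.config/agenttester/config.yml
    {"commit_style": "manual"},                # agent-tester.yaml in the repo
)
print(cfg)  # {'timeout': 900, 'commit_style': 'manual'}
```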
## Reports

Reports are written to `~/.config/agenttester/projects/<repo-name>/` by default. You can override this per-project.

Local config (`agent-tester.yaml` in your repo):

```yaml
reports_dir: ~/my-reports/myproject
```

Global config (`~/.config/agenttester/config.yml`), per named project:

```yaml
projects:
  myproject:
    reports_dir: ~/my-reports/myproject
```

Local config takes priority over the global `projects:` setting.
## Command Placeholders

- `{prompt}` — replaced with the shell-escaped prompt text
- `{prompt_file}` — replaced with a path to a temp file containing the prompt
- If neither placeholder is present, the prompt is piped to the agent via stdin
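For example, three hypothetical agent entries covering each substitution mode. The agent names and `mytool` commands are illustrative, and the `agents:` top-level key is assumed; see `config.example.yaml` for the real schema:

```yaml
agents:
  inline-prompt:
    command: "mytool --message {prompt}"           # shell-escaped prompt text
  file-prompt:
    command: "mytool --prompt-file {prompt_file}"  # path to a temp prompt file
  stdin-prompt:
    command: "mytool"                              # no placeholder: prompt on stdin
```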
## Agent Settings

| Field | Description | Default |
|---|---|---|
| `command` | Shell command template | (required) |
| `commit_style` | `auto` (agent commits) or `manual` (agenttester commits) | `auto` |
| `timeout` | Max seconds before the agent is killed | `600` |
| `env` | Extra environment variables (key-value map) | `{}` |
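Putting the four fields together, a custom agent entry might look like this. The agent name, aider flags, and the `agents:` top-level key are illustrative assumptions:

```yaml
agents:
  aider-gpt4o:
    command: "aider --yes --message {prompt}"
    commit_style: auto       # let the agent make its own commits
    timeout: 300             # kill the agent after 5 minutes
    env:
      AIDER_MODEL: gpt-4o    # extra environment for this agent only
```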
## Skills

Skills are markdown instruction files prepended to every agent prompt. They tell agents what they are allowed to do and how to behave. AgentTester ships with four built-in skills:

| Skill | Description |
|---|---|
| `editing.md` | Permission to read and edit files freely; look for reusable code before writing new code; prioritise readability |
| `testing.md` | Run the test suite and linter after making changes; don't mark a task complete until tests pass |
| `git.md` | Permitted git operations (branch, commit, push, pull, rebase); never push to the default branch |
| `bash.md` | Permitted bash operations scoped to code editing and testing; no system-level changes outside the worktree |
### Overriding or extending skills

You can override any built-in skill or add new ones at two levels:

- Global (`~/.config/agenttester/skills/`): applies to all projects.
- Local (`.agent-tester/skills/` inside your repo): applies to this project only.

A skill file with the same name as a built-in replaces it entirely. New filenames add additional instructions. Skills are always output in priority order — built-ins first, global skills second, local skills last — so user-defined instructions appear closest to the prompt and carry the most weight with the model.

```text
~/.config/agenttester/skills/testing.md     # overrides built-in testing skill globally
your-repo/.agent-tester/skills/testing.md   # overrides for this project only
your-repo/.agent-tester/skills/style.md     # adds a new skill for this project
```
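The replace-and-order rules can be sketched as follows. This is a hypothetical reconstruction; in particular, the real loader might keep an overridden built-in in its original position rather than moving it to the overriding tier:

```python
from pathlib import Path


def assemble_skills(builtin: Path, global_dir: Path, local_dir: Path) -> str:
    """Merge skill files: same filename replaces; output ordered built-in, global, local."""
    merged: dict[str, tuple[int, str]] = {}
    for priority, directory in enumerate([builtin, global_dir, local_dir]):
        if directory.is_dir():
            for f in sorted(directory.glob("*.md")):
                # A later directory replaces a same-named skill and moves it
                # to that directory's priority tier.
                merged[f.name] = (priority, f.read_text())
    ordered = sorted(merged.items(), key=lambda item: item[1][0])
    return "\n\n".join(text for _, (_, text) in ordered)
```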
## LLM-Based Code Evaluation

Configure one or more LLM evaluators to review each agent's diff after it runs. Multiple independent reviewers reduce the risk of hallucinated assessments, and an aggregate report is synthesized from all of them.

Add an `evaluators` block to your `agent-tester.yaml`:

```yaml
evaluators:
  - name: claude
    api: anthropic    # uses ANTHROPIC_API_KEY
    model: claude-opus-4-7
  - name: llama3
    endpoint: http://localhost:8004   # any OpenAI-compatible endpoint
    model: meta-llama/Meta-Llama-3-70B-Instruct

evaluation:
  inject_raw_reports: false    # true → send raw reports instead of aggregate
  max_aggregate_tokens: 2000   # aggregate is summarized before injection if too long
```
### Cloud providers (Azure, Bedrock, Vertex)

Define a `providers` block to share credentials across multiple evaluators or REPL model agents. Each provider entry requires a `type` field. Model-level fields override the provider defaults.

#### Provider types

| `type` | Description | Install |
|---|---|---|
| `openai` | Any OpenAI-compatible endpoint (Azure AI Foundry, GCP Vertex, vLLM, etc.) | built-in |
| `anthropic` | Direct Anthropic Messages API | built-in |
| `bedrock` | AWS Bedrock Converse API via boto3 | `pip install agenttester[aws]` |
### OpenAI-compatible providers (Azure, Vertex, etc.)

```yaml
providers:
  azure:
    type: openai
    endpoint: https://my-resource.openai.azure.com
    api_key_env: AZURE_OPENAI_KEY   # env var holding the API key
  vertex:
    type: openai
    endpoint: https://us-central1-aiplatform.googleapis.com/v1beta1/projects/my-project/locations/us-central1/endpoints/openapi
    api_key_env: VERTEX_AI_KEY

evaluators:
  - name: gpt-4o
    provider: azure           # inherits endpoint + api_key_env
    model: gpt-4o
  - name: gemini
    provider: vertex
    model: google/gemini-2.0-flash-001
    api_key_env: CUSTOM_KEY   # model-level override of api_key_env
```
### AWS Bedrock

Requires `pip install agenttester[aws]`. Three auth modes are supported; the first configured wins:

```yaml
providers:
  # 1. Named AWS CLI profile (SSO, assumed roles, etc.)
  bedrock-sso:
    type: bedrock
    region: us-east-1
    aws_profile: my-sso-profile

  # 2. Explicit credentials from environment variables
  bedrock-keys:
    type: bedrock
    region: us-east-1
    aws_access_key_id_env: MY_AWS_KEY_ID
    aws_secret_access_key_env: MY_AWS_SECRET
    aws_session_token_env: MY_AWS_TOKEN   # optional

  # 3. Default boto3 credential chain (env vars, ~/.aws/credentials, IAM role)
  bedrock-default:
    type: bedrock
    region: us-east-1

evaluators:
  - name: claude-bedrock
    provider: bedrock-sso
    model: anthropic.claude-3-5-sonnet-20241022-v2:0
```
REPL models support any provider type — including Bedrock — through a `models:` section that accepts the same provider references as `evaluators:`:

```yaml
models:
  claude-bedrock:
    provider: bedrock-sso   # references a named bedrock provider
    model: anthropic.claude-3-5-sonnet-20241022-v2:0
  azure-gpt4o:
    provider: azure         # references a named openai provider
    model: gpt-4o
  local-llm:
    endpoint: http://localhost:8001   # inline OpenAI-compatible endpoint
    model: meta-llama/Meta-Llama-3-8B-Instruct
    api_key_env: MY_KEY     # optional bearer token
```

Agent entries whose command matches `agent-tester query <endpoint> <model> {prompt}` are also discovered automatically for backward compatibility.
After each iteration, each evaluator independently critiques every agent's diff for:
- Accuracy — does the code implement what was asked?
- Readability — is it clear and well-named?
- Code smells — duplication, dead code, poor design
- Correctness — bugs, missed edge cases, unsafe patterns
An aggregate assessment is then synthesized across evaluators. The terminal shows the aggregate; raw per-evaluator reports are preserved in the markdown report.
## Iterative Refinement

When evaluators are configured, AgentTester enters a refinement loop:

- Agents run and commit their changes (`iter-1` commit message)
- Evaluators review each agent's diff
- You select which agents to re-run (1–all, or press Enter to stop)
- Selected agents re-run with the aggregate feedback injected into their prompt
- New commits are appended to the same branch (`iter-2`, `iter-3`, …)
- New evaluator reports are generated for each iteration

All iterations land on the same branch — use `git log` to see the progression.
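The loop's control flow, reduced to a hypothetical sketch — the real orchestration lives inside AgentTester and runs agents concurrently, and the callback names here are invented for illustration:

```python
def refinement_loop(agents, run_agent, evaluate, select_rerun, max_iters=10):
    """Run agents, evaluate, then re-run the selected subset with feedback."""
    feedback = None
    for i in range(1, max_iters + 1):
        for agent in agents:
            # Each iteration commits to the agent's branch as iter-<i>
            run_agent(agent, feedback, commit_msg=f"iter-{i}")
        feedback = evaluate(agents)    # aggregate evaluator assessment
        agents = select_rerun(agents)  # user picks agents to continue; [] stops
        if not agents:
            break
```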
## Interactive Model REPL

For comparing responses from vLLM model servers interactively, with persistent conversation history within a session:

```shell
agent-tester repl                          # auto-discovers agent-tester.yaml
agent-tester repl --config custom.yaml     # explicit config path
agent-tester repl --session my-session     # save/restore conversation history
agent-tester repl --workdir /path/to/repo  # enable tool use with a target repo
```

The REPL fans out each prompt to all configured models in parallel and maintains separate conversation history per model. Use `/reset` to clear history, `@modelname message` to address a single model, or `exit` to quit. Model names tab-complete after `@`.
### Sessions

Pass `--session <name>` to persist conversation history across REPL invocations. On exit, each model's history is saved to `~/.config/agenttester/sessions/<name>.json`. The next time you run `repl --session <name>`, history is restored and the conversation continues where it left off.
### Tool use and branches

Pass `--workdir <dir>` to enable an agent loop for OpenAI-compatible models. Each model gains access to `bash`, `read_file`, `write_file`, `git_clone`, `git_commit`, and `git_push` tools. When `--workdir` is a git repo, each model automatically works in its own worktree on a dedicated branch:

```text
agenttester/<model-name>/<session-name>
```

Use `--pem <path>` to authenticate git operations over SSH. Combine flags for a full multi-model coding workflow:

```shell
agent-tester repl \
  --session sprint-42 \
  --workdir ~/dev/my-project \
  --pem ~/.ssh/deploy_key
```

Config resolution follows the same priority as `run`: global config first, then local (or explicit) config, with local taking precedence on conflicts.

See `config.example.yaml` for full configuration examples.
## Development

```shell
uv pip install -e ".[dev]"
ruff check src/ tests/
ruff format src/ tests/
pytest
```
## Docker

```shell
# Run against the current directory
docker compose run --rm agent-tester run "Fix the bug" --agents claude

# Run against a different repo
REPO_PATH=/path/to/repo docker compose run --rm agent-tester run "Add tests" --agents claude,aider
```
## Library Usage

```python
import asyncio
from pathlib import Path

from rich.console import Console

from agenttester import Orchestrator, load_config
from agenttester.config import get_reports_dir


async def main():
    repo = Path(".").resolve()
    agents = load_config()
    selected = [agents["claude"], agents["aider"]]
    orch = Orchestrator(repo, Console(), get_reports_dir(repo))
    results = await orch.run("Add unit tests", selected, run_name="add-tests")
    for r in results:
        print(f"{r.agent_name}: exit={r.exit_code} duration={r.duration:.1f}s")


asyncio.run(main())
```
## File details

### agenttester-0.11.3.tar.gz

- Size: 84.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `fbf12ec9e3e56e38bfc1a52bfc672a955d787c25c00d9924d6acb88a5979f4b9` |
| MD5 | `40bc522ead0466fea677e9891c12658a` |
| BLAKE2b-256 | `285b5adbd5302c65ca7672a83cba8b12e80ba572a50e9d78b18080dee684a322` |

### agenttester-0.11.3-py3-none-any.whl

- Size: 48.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c30c9cca798b2f891b1ce677aed7955d05925c0b2c97471a725440a1c4e4a9e4` |
| MD5 | `b10a57214ecfd06c01645330197066df` |
| BLAKE2b-256 | `8184d43223f183634da48cb40b785501725261b2a266be75599e1f7da0fd72b9` |