
AutoArena

Create leaderboards ranking LLM outputs against one another using automated judge evaluation

Badges: Apache-2.0 License · CI · Test Coverage · PyPI Version · Supported Python Versions · Slack


  • 🏆 Rank outputs from different LLMs, RAG setups, and prompts to find the best configuration of your system
  • ⚔️ Perform automated head-to-head evaluation using judges from OpenAI, Anthropic, Cohere, and more
  • 🤖 Define and run your own custom judges, connecting to internal services or implementing bespoke logic
  • 💻 Run the application locally, retaining full control over your environment and data

AutoArena user interface

🤔 Why Head-to-Head Evaluation?

  • LLMs are better at judging responses head-to-head than in isolation (arXiv:2408.08688). Leaderboard rankings computed from Elo scores over many automated side-by-side comparisons should therefore be more trustworthy than leaderboards built from metrics computed on each model's responses independently; see the Elo sketch after this list.
  • The LMSYS Chatbot Arena has, for many practitioners, replaced static benchmarks as the trusted leaderboard for foundation model performance (arXiv:2403.04132). Why not apply the same approach to your own foundation model selection, RAG system setup, or prompt engineering efforts?
  • Using a "jury" of multiple smaller models from different model families like gpt-4o-mini, command-r, and claude-3-haiku generally yields better accuracy than a single frontier judge like gpt-4o — while being faster and much cheaper to run. AutoArena is built around this technique, called PoLL: Panel of LLM evaluators (arXiv:2404.18796).
  • Automated side-by-side comparison of model outputs is one of the most prevalent evaluation practices (arXiv:2402.10524) — AutoArena makes this process easier than ever to get up and running.
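
To make the ranking mechanics concrete, here is a minimal sketch of a standard Elo update applied to one judged head-to-head comparison. The K-factor (32) and starting rating (1500) are illustrative assumptions; AutoArena's actual rating computation may differ.

# Standard Elo update from a single head-to-head judgment.
# K-factor and initial ratings are assumptions for illustration.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b) after one comparison."""
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta  # Elo is zero-sum

# One judged matchup: model A beats model B, both starting at 1500
print(elo_update(1500.0, 1500.0, a_won=True))  # -> (1516.0, 1484.0)

Running many such updates over all judged pairs converges toward a stable ranking, which is why a large number of cheap pairwise judgments can produce a trustworthy leaderboard.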

🔥 Getting Started

Install from PyPI:

pip install autoarena

Run as a module and visit localhost:8899 in your browser:

python -m autoarena

With the application running, getting started is simple:

  1. Create a project via the UI.
  2. Add responses from a model by selecting a CSV file with prompt and response columns.
  3. Configure an automated judge via the UI. Note that most judges require credentials, e.g. X_API_KEY, to be set in the environment where you're running AutoArena (see the example below).
  4. Add responses from a second model. This kicks off an automated judging task: the judges you configured in the previous step decide which of the two models provided the better response to each prompt.

That's it! After these steps you're fully set up for automated evaluation on AutoArena.
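
For example, to make an OpenAI-based judge usable, export your API key before launching the application. This is a minimal sketch assuming the judge reads the conventional OPENAI_API_KEY variable; check the judge configuration UI for the exact variable name it expects:

export OPENAI_API_KEY="sk-..."
python -m autoarena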

📄 Formatting Your Data

AutoArena requires two pieces of information to test a model: the input prompt and corresponding model response.

  • prompt: the input to your model. When you upload responses, they are matched against responses from any other models run on the same prompts and evaluated using the automated judges you have configured.
  • response: the output from your model. Judges decide which of two models produced the better response to the same prompt.
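
For example, a minimal CSV file with the required prompt and response columns might look like the following (the rows are illustrative):

prompt,response
"What is the capital of France?","The capital of France is Paris."
"Name one planet in our solar system.","Mars is a planet in our solar system."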

📂 Data Storage

Data is stored in ./data/<project>.sqlite files in the directory where you invoked AutoArena. See data/README.md for more details on data storage in AutoArena.
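
Since each project is a standalone SQLite file, you can inspect it with any SQLite client. For example, using the sqlite3 command-line shell (the project name my-project is hypothetical):

sqlite3 data/my-project.sqlite ".tables"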

🦾 Development

AutoArena uses uv to manage dependencies. To set up this repository for development, run:

uv venv && source .venv/bin/activate
uv pip install --all-extras -r pyproject.toml
uv tool run pre-commit install
uv run python3 -m autoarena serve --dev

To run AutoArena for development, you will need to run both the backend and frontend services:

  • Backend: uv run python3 -m autoarena serve --dev (the --dev/-d flag enables automatic service reloading when source files change)
  • Frontend: see ui/README.md

To build a release tarball in the ./dist directory:

./scripts/build.sh
