The official Python SDK for Eval Protocol (EP). EP is an open protocol that standardizes how developers author evals for large language model (LLM) applications.

Project description

Eval Protocol (EP)

The open-source toolkit for building your internal model leaderboard.

When you have multiple AI models to choose from—different versions, providers, or configurations—how do you know which one is best for your use case?

🚀 Features

  • Custom Evaluations: Write evaluations tailored to your specific business needs
  • Auto-Evaluation: Stack-rank models using LLM-as-judge evaluators that work out of the box on nothing more than model traces (a hand-rolled sketch follows this list)
  • RL Environments via MCP: Build reinforcement learning environments with the Model Context Protocol (MCP) to simulate user interactions and advanced evaluation scenarios
  • Consistent Testing: Test across various models and configurations with a unified framework
  • Resilient Runtime: Automatic retries for unstable LLM APIs and concurrent execution for long-running evaluations
  • Rich Visualizations: Built-in pivot tables and visualizations for result analysis
  • Data-Driven Decisions: Make informed model deployment decisions based on comprehensive evaluation results
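
For a sense of what "LLMs as judges" looks like in code, here is a minimal hand-rolled judge built on the EvaluationRow and EvaluateResult types from the Quick Examples below. The judge model, prompt, and 0-10 rubric are illustrative assumptions, not EP's built-in evaluators:

from openai import OpenAI

from eval_protocol.models import EvaluateResult, EvaluationRow

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_helpfulness(row: EvaluationRow) -> EvaluationRow:
    """Ask a judge model to rate the last assistant message from 0 to 10."""
    answer = str(row.messages[-1].content or "")
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative judge model, not an EP default
        messages=[
            {"role": "system", "content": "Rate the answer's helpfulness from 0 to 10. Reply with a single integer."},
            {"role": "user", "content": answer},
        ],
    )
    try:
        raw = int((verdict.choices[0].message.content or "").strip())
    except ValueError:
        raw = 0
    raw = max(0, min(10, raw))  # clamp malformed judge output into range
    row.evaluation_result = EvaluateResult(score=raw / 10.0, reason=f"judge score {raw}/10")
    return row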

Quick Examples

Basic Model Comparison

Compare models on a simple formatting task:

from eval_protocol.models import EvaluateResult, EvaluationRow, Message
from eval_protocol.pytest import default_single_turn_rollout_processor, evaluation_test

@evaluation_test(
    input_messages=[
        [
            Message(role="system", content="Use bold text to highlight important information."),
            Message(role="user", content="Explain why evaluations matter for AI agents. Make it dramatic!"),
        ],
    ],
    completion_params=[
        {"model": "fireworks/accounts/fireworks/models/llama-v3p1-8b-instruct"},
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"}
    ],
    rollout_processor=default_single_turn_rollout_processor,
    mode="pointwise",
)
def test_bold_format(row: EvaluationRow) -> EvaluationRow:
    """Check if the model's response contains bold text."""
    assistant_response = row.messages[-1].content

    if assistant_response is None:
        row.evaluation_result = EvaluateResult(score=0.0, reason="No response")
        return row

    has_bold = "**" in str(assistant_response)
    score = 1.0 if has_bold else 0.0
    reason = "Contains bold text" if has_bold else "No bold text found"

    row.evaluation_result = EvaluateResult(score=score, reason=reason)
    return row
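
Because @evaluation_test comes from eval_protocol.pytest, the file above is an ordinary pytest module; running the comparison is a plain pytest invocation (the filename is whatever you saved the example as):

pytest test_bold_format.py -v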

Using Datasets

Evaluate models on existing datasets:

from eval_protocol.models import EvaluationRow
from eval_protocol.pytest import evaluation_test
from eval_protocol.adapters.huggingface import create_gsm8k_adapter

@evaluation_test(
    input_dataset=["development/gsm8k_sample.jsonl"],  # Local JSONL file
    dataset_adapter=create_gsm8k_adapter(),  # Adapter to convert data
    completion_params=[
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"}
    ],
    mode="pointwise"
)
def test_math_reasoning(row: EvaluationRow) -> EvaluationRow:
    # Your evaluation logic here
    return row
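
One way to fill in that body, assuming the GSM8K adapter exposes the reference answer as row.ground_truth (an assumption worth checking against the adapter's output schema), is to compare the final number in the model's reply against it:

import re

from eval_protocol.models import EvaluateResult, EvaluationRow

def score_gsm8k(row: EvaluationRow) -> EvaluationRow:
    """Score a GSM8K reply by its last number; GSM8K answers end with the result."""
    reply = str(row.messages[-1].content or "").replace(",", "")
    numbers = re.findall(r"-?\d+(?:\.\d+)?", reply)
    predicted = numbers[-1] if numbers else None
    correct = predicted is not None and predicted == str(row.ground_truth)
    row.evaluation_result = EvaluateResult(
        score=1.0 if correct else 0.0,
        reason=f"predicted {predicted!r}, expected {row.ground_truth!r}",
    )
    return row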

Installation

This library requires Python >= 3.10.

Basic Installation

Install with pip:

pip install eval-protocol

Recommended Installation with uv

For better dependency management and faster installs, we recommend using uv:

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install eval-protocol
uv add eval-protocol

Optional Dependencies

Install with additional features:

# For Langfuse integration
pip install 'eval-protocol[langfuse]'

# For HuggingFace datasets
pip install 'eval-protocol[huggingface]'

# For all adapters
pip install 'eval-protocol[adapters]'

# For development
pip install 'eval-protocol[dev]'

License

MIT

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

eval_protocol-0.2.26.tar.gz (1.9 MB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

eval_protocol-0.2.26-py3-none-any.whl (1.9 MB)

File details

Details for the file eval_protocol-0.2.26.tar.gz.

File metadata

  • Download URL: eval_protocol-0.2.26.tar.gz
  • Upload date:
  • Size: 1.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for eval_protocol-0.2.26.tar.gz
  • SHA256: 2f0c355c75e3e3f93d1e36d5945a151c0523adbc78981bbb9fde0c38b7a137fe
  • MD5: 70f79836f91b2002b3772bc5bbef0871
  • BLAKE2b-256: aa860c8c093075f10467977aa6b33b85016d588574c9e6f23cb20a82724fda28

See pip's documentation for more details on using hashes.
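
These digests can also be enforced at install time with pip's standard hash-checking mode (a generic pip feature, not specific to this package), using the sdist and wheel SHA256 values from this page. Note that in this mode every transitive dependency must be pinned with hashes too, which a tool like pip-compile --generate-hashes automates:

# requirements.txt
eval-protocol==0.2.26 \
    --hash=sha256:2f0c355c75e3e3f93d1e36d5945a151c0523adbc78981bbb9fde0c38b7a137fe \
    --hash=sha256:5a266ea2f31561f1790e93622f913b3e796536fdc5b671f0b63e721b11465191

pip install --require-hashes -r requirements.txt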

Provenance

The following attestation bundles were made for eval_protocol-0.2.26.tar.gz:

Publisher: release.yml on eval-protocol/python-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file eval_protocol-0.2.26-py3-none-any.whl.

File metadata

  • Download URL: eval_protocol-0.2.26-py3-none-any.whl
  • Upload date:
  • Size: 1.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for eval_protocol-0.2.26-py3-none-any.whl
  • SHA256: 5a266ea2f31561f1790e93622f913b3e796536fdc5b671f0b63e721b11465191
  • MD5: 8933a03c083cfcaa2a44b96375ed3847
  • BLAKE2b-256: b558b477251625b022c7c9bc8038e7ece000ce597bef7affb8f015504f4d8c1d

See pip's documentation for more details on using hashes.

Provenance

The following attestation bundles were made for eval_protocol-0.2.26-py3-none-any.whl:

Publisher: release.yml on eval-protocol/python-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
