
The official Python SDK for Eval Protocol (EP). EP is an open protocol that standardizes how developers author evals for large language model (LLM) applications.

Project description

Eval Protocol (EP)

PyPI - Version

The open-source toolkit for building your internal model leaderboard.

When you have multiple AI models to choose from—different versions, providers, or configurations—how do you know which one is best for your use case?

🚀 Features

  • Custom Evaluations: Write evaluations tailored to your specific business needs
  • Auto-Evaluation: Stack-rank models from model traces alone, using out-of-the-box LLM-as-judge evaluators
  • RL Environments via MCP: Build reinforcement learning environments using the Model Context Protocol (MCP) to simulate user interactions and advanced evaluation scenarios
  • Consistent Testing: Test across various models and configurations with a unified framework
  • Resilient Runtime: Automatic retries for unstable LLM APIs and concurrent execution for long-running evaluations
  • Rich Visualizations: Built-in pivot tables and visualizations for result analysis
  • Data-Driven Decisions: Make informed model deployment decisions based on comprehensive evaluation results
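The "Resilient Runtime" behavior above (automatic retries for unstable LLM APIs) can be illustrated with a generic retry-with-exponential-backoff sketch. This is plain Python showing the general technique, not Eval Protocol's actual implementation:

```python
import random
import time


def call_with_retries(fn, max_attempts=4, base_delay=0.5):
    """Call fn(), retrying on exception with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of retries: surface the last error
            # Sleep base_delay * 2^(attempt-1), plus up to 100 ms of jitter.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

A runtime like this wraps each model call so that transient API failures (rate limits, timeouts) do not abort a long-running evaluation.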

Quick Examples

Basic Model Comparison

Compare models on a simple formatting task:

from eval_protocol.models import EvaluateResult, EvaluationRow, Message
from eval_protocol.pytest import default_single_turn_rollout_processor, evaluation_test

@evaluation_test(
    input_messages=[
        [
            Message(role="system", content="Use bold text to highlight important information."),
            Message(role="user", content="Explain why evaluations matter for AI agents. Make it dramatic!"),
        ],
    ],
    completion_params=[
        {"model": "fireworks/accounts/fireworks/models/llama-v3p1-8b-instruct"},
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"}
    ],
    rollout_processor=default_single_turn_rollout_processor,
    mode="pointwise",
)
def test_bold_format(row: EvaluationRow) -> EvaluationRow:
    """Check if the model's response contains bold text."""
    assistant_response = row.messages[-1].content

    if assistant_response is None:
        row.evaluation_result = EvaluateResult(score=0.0, reason="No response")
        return row

    has_bold = "**" in str(assistant_response)
    score = 1.0 if has_bold else 0.0
    reason = "Contains bold text" if has_bold else "No bold text found"

    row.evaluation_result = EvaluateResult(score=score, reason=reason)
    return row
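The scoring logic in the test body is an ordinary function of the message content, so it can be factored out and unit-tested on its own. A minimal sketch (the helper name is ours, not part of the SDK):

```python
def score_bold(text):
    """Return (score, reason): 1.0 if the text contains Markdown bold markers."""
    if text is None:
        return 0.0, "No response"
    if "**" in str(text):
        return 1.0, "Contains bold text"
    return 0.0, "No bold text found"
```

Keeping the scorer pure makes it easy to verify against fixed strings before running it inside a full evaluation.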

Using Datasets

Evaluate models on existing datasets:

from eval_protocol.models import EvaluationRow
from eval_protocol.pytest import evaluation_test
from eval_protocol.adapters.huggingface import create_gsm8k_adapter

@evaluation_test(
    input_dataset=["development/gsm8k_sample.jsonl"],  # Local JSONL file
    dataset_adapter=create_gsm8k_adapter(),  # Adapter to convert data
    completion_params=[
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"}
    ],
    mode="pointwise"
)
def test_math_reasoning(row: EvaluationRow) -> EvaluationRow:
    # Your evaluation logic here
    return row
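For GSM8K-style rows, the evaluation logic typically compares the model's final number against the reference answer, which the GSM8K dataset encodes after a `####` marker. A minimal sketch of that comparison in plain Python (the helper names are illustrative, not SDK APIs):

```python
import re


def extract_final_number(text):
    """Return the last number in the text as a float, or None if there is none."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None


def score_gsm8k(response, reference):
    """Score 1.0 if the model's final number matches the '#### answer' reference."""
    gold = extract_final_number(reference.split("####")[-1])
    pred = extract_final_number(response)
    return 1.0 if pred is not None and gold is not None and pred == gold else 0.0
```

In the test body above, a function like this would populate `row.evaluation_result` with an `EvaluateResult`, mirroring the bold-text example.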


Installation

This library requires Python >= 3.10.

Basic Installation

Install with pip:

pip install eval-protocol

Recommended Installation with uv

For better dependency management and faster installs, we recommend using uv:

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install eval-protocol
uv add eval-protocol

Optional Dependencies

Install with additional features:

# For Langfuse integration
pip install 'eval-protocol[langfuse]'

# For HuggingFace datasets
pip install 'eval-protocol[huggingface]'

# For all adapters
pip install 'eval-protocol[adapters]'

# For development
pip install 'eval-protocol[dev]'

License

MIT

Source Distribution

eval_protocol-0.2.25.tar.gz (1.9 MB)

Built Distribution

eval_protocol-0.2.25-py3-none-any.whl (1.9 MB)
