
The official Python SDK for Eval Protocol (EP). EP is an open protocol that standardizes how developers author evals for large language model (LLM) applications.


Eval Protocol (EP)


The open-source toolkit for building your internal model leaderboard.

When you have multiple AI models to choose from—different versions, providers, or configurations—how do you know which one is best for your use case?

🚀 Features

  • Custom Evaluations: Write evaluations tailored to your specific business needs
  • Auto-Evaluation: Stack-rank models from nothing more than model traces, using out-of-the-box LLM-as-judge evaluators
  • RL Environments via MCP: Build reinforcement learning environments with the Model Context Protocol (MCP) to simulate user interactions and advanced evaluation scenarios
  • Consistent Testing: Test across various models and configurations with a unified framework
  • Resilient Runtime: Automatic retries for unstable LLM APIs and concurrent execution for long-running evaluations
  • Rich Visualizations: Built-in pivot tables and visualizations for result analysis
  • Data-Driven Decisions: Make informed model deployment decisions based on comprehensive evaluation results

Quick Examples

Basic Model Comparison

Compare models on a simple formatting task:

from eval_protocol.models import EvaluateResult, EvaluationRow, Message
from eval_protocol.pytest import default_single_turn_rollout_processor, evaluation_test

@evaluation_test(
    input_messages=[
        [
            Message(role="system", content="Use bold text to highlight important information."),
            Message(role="user", content="Explain why evaluations matter for AI agents. Make it dramatic!"),
        ],
    ],
    completion_params=[
        {"model": "fireworks/accounts/fireworks/models/llama-v3p1-8b-instruct"},
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"}
    ],
    rollout_processor=default_single_turn_rollout_processor,
    mode="pointwise",
)
def test_bold_format(row: EvaluationRow) -> EvaluationRow:
    """Check if the model's response contains bold text."""
    # After the rollout, the last message on the row is the assistant's reply
    assistant_response = row.messages[-1].content

    if assistant_response is None:
        row.evaluation_result = EvaluateResult(score=0.0, reason="No response")
        return row

    has_bold = "**" in str(assistant_response)
    score = 1.0 if has_bold else 0.0
    reason = "Contains bold text" if has_bold else "No bold text found"

    row.evaluation_result = EvaluateResult(score=score, reason=reason)
    return row
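
Because @evaluation_test builds on pytest, each entry in completion_params is evaluated against the input messages, and in pointwise mode your function scores one EvaluationRow at a time. Assuming the example above is saved as test_bold_format.py (a file name chosen here for illustration), it runs like any other test:

pytest test_bold_format.py -v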

Using Datasets

Evaluate models on existing datasets:

from eval_protocol.models import EvaluationRow
from eval_protocol.pytest import evaluation_test
from eval_protocol.adapters.huggingface import create_gsm8k_adapter

@evaluation_test(
    input_dataset=["development/gsm8k_sample.jsonl"],  # Local JSONL file
    dataset_adapter=create_gsm8k_adapter(),  # Adapter to convert data
    completion_params=[
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"}
    ],
    mode="pointwise"
)
def test_math_reasoning(row: EvaluationRow) -> EvaluationRow:
    # Your evaluation logic here
    return row
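
If you want a concrete starting point for the evaluation logic, a minimal sketch is to compare the last number in the model's reply against the reference answer. The field holding the reference (ground_truth below) is an assumption about what the GSM8K adapter populates; check the adapter in your version of eval-protocol.

import re

from eval_protocol.models import EvaluateResult, EvaluationRow

def score_final_number(row: EvaluationRow) -> EvaluationRow:
    """Sketch: mark the row correct if the last number in the reply matches the reference."""
    response = str(row.messages[-1].content or "")
    reference = str(getattr(row, "ground_truth", "") or "")  # assumed field name

    predicted = re.findall(r"-?\d+(?:\.\d+)?", response)
    expected = re.findall(r"-?\d+(?:\.\d+)?", reference)
    correct = bool(predicted) and bool(expected) and predicted[-1] == expected[-1]

    row.evaluation_result = EvaluateResult(
        score=1.0 if correct else 0.0,
        reason="Final number matches reference" if correct else "Final number does not match reference",
    )
    return row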


Installation

This library requires Python >= 3.10.

Basic Installation

Install with pip:

pip install eval-protocol

Recommended Installation with uv

For better dependency management and faster installs, we recommend using uv:

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install eval-protocol
uv add eval-protocol

Optional Dependencies

Install with additional features:

# For Langfuse integration
pip install 'eval-protocol[langfuse]'

# For HuggingFace datasets
pip install 'eval-protocol[huggingface]'

# For all adapters
pip install 'eval-protocol[adapters]'

# For development
pip install 'eval-protocol[dev]'
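
To sanity-check the installation, you can print the installed version using only the Python standard library:

# Confirm which version of eval-protocol is installed
from importlib.metadata import version
print(version("eval-protocol"))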

License

MIT


Download files

Source distribution: eval_protocol-0.2.25.post1.tar.gz (1.9 MB)

  • SHA256: 83cf3aea7ad1b9cfea81357c79e0c2a0cbd6877cb7cb5b6d951752939b89134f
  • MD5: d15574b936921de111a54725b0c7dd1c
  • BLAKE2b-256: d7831181d25e0cac5be9f34d14a05f98ca5272d219af0a1d2d658325174f78c4
  • Uploaded via Trusted Publishing (twine/6.1.0, CPython/3.13.7); attestation bundles published by release.yml on eval-protocol/python-sdk

Built distribution: eval_protocol-0.2.25.post1-py3-none-any.whl (1.9 MB)

  • SHA256: e4b37c5c074fe6a98aa939ac4e8114fa43d555917697c9b91f3b8cb8437e50e2
  • MD5: b9bc9d6deffb8c37c570c2535bde949e
  • BLAKE2b-256: 251e590f266d974f298f91d5f90232324abee8a741009681efca7b8faf433702
  • Attestation bundles published by release.yml on eval-protocol/python-sdk
