
The official Python SDK for Eval Protocol (EP). EP is an open protocol that standardizes how developers author evals for large language model (LLM) applications.

Project description

Eval Protocol (EP)


EP is an open specification, Python SDK, pytest wrapper, and suite of tools that provides a standardized way to write evaluations for large language model (LLM) applications. Start with simple single-turn evals for model selection and prompt engineering, then scale up to complex multi-turn reinforcement learning (RL) for agents using Model Context Protocol (MCP). EP ensures consistent patterns for writing evals, storing traces, and saving results—enabling you to build sophisticated agent evaluations that work across real-world scenarios, from markdown generation tasks to customer service agents with tool calling capabilities.

UI
Log Viewer: Monitor your evaluation rollouts in real time.

Quick Example

Here's a simple test function that checks if a model's response contains bold text formatting:

from eval_protocol.models import EvaluateResult, EvaluationRow, Message
from eval_protocol.pytest import default_single_turn_rollout_processor, evaluation_test

@evaluation_test(
    input_messages=[
        [
            Message(role="system", content="You are a helpful assistant. Use bold text to highlight important information."),
            Message(role="user", content="Explain why **evaluations** matter for building AI agents. Make it dramatic!"),
        ],
    ],
    model=["accounts/fireworks/models/llama-v3p1-8b-instruct"],
    rollout_processor=default_single_turn_rollout_processor,
    mode="pointwise",
)
def test_bold_format(row: EvaluationRow) -> EvaluationRow:
    """
    Simple evaluation that checks if the model's response contains bold text.
    """

    # The assistant's reply is the last message in the row; guard against a
    # missing content field
    assistant_response = row.messages[-1].content or ""

    # Check if the response contains **bold** text
    has_bold = "**" in assistant_response

    if has_bold:
        result = EvaluateResult(score=1.0, reason="✅ Response contains bold text")
    else:
        result = EvaluateResult(score=0.0, reason="❌ No bold text found")

    row.evaluation_result = result
    return row
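The scoring logic above is plain Python, so it can be sanity-checked without a model call or the SDK installed. A minimal sketch, where `Msg` and `Row` are simplified stand-ins for the SDK's `Message` and `EvaluationRow` types, not the real classes:

```python
from dataclasses import dataclass

@dataclass
class Msg:
    """Stand-in for eval_protocol.models.Message (hypothetical, simplified)."""
    role: str
    content: str

@dataclass
class Row:
    """Stand-in for eval_protocol.models.EvaluationRow (hypothetical, simplified)."""
    messages: list
    evaluation_result: object = None

def score_bold(row: Row) -> float:
    """Return 1.0 if the last message contains **bold** markers, else 0.0."""
    response = row.messages[-1].content or ""
    return 1.0 if "**" in response else 0.0

print(score_bold(Row(messages=[Msg("assistant", "Evals are **critical**.")])))  # 1.0
print(score_bold(Row(messages=[Msg("assistant", "Plain text only.")])))         # 0.0
```

Keeping the check a pure function of the row makes it easy to unit-test the rubric separately from the rollout machinery.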

Documentation

See our documentation for more details.

Installation

This library requires Python >= 3.10.

Install with pip:

pip install eval-protocol

License

MIT

Project details


Release history

This version

0.2.4

Download files

Download the file for your platform.

Source Distribution

eval_protocol-0.2.4.tar.gz (600.4 kB, Source)

Built Distribution


eval_protocol-0.2.4-py3-none-any.whl (534.4 kB, Python 3)

File details

Details for the file eval_protocol-0.2.4.tar.gz.

File metadata

  • Download URL: eval_protocol-0.2.4.tar.gz
  • Upload date:
  • Size: 600.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for eval_protocol-0.2.4.tar.gz
  • SHA256: 5fee92f66e6bfcf45efa5f438f0713d9f8cac7bdd129e92eefee91ee3ae0bfe3
  • MD5: 1555904100151be33a8f6185505b58ab
  • BLAKE2b-256: 47ccbced71492eac57315ffa3507299d61bd19a905aea323dd9a6dbf8ee53543


Provenance

The following attestation bundles were made for eval_protocol-0.2.4.tar.gz:

Publisher: release.yml on eval-protocol/python-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file eval_protocol-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: eval_protocol-0.2.4-py3-none-any.whl
  • Upload date:
  • Size: 534.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for eval_protocol-0.2.4-py3-none-any.whl
  • SHA256: 40d27cad5b430544e1643a3567f37efe9e18d4a61158a5b66429fda7db569ec2
  • MD5: d4933b04d65f08d4ce02ac71a270a02a
  • BLAKE2b-256: 57fe7dcd8464b77038be6924fe0ceb0492d8b91cc015d8ca12c992df3045a9ac


Provenance

The following attestation bundles were made for eval_protocol-0.2.4-py3-none-any.whl:

Publisher: release.yml on eval-protocol/python-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
