The official Python SDK for Eval Protocol (EP). EP is an open protocol that standardizes how developers author evals for large language model (LLM) applications.

Project description

Eval Protocol (EP)

Eval Protocol (EP) is the open-source standard and toolkit for practicing Eval-Driven Development.

Building with AI is different. Traditional software is deterministic, but AI systems are probabilistic. How do you ship new features without causing silent regressions? How do you prove a new prompt is actually better?

The answer is a new engineering discipline: Eval-Driven Development (EDD). It adapts the rigor of Test-Driven Development for the uncertain world of AI. With EDD, you define your AI's desired behavior as a suite of executable tests, creating a safety net that allows you to innovate with confidence.

EP provides a consistent way to write evals, store traces, and analyze results.

UI
Log Viewer: Monitor your evaluation rollouts in real time.

Quick Example

Here's a simple test function that checks if a model's response contains bold text formatting:

from eval_protocol.models import EvaluateResult, EvaluationRow, Message
from eval_protocol.pytest import SingleTurnRolloutProcessor, evaluation_test

@evaluation_test(
    input_messages=[
        [
            Message(role="system", content="You are a helpful assistant. Use bold text to highlight important information."),
            Message(role="user", content="Explain why **evaluations** matter for building AI agents. Make it dramatic!"),
        ],
    ],
    completion_params=[{"model": "accounts/fireworks/models/llama-v3p1-8b-instruct"}],
    rollout_processor=SingleTurnRolloutProcessor(),
    mode="pointwise",
)
def test_bold_format(row: EvaluationRow) -> EvaluationRow:
    """
    Simple evaluation that checks if the model's response contains bold text.
    """

    assistant_response = row.messages[-1].content

    # Check if response contains **bold** text
    has_bold = "**" in assistant_response

    if has_bold:
        result = EvaluateResult(score=1.0, reason="✅ Response contains bold text")
    else:
        result = EvaluateResult(score=0.0, reason="❌ No bold text found")

    row.evaluation_result = result
    return row
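Because the scoring step above is ordinary Python, it is easy to exercise in isolation before wiring it into an evaluation. A minimal sketch (the helper name check_bold is illustrative, not part of the SDK):

```python
def check_bold(text: str) -> tuple[float, str]:
    """Score 1.0 if the text contains **bold** markers, else 0.0."""
    if "**" in text:
        return 1.0, "Response contains bold text"
    return 0.0, "No bold text found"

print(check_bold("This is **important**."))  # → (1.0, 'Response contains bold text')
print(check_bold("plain prose"))             # → (0.0, 'No bold text found')
```

Keeping the check as a small pure function means the same logic can back both a quick unit test and the EvaluateResult assigned inside the decorated test.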

Documentation

See our documentation for more details.

Installation

This library requires Python >= 3.10.

Install with pip:

pip install eval-protocol

License

MIT

Project details



Download files

Download the file for your platform.

Source Distribution

eval_protocol-0.2.20.tar.gz (1.8 MB)

Built Distribution

eval_protocol-0.2.20-py3-none-any.whl (1.8 MB)

File details

Details for the file eval_protocol-0.2.20.tar.gz.

File metadata

  • Download URL: eval_protocol-0.2.20.tar.gz
  • Upload date:
  • Size: 1.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for eval_protocol-0.2.20.tar.gz:

  • SHA256: da6494cf5f01178bac4be2ab7ee33042c0c094bec1a35946929445805104b2f5
  • MD5: 5075fa42583bf1d85286e9fddb10d7c1
  • BLAKE2b-256: 464eaab94af0402ef8de640e99f8ed144da370a5ef064f70512d5458c02f3045

Provenance

The following attestation bundles were made for eval_protocol-0.2.20.tar.gz:

Publisher: release.yml on eval-protocol/python-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file eval_protocol-0.2.20-py3-none-any.whl.

File metadata

  • Download URL: eval_protocol-0.2.20-py3-none-any.whl
  • Upload date:
  • Size: 1.8 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for eval_protocol-0.2.20-py3-none-any.whl:

  • SHA256: e3039f4373eb62c5a9e0ba35ed6430ef697b1a3e0dbab05e5cd2119eec9b9814
  • MD5: 8c8e6934b5338f2de27b70d8b34f0802
  • BLAKE2b-256: a8aca4859831cd45216e63f08c2099b30213d8371dae76b78cf313b166a56dfe

Provenance

The following attestation bundles were made for eval_protocol-0.2.20-py3-none-any.whl:

Publisher: release.yml on eval-protocol/python-sdk

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
