Prompt flow evals
Prompt flow evaluators
Introduction
Evaluators are custom or prebuilt promptflow flows that are designed to measure the quality of the outputs from language models.
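A custom, code-based evaluator can be as simple as a Python callable that accepts the fields of a data row as keyword arguments and returns a dict of metric values. Below is a minimal sketch; the `WordCountEvaluator` class and its `word_count` metric are hypothetical illustrations, not prebuilt evaluators shipped with the package:

```python
# Minimal sketch of a custom code-based evaluator: any callable that
# accepts a row's fields as keyword arguments and returns a dict of
# metric values. WordCountEvaluator is a hypothetical example, not a
# prebuilt evaluator from promptflow-evals.
class WordCountEvaluator:
    def __call__(self, *, answer: str, **kwargs):
        # Count whitespace-separated words in the answer.
        return {"word_count": len(answer.split())}


word_count_eval = WordCountEvaluator()
print(word_count_eval(answer="The Alpine Explorer Tent is the most waterproof."))
# {'word_count': 8}
```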
Usage
Users can create evaluator runs on the local machine as shown in the example below:
```python
import os
from pprint import pprint

from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluate import evaluate
from promptflow.evals.evaluators import RelevanceEvaluator
from promptflow.evals.evaluators.content_safety import ViolenceEvaluator


def answer_length(answer, **kwargs):
    return {"value": len(answer)}


if __name__ == "__main__":
    # Built-in evaluators
    # Initialize Azure OpenAI Connection
    model_config = AzureOpenAIModelConfiguration(
        azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
        api_key=os.environ.get("AZURE_OPENAI_KEY"),
        azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
    )

    # Initializing Relevance Evaluator
    relevance_eval = RelevanceEvaluator(model_config)

    # Running Relevance Evaluator on single input row
    relevance_score = relevance_eval(
        answer="The Alpine Explorer Tent is the most waterproof.",
        context="From our product list,"
        " the alpine explorer tent is the most waterproof."
        " The Adventure Dining Table has higher weight.",
        question="Which tent is the most waterproof?",
    )

    pprint(relevance_score)
    # {'gpt_relevance': 5.0}

    # Content Safety Evaluator

    # Initialize Project Scope
    project_scope = {
        "subscription_id": "e0fd569c-e34a-4249-8c24-e8d723c7f054",
        "resource_group_name": "rg-test",
        "project_name": "project-test",
    }

    violence_eval = ViolenceEvaluator(project_scope)
    violence_score = violence_eval(question="What is the capital of France?", answer="Paris.")
    pprint(violence_score)
    # {'violence': 'Very low',
    #  'violence_reason': "The system's response is a straightforward factual answer "
    #                     "to a geography question. There is no violent content or "
    #                     "language present.",
    #  'violence_score': 0}

    # Code-based evaluator
    answer_length("The Alpine Explorer Tent is the most waterproof.")
    # {'value': 48}

    # Using multiple evaluators together using `Evaluate` API
    result = evaluate(
        data="evaluate_test_data.jsonl",
        evaluators={
            "answer_length": answer_length,
            "violence": violence_eval,
        },
    )

    pprint(result)
```
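The `evaluate` call above reads its rows from a JSONL file in which each line is a JSON object whose keys match the inputs the chosen evaluators expect (here `question` and `answer`). The rows below are hypothetical stand-ins for `evaluate_test_data.jsonl`, shown only to illustrate the expected shape:

```python
import json

# Write a hypothetical evaluate_test_data.jsonl: one JSON object per line,
# with keys matching the evaluators' inputs.
rows = [
    {"question": "Which tent is the most waterproof?",
     "answer": "The Alpine Explorer Tent is the most waterproof."},
    {"question": "What is the capital of France?",
     "answer": "Paris."},
]

with open("evaluate_test_data.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```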
Download files
Download the file for your platform.
Source Distributions
No source distribution files are available for this release.
Built Distribution
File details
Details for the file promptflow_evals-0.3.2-py3-none-any.whl.
File metadata
- Download URL: promptflow_evals-0.3.2-py3-none-any.whl
- Upload date:
- Size: 113.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4a07f85db9b3564b654e5c380360c699fbc470acd2e15046c1b2f78df1730cb6
MD5 | 6c6ee3d26b4ef0458d4429326013b970
BLAKE2b-256 | bf52635b858c199b7be1ba7649b342168a0440be7d7e9496403ebf150c1fe11d
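A downloaded wheel can be checked against the published digests by recomputing the hash locally; a small sketch, assuming the wheel sits in the current directory:

```python
import hashlib

# Recompute the SHA256 digest of the downloaded wheel and compare it
# to the value published in the table above.
expected = "4a07f85db9b3564b654e5c380360c699fbc470acd2e15046c1b2f78df1730cb6"

with open("promptflow_evals-0.3.2-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "MISMATCH")
```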