LLM Evaluations
Project description
arize-phoenix-evals
Phoenix Evals provides lightweight, composable building blocks for writing and running evaluations on LLM applications, including tools for relevance scoring, toxicity detection, hallucination detection, and much more.
Features
- Works with your preferred model SDKs via adapters (OpenAI, LiteLLM, LangChain)
- Powerful input mapping and binding for working with complex data structures
- Several pre-built metrics for common evaluation tasks like hallucination detection
- Evaluators are natively instrumented via OpenTelemetry tracing for observability and dataset curation
- Blazing fast performance - achieve up to 20x speedup with built-in concurrency and batching
- Tons of convenience features to improve the developer experience!
Installation
Install Phoenix Evals 2.0 using pip:
```bash
pip install 'arize-phoenix-evals>=2.0.0' openai
```
Quick Start
```python
from phoenix.evals import create_classifier
from phoenix.evals.llm import LLM

# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")

# Create an evaluator
evaluator = create_classifier(
    name="helpfulness",
    prompt_template="Rate the response to the user query as helpful or not:\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# Simple evaluation
scores = evaluator.evaluate({"input": "How do I reset?", "output": "Go to settings > reset."})
scores[0].pretty_print()

# With input mapping for nested data
scores = evaluator.evaluate(
    {"data": {"query": "How do I reset?", "response": "Go to settings > reset."}},
    input_mapping={"input": "data.query", "output": "data.response"},
)
scores[0].pretty_print()
```
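Each `evaluate` call returns a list of `Score` objects. Here is a minimal sketch of reading the fields directly instead of pretty-printing; the attribute names (`name`, `score`, `label`, `explanation`) are inferred from the `Score` repr shown under Pre-Built Evaluators below, not from documented API:

```python
# Read Score fields directly. The attribute names are inferred from the
# Score repr shown later on this page (an assumption, not documented API).
score = scores[0]
print(f"{score.name}: {score.score} ({score.label})")
print(score.explanation)
```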
Pre-Built Evaluators
The `phoenix.evals.metrics` module provides ready-to-use evaluators for common tasks:

| Evaluator | Class | Description |
|---|---|---|
| Faithfulness | `FaithfulnessEvaluator` | Detects hallucinations by checking whether the output is grounded in the context |
| Conciseness | `ConcisenessEvaluator` | Evaluates whether the response is appropriately concise |
| Correctness | `CorrectnessEvaluator` | Checks whether the output is factually correct |
| Document Relevance | `DocumentRelevanceEvaluator` | Measures how relevant a retrieved document is to a query |
| Refusal | `RefusalEvaluator` | Detects whether the model refused to answer |
| Tool Invocation | `ToolInvocationEvaluator` | Checks whether the correct tool was called with the right arguments |
| Tool Selection | `ToolSelectionEvaluator` | Evaluates whether the right tool was selected for the task |
| Tool Response Handling | `ToolResponseHandlingEvaluator` | Evaluates how well the model uses a tool's response |
| Exact Match | `exact_match` | Checks for exact string equality between output and expected |
| Regex Match | `MatchesRegex` | Checks whether the output matches a regular expression |
| Precision/Recall | `PrecisionRecallFScore` | Computes precision, recall, and F-score for classification tasks |
```python
from phoenix.evals.llm import LLM
from phoenix.evals.metrics import FaithfulnessEvaluator, exact_match, MatchesRegex

llm = LLM(provider="openai", model="gpt-4o")

# LLM-powered faithfulness evaluator
faithfulness = FaithfulnessEvaluator(llm=llm)
scores = faithfulness.evaluate({
    "input": "What is the capital of France?",
    "context": "Paris is the capital of France.",
    "output": "The capital of France is Berlin.",
})
scores[0].pretty_print()
# Score(name='faithfulness', score=0.0, label='unfaithful', explanation='...')

# Code-based exact match
match_result = exact_match({"output": "Paris", "expected": "Paris"})

# Regex match
regex_result = MatchesRegex(pattern=r"^\d{4}-\d{2}-\d{2}$").evaluate({
    "output": "2024-03-15"
})
```
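The table above also lists `PrecisionRecallFScore` for classification-style comparisons. Below is a hypothetical sketch only, assuming it follows the same evaluate-on-a-dict pattern as the other metrics; the constructor argument and field names here are assumptions, so check the API reference for the real signature:

```python
from phoenix.evals.metrics import PrecisionRecallFScore

# Hypothetical sketch: the argument and field names below are assumptions,
# not the documented API. The metric compares predicted labels against
# expected labels and reports precision, recall, and F-score.
prf = PrecisionRecallFScore(positive_label="relevant")
scores = prf.evaluate({
    "output": ["relevant", "irrelevant", "relevant"],
    "expected": ["relevant", "relevant", "irrelevant"],
})
```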
LLM Providers
The `LLM` class supports multiple AI providers:

```python
from phoenix.evals.llm import LLM

# OpenAI
llm = LLM(provider="openai", model="gpt-4o")

# Anthropic
llm = LLM(provider="anthropic", model="claude-3-5-sonnet-20241022")

# Google Gemini
llm = LLM(provider="google", model="gemini-1.5-pro")

# LiteLLM (unified interface for 100+ providers)
llm = LLM(provider="litellm", model="gpt-4o")
```
Evaluating Dataframes
```python
import asyncio

import pandas as pd

from phoenix.evals import create_classifier, evaluate_dataframe, async_evaluate_dataframe
from phoenix.evals.llm import LLM

# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")

# Create multiple evaluators
relevance_evaluator = create_classifier(
    name="relevance",
    prompt_template="Is the response relevant to the query?\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"relevant": 1.0, "irrelevant": 0.0},
)

helpfulness_evaluator = create_classifier(
    name="helpfulness",
    prompt_template="Is the response helpful?\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# Prepare your dataframe
df = pd.DataFrame([
    {"input": "How do I reset my password?", "output": "Go to settings > account > reset password."},
    {"input": "What's the weather like?", "output": "I can help you with password resets."},
])

# Synchronous evaluation
results_df = evaluate_dataframe(
    dataframe=df,
    evaluators=[relevance_evaluator, helpfulness_evaluator],
)
print(results_df.head())

# Async evaluation (up to 20x faster with large dataframes)
results_df = asyncio.run(async_evaluate_dataframe(
    dataframe=df,
    evaluators=[relevance_evaluator, helpfulness_evaluator],
))
```
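Since `results_df` supports `head()` above, it behaves like an ordinary pandas DataFrame, so standard pandas tooling applies. For example, persisting results for later review (plain pandas, nothing Phoenix-specific):

```python
# Save the evaluation results for later inspection (standard pandas).
results_df.to_csv("eval_results.csv", index=False)
```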
Documentation
- Full Documentation - Complete API reference and guides
- Phoenix Docs - Detailed use-cases and examples
- OpenInference - Auto-instrumentation libraries for frameworks
Community
Join our community to connect with thousands of AI builders:
- 🌍 Join our Slack community.
- 📚 Read the Phoenix documentation.
- 💡 Ask questions and provide feedback in the #phoenix-support channel.
- 🌟 Leave a star on our GitHub.
- 🐞 Report bugs with GitHub Issues.
- 𝕏 Follow us on 𝕏.
- 🗺️ Check out our roadmap to see where we're heading next.
Download files
Download the file for your platform:
- Source Distribution: arize_phoenix_evals-2.13.0.tar.gz
- Built Distribution: arize_phoenix_evals-2.13.0-py3-none-any.whl
File details
Details for the file arize_phoenix_evals-2.13.0.tar.gz.
File metadata
- Download URL: arize_phoenix_evals-2.13.0.tar.gz
- Size: 125.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b0a87ab66aabd6e7bb0982f52476cd4487996ab9e17524b205bdb4e3881d3b16` |
| MD5 | `5e5dda74ca3a47d9d567de4a0fd606ca` |
| BLAKE2b-256 | `0d1bad1a3906ab333ce5bf373639e51069ec3b47be7ad0f60528d522eabff7b7` |
Provenance
The following attestation bundles were made for arize_phoenix_evals-2.13.0.tar.gz:
Publisher: publish.yaml on Arize-ai/phoenix
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: arize_phoenix_evals-2.13.0.tar.gz
- Subject digest: b0a87ab66aabd6e7bb0982f52476cd4487996ab9e17524b205bdb4e3881d3b16
- Sigstore transparency entry: 1204151631
- Permalink: Arize-ai/phoenix@0c1b9ad3a16cd09e474ffa2e9859d6f3e60bb1dd
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Arize-ai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@0c1b9ad3a16cd09e474ffa2e9859d6f3e60bb1dd
- Trigger Event: workflow_dispatch
File details
Details for the file arize_phoenix_evals-2.13.0-py3-none-any.whl.
File metadata
- Download URL: arize_phoenix_evals-2.13.0-py3-none-any.whl
- Size: 183.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `db2aa81edc1d0226c3517ea55ed7fcb700c25865feb68e4c5536a9d259f5fbb2` |
| MD5 | `bb2896a75de76a4c0b2b8eaa0e1605dc` |
| BLAKE2b-256 | `cd5f439696a5ee5b4ba40ddb77808b3eb919694631008114487ec696c9808e64` |
Provenance
The following attestation bundles were made for arize_phoenix_evals-2.13.0-py3-none-any.whl:
Publisher: publish.yaml on Arize-ai/phoenix
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: arize_phoenix_evals-2.13.0-py3-none-any.whl
- Subject digest: db2aa81edc1d0226c3517ea55ed7fcb700c25865feb68e4c5536a9d259f5fbb2
- Sigstore transparency entry: 1204151650
- Permalink: Arize-ai/phoenix@0c1b9ad3a16cd09e474ffa2e9859d6f3e60bb1dd
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Arize-ai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@0c1b9ad3a16cd09e474ffa2e9859d6f3e60bb1dd
- Trigger Event: workflow_dispatch