LLM Evaluations

arize-phoenix-evals

Phoenix Evals provides lightweight, composable building blocks for writing and running evaluations on LLM applications, including tools to measure relevance, detect toxicity and hallucinations, and much more.

Features

  • Works with your preferred model SDKs via adapters (OpenAI, LiteLLM, LangChain)
  • Powerful input mapping and binding for working with complex data structures
  • Several pre-built metrics for common evaluation tasks like hallucination detection
  • Evaluators are natively instrumented via OpenTelemetry tracing for observability and dataset curation
  • Blazing fast performance - achieve up to 20x speedup with built-in concurrency and batching
  • Tons of convenience features to improve the developer experience!

Installation

Install Phoenix Evals (2.0 or later) using pip, together with the SDK for your model provider:

pip install 'arize-phoenix-evals>=2.0.0' openai
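
The openai package backs the examples below; to use another provider, install that provider's SDK alongside the evals package instead (the package names shown are the providers' standard PyPI distributions):

pip install 'arize-phoenix-evals>=2.0.0' anthropic   # Anthropic
pip install 'arize-phoenix-evals>=2.0.0' litellm     # LiteLLM, 100+ providers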

Quick Start

from phoenix.evals import create_classifier
from phoenix.evals.llm import LLM

# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")

# Create an evaluator
evaluator = create_classifier(
    name="helpfulness",
    prompt_template="Rate the response to the user query as helpful or not:\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# Simple evaluation
scores = evaluator.evaluate({"input": "How do I reset?", "output": "Go to settings > reset."})
scores[0].pretty_print()

# With input mapping for nested data
scores = evaluator.evaluate(
    {"data": {"query": "How do I reset?", "response": "Go to settings > reset."}},
    input_mapping={"input": "data.query", "output": "data.response"}
)
scores[0].pretty_print()
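
Each evaluate call returns a list of Score objects. Besides pretty_print(), the fields shown in the Score repr later on this page (name, score, label, explanation) can be read directly; a minimal sketch, assuming they are plain attributes:

# Read individual Score fields (attribute names taken from the
# Score(...) repr shown in the Pre-Built Evaluators section)
score = scores[0]
print(score.name, score.label, score.score)
print(score.explanation)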

Pre-Built Evaluators

The phoenix.evals.metrics module provides ready-to-use evaluators for common tasks:

  • Faithfulness (FaithfulnessEvaluator): detects hallucinations by checking whether the output is grounded in the context
  • Conciseness (ConcisenessEvaluator): evaluates whether the response is appropriately concise
  • Correctness (CorrectnessEvaluator): checks whether the output is factually correct
  • Document Relevance (DocumentRelevanceEvaluator): measures how relevant a retrieved document is to a query
  • Refusal (RefusalEvaluator): detects whether the model refused to answer
  • Tool Invocation (ToolInvocationEvaluator): checks whether the correct tool was called with the right arguments
  • Tool Selection (ToolSelectionEvaluator): evaluates whether the right tool was selected for the task
  • Tool Response Handling (ToolResponseHandlingEvaluator): evaluates how well the model uses a tool's response
  • Exact Match (exact_match): checks for exact string equality between output and expected
  • Regex Match (MatchesRegex): checks whether the output matches a regular expression
  • Precision/Recall (PrecisionRecallFScore): computes precision, recall, and F-score for classification tasks

from phoenix.evals.llm import LLM
from phoenix.evals.metrics import FaithfulnessEvaluator, exact_match, MatchesRegex

llm = LLM(provider="openai", model="gpt-4o")

# LLM-powered faithfulness evaluator
faithfulness = FaithfulnessEvaluator(llm=llm)
scores = faithfulness.evaluate({
    "input": "What is the capital of France?",
    "context": "Paris is the capital of France.",
    "output": "The capital of France is Berlin.",
})
scores[0].pretty_print()
# Score(name='faithfulness', score=0.0, label='unfaithful', explanation='...')

# Code-based exact match
match_result = exact_match({"output": "Paris", "expected": "Paris"})

# Regex match
regex_result = MatchesRegex(pattern=r"^\d{4}-\d{2}-\d{2}$").evaluate({
    "output": "2024-03-15"
})
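
The other LLM-backed evaluators in the list above follow the same construction pattern; a minimal sketch with ConcisenessEvaluator, where the constructor and required input fields are assumed by analogy with FaithfulnessEvaluator:

from phoenix.evals.metrics import ConcisenessEvaluator

# Assumed to take the same llm= argument as FaithfulnessEvaluator above
conciseness = ConcisenessEvaluator(llm=llm)
scores = conciseness.evaluate({
    "input": "What is the capital of France?",
    "output": "The capital of France is Paris.",
})
scores[0].pretty_print()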

LLM Providers

The LLM class supports multiple AI providers:

from phoenix.evals.llm import LLM

# OpenAI
llm = LLM(provider="openai", model="gpt-4o")

# Anthropic
llm = LLM(provider="anthropic", model="claude-3-5-sonnet-20241022")

# Google Gemini
llm = LLM(provider="google", model="gemini-1.5-pro")

# LiteLLM (unified interface for 100+ providers)
llm = LLM(provider="litellm", model="gpt-4o")
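
The LLM class delegates authentication to the underlying provider SDK, which typically reads its API key from the environment; a minimal sketch (the variable names are the providers' standard ones, and exactly how the key is picked up depends on the SDK):

import os

# Standard provider environment variables; the underlying SDKs
# (openai, anthropic, ...) read these when the client is constructed.
os.environ["OPENAI_API_KEY"] = "sk-..."      # placeholder key
# os.environ["ANTHROPIC_API_KEY"] = "..."    # placeholder key

from phoenix.evals.llm import LLM
llm = LLM(provider="openai", model="gpt-4o")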

Evaluating Dataframes

import pandas as pd
from phoenix.evals import create_classifier, evaluate_dataframe, async_evaluate_dataframe
from phoenix.evals.llm import LLM

# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")

# Create multiple evaluators
relevance_evaluator = create_classifier(
    name="relevance",
    prompt_template="Is the response relevant to the query?\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"relevant": 1.0, "irrelevant": 0.0},
)

helpfulness_evaluator = create_classifier(
    name="helpfulness",
    prompt_template="Is the response helpful?\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# Prepare your dataframe
df = pd.DataFrame([
    {"input": "How do I reset my password?", "output": "Go to settings > account > reset password."},
    {"input": "What's the weather like?", "output": "I can help you with password resets."},
])

# Synchronous evaluation
results_df = evaluate_dataframe(
    dataframe=df,
    evaluators=[relevance_evaluator, helpfulness_evaluator],
)
print(results_df.head())

# Async evaluation (up to 20x faster with large dataframes)
import asyncio
results_df = asyncio.run(async_evaluate_dataframe(
    dataframe=df,
    evaluators=[relevance_evaluator, helpfulness_evaluator],
))
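
In practice the rows usually come from logged traffic rather than inline literals; a minimal sketch loading a JSONL export into the expected columns (the file name is hypothetical):

# Hypothetical export with one {"input": ..., "output": ...} object per line
df = pd.read_json("chat_logs.jsonl", lines=True)

results_df = evaluate_dataframe(
    dataframe=df[["input", "output"]],
    evaluators=[relevance_evaluator, helpfulness_evaluator],
)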


Project details



This version

3.1.0

Download files

Download the file for your platform.

Source Distribution

arize_phoenix_evals-3.1.0.tar.gz (94.6 kB)

Built Distribution

arize_phoenix_evals-3.1.0-py3-none-any.whl (121.4 kB)

File details

Details for the file arize_phoenix_evals-3.1.0.tar.gz.

File metadata

  • Download URL: arize_phoenix_evals-3.1.0.tar.gz
  • Upload date:
  • Size: 94.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for arize_phoenix_evals-3.1.0.tar.gz
  • SHA256: 7af3679a605ee5a111e771c6af6ab2bff22df86c45d1ffe4798d1c651e6812e7
  • MD5: 69947f371be88c371a4c2be6e7e2fef4
  • BLAKE2b-256: d98df35d4f43e8ac5df628950fcf828671beb4d7f670eaf27ee180071b6daba7
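
To verify a downloaded archive against the SHA256 digest above, a minimal sketch using Python's hashlib (run from the directory containing the download):

import hashlib

expected = "7af3679a605ee5a111e771c6af6ab2bff22df86c45d1ffe4798d1c651e6812e7"
with open("arize_phoenix_evals-3.1.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print(actual == expected)  # True if the download matches the published digest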


Provenance

The following attestation bundles were made for arize_phoenix_evals-3.1.0.tar.gz:

Publisher: publish.yaml on Arize-ai/phoenix


File details

Details for the file arize_phoenix_evals-3.1.0-py3-none-any.whl.

File hashes

Hashes for arize_phoenix_evals-3.1.0-py3-none-any.whl
  • SHA256: e9f1aeadc48786a59acdf4da0b965004a4fdffc81a1337eb2a2ce8d0412e49df
  • MD5: 8ee2ad1181b3559960441889c7179aed
  • BLAKE2b-256: f41cae08ebc6679d026e5d58f3ef1b661a8fc9ee63a0cc238fdeaaed4223a18d


Provenance

The following attestation bundles were made for arize_phoenix_evals-3.1.0-py3-none-any.whl:

Publisher: publish.yaml on Arize-ai/phoenix

