
LLM Evaluations

Project description

arize-phoenix-evals


Phoenix Evals provides lightweight, composable building blocks for writing and running evaluations on LLM applications, including tools for relevance, toxicity, and hallucination detection, and much more.

Features

  • Works with your preferred model SDKs via adapters (OpenAI, LiteLLM, LangChain); see the sketch after this list
  • Powerful input mapping and binding for working with complex data structures
  • Several pre-built metrics for common evaluation tasks like hallucination detection
  • Evaluators are natively instrumented via OpenTelemetry tracing for observability and dataset curation
  • Blazing-fast performance: built-in concurrency and batching deliver up to a 20x speedup
  • Tons of convenience features to improve the developer experience!
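
Because adapters share a common constructor, the same evaluator can be driven through a different model SDK by swapping the provider at construction time. A minimal sketch, assuming the LiteLLM adapter accepts the same provider/model arguments as the OpenAI adapter in the Quick Start (the model name here is illustrative):

from phoenix.evals.llm import LLM

# Route requests through LiteLLM instead of the OpenAI SDK.
# The provider string follows the adapter list above; the model name is illustrative.
llm = LLM(provider="litellm", model="gpt-4o-mini")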

Installation

Install Phoenix Evals 2.0 (along with the OpenAI SDK used in the examples below) using pip:

pip install 'arize-phoenix-evals>=2.0.0' openai
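
The examples below use the OpenAI SDK, which reads your API key from the standard OPENAI_API_KEY environment variable:

export OPENAI_API_KEY="sk-..."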

Quick Start

from phoenix.evals import create_classifier
from phoenix.evals.llm import LLM

# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")

# Create an evaluator
evaluator = create_classifier(
    name="helpfulness",
    prompt_template="Rate the response to the user query as helpful or not:\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# Simple evaluation
scores = evaluator.evaluate({"input": "How do I reset?", "output": "Go to settings > reset."})
scores[0].pretty_print()

# With input mapping for nested data
scores = evaluator.evaluate(
    {"data": {"query": "How do I reset?", "response": "Go to settings > reset."}},
    input_mapping={"input": "data.query", "output": "data.response"}
)
scores[0].pretty_print()
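
Note that the input_mapping in the second call uses dotted paths to pull values out of the nested payload. Each evaluate call returns a list of Score objects, so beyond pretty_print you can read the result fields directly. A minimal sketch, assuming the 2.x Score fields (name, score, label, explanation):

# Inspect the structured result instead of pretty-printing it.
# Field names here assume the 2.x Score API.
score = scores[0]
print(score.name, score.score, score.label)  # e.g. helpfulness 1.0 helpful
print(score.explanation)                     # the judge model's rationale, if provided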

Evaluating Dataframes

import pandas as pd
from phoenix.evals import create_classifier, evaluate_dataframe
from phoenix.evals.llm import LLM

# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")

# Create multiple evaluators
relevance_evaluator = create_classifier(
    name="relevance",
    prompt_template="Is the response relevant to the query?\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"relevant": 1.0, "irrelevant": 0.0},
)

helpfulness_evaluator = create_classifier(
    name="helpfulness",
    prompt_template="Is the response helpful?\n\nQuery: {input}\nResponse: {output}",
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# Prepare your dataframe
df = pd.DataFrame([
    {"input": "How do I reset my password?", "output": "Go to settings > account > reset password."},
    {"input": "What's the weather like?", "output": "I can help you with password resets."},
])

# Evaluate the dataframe
results_df = evaluate_dataframe(
    dataframe=df,
    evaluators=[relevance_evaluator, helpfulness_evaluator],
)

print(results_df.head())
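
Assuming evaluate_dataframe returns a copy of the input dataframe with the evaluation results added as new columns, the output can be persisted like any pandas dataframe. A minimal follow-up sketch:

# Save the augmented dataframe for later review (standard pandas).
results_df.to_csv("eval_results.csv", index=False)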

Documentation

Community

Join our community to connect with thousands of AI builders.



