
LLM Evaluations


arize-phoenix-evals

Phoenix provides tooling to evaluate LLM applications, including tools to determine the relevance or irrelevance of documents retrieved by a retrieval-augmented generation (RAG) application, whether or not a response is toxic, and much more.

Phoenix's approach to LLM evals is notable for the following reasons:

  • Includes pre-tested templates and convenience functions for a set of common Eval “tasks”
  • Data science rigor applied to the testing of model and template combinations
  • Designed to run as fast as possible on batches of data
  • Includes benchmark datasets and tests for each eval function

Installation

Install the arize-phoenix-evals sub-package via pip:

pip install arize-phoenix-evals

Note that you will also have to install the SDK for the LLM vendor you would like to use with LLM Evals. For example, to use OpenAI's GPT-4, you will need to install the OpenAI Python SDK:

pip install 'openai>=1.0.0'

Usage

Here is an example of running the RAG relevance eval on a dataset of Wikipedia questions and answers:

import os
from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix, ConfusionMatrixDisplay

os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

# Download the benchmark golden dataset
df = download_benchmark_dataset(
    task="binary-relevance-classification", dataset_name="wiki_qa-train"
)
# Sample 100 rows and rename the columns to match the template
df = df.sample(100)
df = df.rename(
    columns={
        "query_text": "input",
        "document_text": "reference",
    },
)
model = OpenAIModel(
    model="gpt-4",
    temperature=0.0,
)


# The rails are the allowed output labels for the eval
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())
df[["eval_relevance"]] = llm_classify(df, model, RAG_RELEVANCY_PROMPT_TEMPLATE, rails)

# The golden dataset's True/False labels map to "relevant"/"irrelevant",
# so they can be compared directly to the template's output format
y_true = df["relevant"].map({True: "relevant", False: "irrelevant"})
y_pred = df["eval_relevance"]

# Compute per-class precision, recall, F1 score, and support
precision, recall, f1, support = precision_recall_fscore_support(y_true, y_pred)
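
The confusion_matrix and ConfusionMatrixDisplay imports above can also be used to see where the eval's output disagrees with the golden labels. A minimal sketch, assuming a matplotlib backend is available for plotting:

# Visualize eval output vs. golden labels as a confusion matrix
cm = confusion_matrix(y_true, y_pred, labels=rails)
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=rails).plot()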

To learn more about LLM Evals, see the LLM Evals documentation.
