LLM Evaluations
arize-phoenix-evals
Phoenix provides tooling to evaluate LLM applications, including tools to determine the relevance or irrelevance of documents retrieved by a retrieval-augmented generation (RAG) application, whether or not a response is toxic, and much more.
Phoenix's approach to LLM evals is notable for the following reasons:
- Includes pre-tested templates and convenience functions for a set of common Eval “tasks” (a short sketch follows this list)
- Data science rigor applied to the testing of model and template combinations
- Designed to run as fast as possible on batches of data
- Includes benchmark datasets and tests for each eval function
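As a quick illustration of the template-and-rails pattern these bullets refer to, here is a minimal sketch. The template and rails names come from the usage example further down; the tiny DataFrame and its values are purely illustrative, and an OpenAI key is assumed to be set as in that example.

```python
import os

import pandas as pd

from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    OpenAIModel,
    llm_classify,
)

os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

# Each built-in template ships with a rails map that constrains the LLM's output
# to a fixed set of labels (for RAG relevance: "relevant" / "irrelevant").
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())

# A toy batch: the RAG relevance template reads the "input" and "reference" columns.
df = pd.DataFrame(
    {
        "input": ["When was the Eiffel Tower completed?"],
        "reference": ["The Eiffel Tower was completed in 1889."],
    }
)

model = OpenAIModel(model="gpt-4", temperature=0.0)

# llm_classify runs the template over every row of the batch and returns
# one predicted label per row (see the full usage example below).
eval_df = llm_classify(df, model, RAG_RELEVANCY_PROMPT_TEMPLATE, rails)
```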
Installation
Install the arize-phoenix-evals sub-package via pip:
pip install arize-phoenix-evals
Note that you will also need to install the SDK for the LLM vendor you would like to use with LLM Evals. For example, to use OpenAI's GPT-4, you will need to install the OpenAI Python SDK:
pip install 'openai>=1.0.0'
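Once the vendor SDK is installed, a quick smoke test can confirm that the SDK and API key are wired up before running a full eval. This is only an illustrative sketch; it assumes the model wrapper can be invoked directly on a prompt string (the prompt text here is arbitrary):

```python
import os

from phoenix.evals import OpenAIModel

# The evals library picks up the key from the environment (see the usage example below).
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

model = OpenAIModel(model="gpt-4", temperature=0.0)

# Assumed smoke test: send a trivial prompt and print the completion.
print(model("Reply with the single word OK."))
```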
Usage
Here is an example of running the RAG relevance eval on a dataset of Wikipedia questions and answers:
```python
import os

from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix, ConfusionMatrixDisplay

os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

# Download the benchmark golden dataset
df = download_benchmark_dataset(
    task="binary-relevance-classification", dataset_name="wiki_qa-train"
)

# Sample and rename the columns to match the template
df = df.sample(100)
df = df.rename(
    columns={
        "query_text": "input",
        "document_text": "reference",
    },
)

model = OpenAIModel(
    model="gpt-4",
    temperature=0.0,
)

rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())
df[["eval_relevance"]] = llm_classify(df, model, RAG_RELEVANCY_PROMPT_TEMPLATE, rails)

# The golden dataset's True/False labels map to "relevant"/"irrelevant",
# the same format the template outputs, so scikit-learn can compare them directly.
y_true = df["relevant"].map({True: "relevant", False: "irrelevant"})
y_pred = df["eval_relevance"]

# Compute per-class precision, recall, F1 score, and support
precision, recall, f1, support = precision_recall_fscore_support(y_true, y_pred)
```
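The example imports confusion_matrix and ConfusionMatrixDisplay without using them; as an optional follow-up (not part of the original example), they can be used to visualize where the eval's labels disagree with the golden labels:

```python
import matplotlib.pyplot as plt

# Rows are golden labels, columns are the eval's predicted labels.
labels = ["relevant", "irrelevant"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels).plot()
plt.show()
```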
To learn more about LLM Evals, see the LLM Evals documentation.
Hashes for arize_phoenix_evals-0.3.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | f9d6b4979fc80a709b208db727a9beb17b2da1281221f1abc1685fddac8d64a4
MD5 | a42297d03f694879ad69c79fbe718ccd
BLAKE2b-256 | a73cbfd24bee0a49ff80b0ed69b34514b176ad7e4daaa351f3cc49c68d2fc5df
Hashes for arize_phoenix_evals-0.3.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 2f87c0cf4941d384dfb2044da74fb967a6c08af0b7e0f64ffeef3eb344d7d9eb
MD5 | 80db62a9acdf1d1b8cbc39713b6bf209
BLAKE2b-256 | c06baf5a0b2f1159a5a4e8030631b56856ec79c5dd6361e57fb0752c20617c89