LLM Observability
The Phoenix Client provides an interface for interacting with the Phoenix platform via its REST API, enabling you to manage datasets, run experiments, analyze traces, and collect feedback programmatically.
Features
- REST API Interface - Interact with Phoenix's OpenAPI REST interface
- Prompts - Create, version, and invoke prompt templates
- Datasets - Create and append to datasets from DataFrames, CSV files, or dictionaries
- Experiments - Run evaluations and track experiment results
- Spans - Query and analyze traces with powerful filtering
- Annotations - Add human feedback and automated evaluations
- Evaluation Helpers - Extract span data in formats optimized for RAG evaluation workflows
Installation
Install the Phoenix Client using pip:
pip install arize-phoenix-client
Getting Started
Environment Variables
Configure the Phoenix Client using environment variables for seamless use across different environments:
# For local Phoenix server (default)
export PHOENIX_BASE_URL="http://localhost:6006"
# Cloud Instance
export PHOENIX_API_KEY="your-api-key"
export PHOENIX_BASE_URL="https://app.phoenix.arize.com/s/your-space"
# For custom Phoenix instances with API key authentication
export PHOENIX_BASE_URL="https://your-phoenix-instance.com"
export PHOENIX_API_KEY="your-api-key"
# Customize headers
export PHOENIX_CLIENT_HEADERS="Authorization=Bearer your-api-key,custom-header=value"
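The `PHOENIX_CLIENT_HEADERS` value is a comma-separated list of `key=value` pairs. As an illustrative sketch (not part of the client itself), such a string maps to a headers dictionary like this:

```python
# Illustrative parser for the "key=value,key=value" header string format
# used by PHOENIX_CLIENT_HEADERS; splits each pair on its first "=".
def parse_headers(value: str) -> dict[str, str]:
    headers = {}
    for pair in value.split(","):
        key, _, val = pair.partition("=")
        headers[key.strip()] = val.strip()
    return headers

print(parse_headers("Authorization=Bearer your-api-key,custom-header=value"))
# {'Authorization': 'Bearer your-api-key', 'custom-header': 'value'}
```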
Client Initialization
The client automatically reads environment variables, or you can override them:
from phoenix.client import Client, AsyncClient
# Automatic configuration from environment variables
client = Client()
client = Client(base_url="http://localhost:6006") # Local Phoenix server
# Cloud instance with API key
client = Client(
    base_url="https://app.phoenix.arize.com/s/your-space",
    api_key="your-api-key"
)
# Custom authentication headers
client = Client(
    base_url="https://your-phoenix-instance.com",
    headers={"Authorization": "Bearer your-api-key"}
)
# Asynchronous client (same configuration options)
async_client = AsyncClient()
async_client = AsyncClient(base_url="http://localhost:6006")
async_client = AsyncClient(
    base_url="https://app.phoenix.arize.com/s/your-space",
    api_key="your-api-key"
)
Resources
The Phoenix Client organizes functionality into resources that correspond to key Phoenix platform features. Each resource provides specialized methods for managing different types of data:
Prompts
Manage prompt templates and versions:
from phoenix.client import Client
from phoenix.client.types import PromptVersion
client = Client()
content = """
You're an expert educator in {{ topic }}. Summarize the following article
in a few concise bullet points that are easy for beginners to understand.
{{ article }}
"""
prompt = client.prompts.create(
    name="article-bullet-summarizer",
    version=PromptVersion(
        messages=[{"role": "user", "content": content}],
        model_name="gpt-4o-mini",
    ),
    prompt_description="Summarize an article in a few bullet points"
)
# Retrieve and use prompts
prompt = client.prompts.get(prompt_identifier="article-bullet-summarizer")
# Format the prompt with variables
prompt_vars = {
    "topic": "Sports",
    "article": "Moises Henriques, the Australian all-rounder, has signed to play for Surrey in this summer's NatWest T20 Blast. He will join after the IPL and is expected to strengthen the squad throughout the campaign."
}
formatted_prompt = prompt.format(variables=prompt_vars)
# Make a request with your Prompt using OpenAI
from openai import OpenAI
oai_client = OpenAI()
resp = oai_client.chat.completions.create(**formatted_prompt)
print(resp.choices[0].message.content)
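The templates above use Mustache-style `{{ variable }}` placeholders. A minimal, client-independent sketch of that substitution (the regex and fallback behavior here are illustrative assumptions, not the client's exact implementation):

```python
import re

# Minimal sketch of Mustache-style "{{ variable }}" substitution, similar in
# spirit to prompt.format(variables=...); unknown names are left untouched.
def render_template(template: str, variables: dict[str, str]) -> str:
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render_template(
    "Summarize {{ article }} for {{ topic }} fans.",
    {"article": "the match report", "topic": "Sports"},
))
# Summarize the match report for Sports fans.
```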
Datasets
Manage datasets and examples for experiments and evaluation:
from phoenix.client import Client
import pandas as pd
client = Client()
# List all available datasets
datasets = client.datasets.list()
for dataset in datasets:
    print(f"Dataset: {dataset['name']} ({dataset['example_count']} examples)")
# Get a specific dataset with all examples
dataset = client.datasets.get_dataset(dataset="qa-evaluation")
print(f"Dataset {dataset.name} has {len(dataset)} examples")
# Convert dataset to pandas DataFrame for analysis
df = dataset.to_dataframe()
print(df.columns) # Index(['input', 'output', 'metadata'], dtype='object')
# Create a new dataset from dictionaries
dataset = client.datasets.create_dataset(
    name="customer-support-qa",
    dataset_description="Q&A dataset for customer support evaluation",
    inputs=[
        {"question": "How do I reset my password?"},
        {"question": "What's your return policy?"},
        {"question": "How do I track my order?"}
    ],
    outputs=[
        {"answer": "You can reset your password by clicking the 'Forgot Password' link on the login page."},
        {"answer": "We offer 30-day returns for unused items in original packaging."},
        {"answer": "You can track your order using the tracking number sent to your email."}
    ],
    metadata=[
        {"category": "account", "difficulty": "easy"},
        {"category": "policy", "difficulty": "medium"},
        {"category": "orders", "difficulty": "easy"}
    ]
)
# Create dataset from pandas DataFrame
df = pd.DataFrame({
    "prompt": ["Hello", "Hi there", "Good morning"],
    "response": ["Hi! How can I help?", "Hello! What can I do for you?", "Good morning! How may I assist?"],
    "sentiment": ["neutral", "positive", "positive"],
    "length": [20, 25, 30]
})
dataset = client.datasets.create_dataset(
    name="greeting-responses",
    dataframe=df,
    input_keys=["prompt"],  # Columns to use as input
    output_keys=["response"],  # Columns to use as expected output
    metadata_keys=["sentiment", "length"]  # Additional metadata columns
)
Traces
Retrieve traces for a project with optional filtering and sorting:
from phoenix.client import Client
client = Client()
# Get the latest 100 traces
traces = client.traces.get_traces(project_identifier="my-llm-app")
for trace in traces:
    print(f"Trace {trace.trace_id}: {trace.status} ({trace.latency_ms}ms)")
# Filter by time range
from datetime import datetime, timedelta
traces = client.traces.get_traces(
    project_identifier="my-llm-app",
    start_time=datetime.now() - timedelta(hours=24),
    end_time=datetime.now(),
    sort="latency_ms",
    order="desc",
    limit=50,
)
# Include full span details
traces = client.traces.get_traces(
    project_identifier="my-llm-app",
    include_spans=True,  # caution: can increase response size significantly
    limit=10,
)
# Filter by session
traces = client.traces.get_traces(
    project_identifier="my-llm-app",
    session_id="my-session-id",
)
Async usage:
from phoenix.client import AsyncClient
async_client = AsyncClient()
traces = await async_client.traces.get_traces(
    project_identifier="my-llm-app",
    limit=50,
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| `project_identifier` | `str` | — | Project name or ID (required) |
| `start_time` | `datetime \| None` | `None` | Inclusive lower bound on trace start time |
| `end_time` | `datetime \| None` | `None` | Exclusive upper bound on trace start time |
| `sort` | `"start_time" \| "latency_ms" \| None` | `None` | Sort field (server defaults to `"start_time"`) |
| `order` | `"asc" \| "desc" \| None` | `None` | Sort direction (server defaults to `"desc"`) |
| `include_spans` | `bool` | `False` | Include full span details for each trace |
| `session_id` | `str \| Sequence[str] \| None` | `None` | Filter by session ID(s) or GlobalID(s) |
| `limit` | `int` | `100` | Maximum number of traces to return |
| `timeout` | `int \| None` | `60` | Request timeout in seconds |
Note: Requires Phoenix server >= 13.15.0.
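Once traces are retrieved, summarizing their latencies is plain Python. A small illustrative helper (the `latency_ms` field name follows the example above; the helper itself is not part of the client):

```python
from statistics import mean

# Summarize latency across trace records; field names follow the
# get_traces example above (illustrative helper, not part of the client).
def latency_summary(traces):
    values = [t["latency_ms"] for t in traces]
    return {"count": len(values), "mean_ms": mean(values), "max_ms": max(values)}

sample = [
    {"trace_id": "t1", "latency_ms": 120},
    {"trace_id": "t2", "latency_ms": 340},
    {"trace_id": "t3", "latency_ms": 95},
]
print(latency_summary(sample))
```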
Spans
Query spans and their annotations from your projects for custom evaluation and annotation workflows:
from phoenix.client import Client
from datetime import datetime, timedelta
client = Client()
# Get spans as pandas DataFrame for analysis
spans_df = client.spans.get_spans_dataframe(
    project_identifier="my-llm-app",
    limit=1000,
    root_spans_only=True,  # Only get top-level spans
    start_time=datetime.now() - timedelta(hours=24)
)
# Get span annotations as DataFrame
annotations_df = client.spans.get_span_annotations_dataframe(
    spans_dataframe=spans_df,  # Use spans from previous query
    project_identifier="my-llm-app",
    include_annotation_names=["relevance", "accuracy"],  # Only specific annotations
    exclude_annotation_names=["note"]  # Exclude UI notes
)
Annotations
Add annotations to spans for evaluation, user feedback, and custom annotation workflows:
from phoenix.client import Client
client = Client()
# Add a single annotation with human feedback
client.spans.add_span_annotation(
    span_id="span-123",
    annotation_name="helpfulness",
    annotator_kind="HUMAN",
    label="helpful",
    score=0.9,
    explanation="Response directly answered the user's question"
)
# Bulk annotation logging for multiple spans
annotations = [
    {
        "name": "sentiment",
        "span_id": "span-123",
        "annotator_kind": "LLM",
        "result": {"label": "positive", "score": 0.8}
    },
    {
        "name": "accuracy",
        "span_id": "span-456",
        "annotator_kind": "HUMAN",
        "result": {"label": "accurate", "score": 0.95}
    },
]
client.spans.log_span_annotations(span_annotations=annotations)
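In practice the bulk payload is usually generated from evaluation results rather than written by hand. A sketch that builds the same annotation shape from a results mapping (the `eval_results` data is invented for illustration):

```python
# Build the bulk annotation payload from a mapping of span IDs to
# (label, score) eval results; shape matches log_span_annotations above.
eval_results = {
    "span-123": ("positive", 0.8),
    "span-456": ("negative", 0.3),
}

annotations = [
    {
        "name": "sentiment",
        "span_id": span_id,
        "annotator_kind": "LLM",
        "result": {"label": label, "score": score},
    }
    for span_id, (label, score) in eval_results.items()
]
print(len(annotations))  # 2
```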
Sessions
Retrieve and annotate conversation sessions:
from phoenix.client import Client
client = Client()
# List sessions for a project
sessions = client.sessions.list(project_name="my-llm-app")
for session in sessions:
    print(f"Session: {session['session_id']}")
# Get conversation turns for a session
turns = client.sessions.get_session_turns(session_id="my-session-id")
for turn in turns:
    print(f"Input: {turn.get('input', {}).get('value')}")
    print(f"Output: {turn.get('output', {}).get('value')}")
# Add a session-level annotation
client.sessions.add_session_annotation(
    session_id="my-session-id",
    annotation_name="user-satisfaction",
    label="satisfied",
    score=0.9,
    annotator_kind="HUMAN",
)
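Session turns flatten naturally into a readable transcript, which is handy before sending a conversation to an LLM judge. A sketch, assuming the turn shape (`{"input": {"value": ...}, "output": {"value": ...}}`) shown above:

```python
# Flatten session turns into a transcript string; the turn shape follows
# the get_session_turns example above (illustrative helper).
def to_transcript(turns) -> str:
    lines = []
    for turn in turns:
        lines.append(f"User: {turn.get('input', {}).get('value', '')}")
        lines.append(f"Assistant: {turn.get('output', {}).get('value', '')}")
    return "\n".join(lines)

sample = [{"input": {"value": "Hi"}, "output": {"value": "Hello!"}}]
print(to_transcript(sample))
```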
Experiments
Run tasks across datasets and evaluate their outputs:
from phoenix.client import Client
client = Client()
# Get an existing dataset to run the experiment on
dataset = client.datasets.get_dataset(dataset="my-dataset")
# Define a task function
def my_task(example):
    # Your LLM call or business logic here
    return f"Result for: {example['input']['question']}"
# Run an experiment
experiment = client.experiments.run_experiment(
    dataset=dataset,
    task=my_task,
    experiment_name="my-experiment",
)
# Retrieve an existing experiment
ran_experiment = client.experiments.get_experiment(experiment_id="my-experiment-id")
for run in ran_experiment["task_runs"]:
    print(f"Output: {run['output']}, Error: {run['error']}")
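Experiment outputs are typically scored by evaluator functions; whether and how `run_experiment` accepts evaluators is an assumption here, so treat this as a standalone sketch of the scoring logic only:

```python
# A simple exact-match evaluator sketch: returns 1.0 when the task output
# matches the expected answer after trimming whitespace, else 0.0.
def exact_match(output, expected) -> float:
    return 1.0 if str(output).strip() == str(expected).strip() else 0.0

print(exact_match("42", " 42 "))  # 1.0
print(exact_match("42", "41"))    # 0.0
```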
Projects
Manage Phoenix projects that organize your AI application data:
from phoenix.client import Client
client = Client()
# List all projects
projects = client.projects.list()
for project in projects:
    print(f"Project: {project['name']} (ID: {project['id']})")
# Create a new project
new_project = client.projects.create(
    name="Customer Support Bot",
    description="Traces and evaluations for our customer support chatbot"
)
print(f"Created project with ID: {new_project['id']}")
Documentation
- Full Documentation - Complete API reference and guides
- Phoenix Docs - Main Phoenix documentation
- GitHub Repository - Source code and examples
Community
Join our community to connect with thousands of AI builders:
- 🌍 Join our Slack community.
- 💡 Ask questions and provide feedback in the #phoenix-support channel.
- 🌟 Leave a star on our GitHub.
- 🐞 Report bugs with GitHub Issues.
- 𝕏 Follow us on 𝕏.
- 🗺️ Check out our roadmap to see where we're heading next.