Strands Evals SDK
A comprehensive evaluation framework for AI agents and LLM applications.
Documentation ◆ Samples ◆ Python SDK ◆ TypeScript SDK ◆ Tools ◆ Evaluations
Strands Evals is a framework for evaluating AI agents and LLM applications. From simple output validation to complex multi-agent interaction analysis, trajectory evaluation, and automated experiment generation, it provides comprehensive tools to measure and improve your AI systems.
Feature Overview
- Multiple Evaluation Types: Output evaluation, trajectory analysis, tool usage assessment, and interaction evaluation
- Dynamic Simulators: Multi-turn conversation simulation with realistic user behavior and goal-oriented interactions
- LLM-as-a-Judge: Built-in evaluators using language models for sophisticated assessment with structured scoring
- Trace-based Evaluation: Analyze agent behavior through OpenTelemetry execution traces
- Automated Experiment Generation: Generate comprehensive test suites from context descriptions
- Custom Evaluators: Extensible framework for domain-specific evaluation logic
- Experiment Management: Save, load, and version your evaluation experiments with JSON serialization
- Built-in Scoring Tools: Helper functions for exact, in-order, and any-order trajectory matching
Quick Start
```bash
# Install Strands Evals SDK
pip install strands-agents-evals
```
```python
from strands import Agent
from strands_evals import Case, Experiment
from strands_evals.evaluators import OutputEvaluator

# Create test cases
test_cases = [
    Case[str, str](
        name="knowledge-1",
        input="What is the capital of France?",
        expected_output="The capital of France is Paris.",
        metadata={"category": "knowledge"},
    )
]

# Create evaluators with a custom rubric
evaluators = [
    OutputEvaluator(
        rubric="""
        Evaluate based on:
        1. Accuracy - Is the information correct?
        2. Completeness - Does it fully answer the question?
        3. Clarity - Is it easy to understand?
        Score 1.0 if all criteria are met excellently.
        Score 0.5 if some criteria are partially met.
        Score 0.0 if the response is inadequate.
        """
    )
]

# Create the experiment and run the evaluation
experiment = Experiment[str, str](cases=test_cases, evaluators=evaluators)

def get_response(case: Case) -> str:
    agent = Agent(callback_handler=None)
    return str(agent(case.input))

# Run evaluations
reports = experiment.run_evaluations(get_response)
reports[0].run_display()
```
Installation
Ensure you have Python 3.10+ installed, then:
```bash
# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate

# Install in development mode
pip install -e .

# Install with test dependencies
pip install -e ".[test]"

# Install with both test and dev dependencies
pip install -e ".[test,dev]"
```
Features at a Glance
Output Evaluation with Custom Rubrics
Evaluate agent responses using LLM-as-a-judge with flexible scoring criteria:
```python
from strands_evals.evaluators import OutputEvaluator

evaluator = OutputEvaluator(
    rubric="Score 1.0 for accurate, complete responses. Score 0.5 for partial answers. Score 0.0 for incorrect or unhelpful responses.",
    include_inputs=True,  # Include context in evaluation
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",  # Custom judge model
)
```
Trajectory Evaluation with Built-in Scoring
Analyze agent tool usage and action sequences with helper scoring functions:
```python
from strands import Agent
from strands_evals import Case
from strands_evals.evaluators import TrajectoryEvaluator
from strands_evals.extractors import tools_use_extractor
from strands_tools import calculator

# Evaluator includes built-in scoring tools: exact_match_scorer,
# in_order_match_scorer, any_order_match_scorer
evaluator = TrajectoryEvaluator(
    rubric="Score 1.0 if correct tools used in proper sequence. Use scoring tools to verify trajectory matches."
)

def get_response_with_tools(case: Case) -> dict:
    agent = Agent(tools=[calculator])
    response = agent(case.input)
    # Extract the trajectory efficiently to prevent context overflow
    trajectory = tools_use_extractor.extract_agent_tools_used_from_messages(agent.messages)
    # Update the evaluator with tool descriptions
    evaluator.update_trajectory_description(
        tools_use_extractor.extract_tools_description(agent, is_short=True)
    )
    return {"output": str(response), "trajectory": trajectory}
```
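The three built-in scorers differ only in how strictly the actual tool sequence is compared against the expected one. The plain-Python sketch below illustrates those three matching policies; it is not the library's implementation, just the semantics the scorer names suggest:

```python
def exact_match(actual: list[str], expected: list[str]) -> bool:
    # Every expected tool call, in order, with nothing extra.
    return actual == expected

def in_order_match(actual: list[str], expected: list[str]) -> bool:
    # Expected calls appear as a subsequence of the actual calls;
    # extra calls in between are tolerated.
    it = iter(actual)
    return all(step in it for step in expected)

def any_order_match(actual: list[str], expected: list[str]) -> bool:
    # Every expected call happened at least once; order (and counts) ignored.
    return set(expected) <= set(actual)

print(exact_match(["calculator"], ["calculator"]))               # True
print(in_order_match(["web_search", "calculator"], ["calculator"]))  # True
print(any_order_match(["calculator", "web_search"], ["web_search"]))  # True
```

Exact matching is the right choice when the expected trajectory is a strict recipe; in-order matching tolerates exploratory extra calls; any-order matching only checks coverage.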
Trace-based Helpfulness Evaluation
Evaluate agent helpfulness using OpenTelemetry traces with seven-level scoring:
```python
from strands import Agent
from strands_evals import Case, Experiment
from strands_evals.evaluators import HelpfulnessEvaluator
from strands_evals.telemetry import StrandsEvalsTelemetry
from strands_evals.mappers import StrandsInMemorySessionMapper

# Set up telemetry for trace capture
telemetry = StrandsEvalsTelemetry().setup_in_memory_exporter()

def user_task_function(case: Case) -> dict:
    telemetry.memory_exporter.clear()
    agent = Agent(
        trace_attributes={"session.id": case.session_id},
        callback_handler=None,
    )
    response = agent(case.input)
    # Map spans to a session for evaluation
    spans = telemetry.memory_exporter.get_finished_spans()
    mapper = StrandsInMemorySessionMapper()
    session = mapper.map_to_session(spans, session_id=case.session_id)
    return {"output": str(response), "trajectory": session}

# Seven-level scoring: Not helpful (0.0) to Above and beyond (1.0)
evaluators = [HelpfulnessEvaluator()]
experiment = Experiment[str, str](cases=test_cases, evaluators=evaluators)

# Run evaluations
reports = experiment.run_evaluations(user_task_function)
reports[0].run_display()
```
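A seven-level scale maps naturally onto evenly spaced scores between 0.0 and 1.0. The sketch below illustrates that spacing; only the two endpoint labels ("Not helpful", "Above and beyond") come from the description above, and the intermediate label names are placeholders, not the evaluator's actual vocabulary:

```python
# Hypothetical seven-level scale; intermediate names are illustrative.
LEVELS = [
    "not_helpful",          # 0.0
    "slightly_helpful",
    "somewhat_helpful",
    "moderately_helpful",   # midpoint, 0.5
    "helpful",
    "very_helpful",
    "above_and_beyond",     # 1.0
]

def level_to_score(label: str) -> float:
    # Level i of n levels maps to i / (n - 1), so endpoints hit 0.0 and 1.0.
    return LEVELS.index(label) / (len(LEVELS) - 1)

print(level_to_score("not_helpful"))       # 0.0
print(level_to_score("above_and_beyond"))  # 1.0
```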
Multi-turn Conversation Simulation
Simulate realistic user interactions with dynamic, goal-oriented conversations using ActorSimulator:
```python
from strands import Agent
from strands_evals import Case, Experiment, ActorSimulator
from strands_evals.evaluators import HelpfulnessEvaluator, GoalSuccessRateEvaluator
from strands_evals.mappers import StrandsInMemorySessionMapper
from strands_evals.telemetry import StrandsEvalsTelemetry

# Set up telemetry
telemetry = StrandsEvalsTelemetry().setup_in_memory_exporter()
memory_exporter = telemetry.in_memory_exporter

def task_function(case: Case) -> dict:
    # Create a simulator to drive the conversation
    simulator = ActorSimulator.from_case_for_user_simulator(
        case=case,
        max_turns=10,
    )
    # Create the agent under evaluation
    agent = Agent(
        trace_attributes={
            "gen_ai.conversation.id": case.session_id,
            "session.id": case.session_id,
        },
        callback_handler=None,
    )
    # Run the multi-turn conversation
    all_spans = []
    user_message = case.input
    while simulator.has_next():
        memory_exporter.clear()
        agent_response = agent(user_message)
        turn_spans = list(memory_exporter.get_finished_spans())
        all_spans.extend(turn_spans)
        user_result = simulator.act(str(agent_response))
        user_message = str(user_result.structured_output.message)
    # Map spans to a session for evaluation
    mapper = StrandsInMemorySessionMapper()
    session = mapper.map_to_session(all_spans, session_id=case.session_id)
    return {"output": str(agent_response), "trajectory": session}

# Use evaluators to assess the simulated conversations
evaluators = [
    HelpfulnessEvaluator(),
    GoalSuccessRateEvaluator(),
]
experiment = Experiment(cases=test_cases, evaluators=evaluators)
reports = experiment.run_evaluations(task_function)
```
Key Benefits:
- Dynamic Interactions: Simulator adapts responses based on agent behavior
- Goal-Oriented Testing: Verify agents can complete user objectives through dialogue
- Realistic Conversations: Generate authentic multi-turn interaction patterns
- No Predefined Scripts: Test agents without hardcoded conversation paths
- Comprehensive Evaluation: Combine with trace-based evaluators for full assessment
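The driver loop above depends only on the simulator exposing a `has_next()`/`act()` contract, so it can be exercised without the SDK by substituting a scripted stand-in. The stub below is entirely hypothetical (it is not part of strands_evals) and replays fixed turns instead of adapting to the agent, but it shows the shape of the loop:

```python
class ScriptedSimulator:
    """Stand-in for ActorSimulator: replays a fixed list of user turns."""

    def __init__(self, turns: list[str]):
        self._turns = iter(turns)
        self._next = next(self._turns, None)

    def has_next(self) -> bool:
        return self._next is not None

    def act(self, agent_response: str) -> str:
        # A real simulator would condition the next user message on
        # agent_response; this stub just replays its script.
        message, self._next = self._next, next(self._turns, None)
        return message

sim = ScriptedSimulator(["What's 2 + 2?", "And times 3?"])
transcript = []
user_message = "Hi, I need help with math."
while sim.has_next():
    agent_response = f"(agent reply to: {user_message})"  # stand-in agent call
    user_message = sim.act(agent_response)
    transcript.append((agent_response, user_message))

print(len(transcript))  # 2
```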
Automated Experiment Generation
Generate comprehensive test suites automatically from context descriptions:
```python
from strands_evals.generators import ExperimentGenerator
from strands_evals.evaluators import TrajectoryEvaluator

# Define the available tools and context
tool_context = """
Available tools:
- calculator(expression: str) -> float: Evaluate mathematical expressions
- web_search(query: str) -> str: Search the web for information
- file_read(path: str) -> str: Read file contents
"""

# Generate an experiment with multiple test cases
generator = ExperimentGenerator[str, str](str, str)
experiment = await generator.from_context_async(
    context=tool_context,
    num_cases=10,
    evaluator=TrajectoryEvaluator,
    task_description="Math and research assistant with tool usage",
    num_topics=3,  # Distribute cases across multiple topics
)

# Save the generated experiment
experiment.to_file("generated_experiment", "json")
```
Custom Evaluators with Structured Output
Create domain-specific evaluation logic with standardized output format:
```python
from strands_evals.evaluators import Evaluator
from strands_evals.types import EvaluationData, EvaluationOutput

class PolicyComplianceEvaluator(Evaluator[str, str]):
    def evaluate(self, evaluation_case: EvaluationData[str, str]) -> EvaluationOutput:
        # Custom evaluation logic
        response = evaluation_case.actual_output
        # Check for policy violations
        violations = self._check_policy_violations(response)
        if not violations:
            return EvaluationOutput(
                score=1.0,
                test_pass=True,
                reason="Response complies with all policies",
                label="compliant",
            )
        else:
            return EvaluationOutput(
                score=0.0,
                test_pass=False,
                reason=f"Policy violations: {', '.join(violations)}",
                label="non_compliant",
            )

    def _check_policy_violations(self, response: str) -> list[str]:
        # Implementation details...
        return []
```
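One simple (and deliberately naive) way to fill in a policy check like `_check_policy_violations` is a denylist scan; a production compliance check would more likely use a classifier or another LLM call. The phrases below are illustrative, not a real policy:

```python
# Naive denylist scan; phrase -> rule name. Illustrative only.
POLICY_DENYLIST = {
    "guaranteed returns": "no financial guarantees",
    "medical diagnosis": "no medical advice",
}

def check_policy_violations(response: str) -> list[str]:
    lowered = response.lower()
    return [rule for phrase, rule in POLICY_DENYLIST.items() if phrase in lowered]

print(check_policy_violations("This fund offers guaranteed returns!"))
# ['no financial guarantees']
print(check_policy_violations("Paris is the capital of France."))
# []
```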
Tool Usage and Parameter Evaluation
Evaluate specific aspects of tool usage with specialized evaluators:
```python
from strands_evals.evaluators import ToolSelectionAccuracyEvaluator, ToolParameterAccuracyEvaluator

# Evaluate whether the correct tools were selected
tool_selection_evaluator = ToolSelectionAccuracyEvaluator(
    rubric="Score 1.0 if optimal tools selected, 0.5 if suboptimal but functional, 0.0 if wrong tools"
)

# Evaluate whether the tool parameters were correct
tool_parameter_evaluator = ToolParameterAccuracyEvaluator(
    rubric="Score based on parameter accuracy and appropriateness for the task"
)
```
Available Evaluators
Output-Based Evaluators
These evaluators work directly with inputs and outputs without requiring OpenTelemetry traces:
- OutputEvaluator: Flexible LLM-based evaluation with custom rubrics
- TrajectoryEvaluator: Action sequence evaluation with built-in scoring tools (supports both list-based trajectories and Session traces via extractors)
- InteractionsEvaluator: Multi-agent interaction and handoff evaluation
- Custom Evaluators: Extensible base class for domain-specific logic
Trace-Based Evaluators
These evaluators require OpenTelemetry traces (Session objects) to analyze agent behavior:
Tool-Level Evaluators
Evaluate individual tool calls within a conversation:
- ToolSelectionAccuracyEvaluator: Evaluates appropriateness of tool choices at specific points
- ToolParameterAccuracyEvaluator: Evaluates correctness of tool parameters based on context
Trace-Level Evaluators
Evaluate the most recent turn in a conversation:
- HelpfulnessEvaluator: Seven-level helpfulness assessment from user perspective
- FaithfulnessEvaluator: Evaluates if responses are grounded in conversation history
- CoherenceEvaluator: Assesses logical cohesion and reasoning quality with five-level scoring
- ConcisenessEvaluator: Evaluates response brevity with three-level scoring
- ResponseRelevanceEvaluator: Evaluates relevance of responses to user questions
- HarmfulnessEvaluator: Binary evaluation for harmful content detection
Session-Level Evaluators
Evaluate entire conversation sessions:
- GoalSuccessRateEvaluator: Measures if user goals were achieved across the full conversation
Experiment Management and Serialization
Save, load, and version experiments for reproducibility:
```python
# Save experiment with metadata
experiment.to_file("customer_service_eval", "json")

# Load experiment from file
loaded_experiment = Experiment.from_file("./experiment_files/customer_service_eval.json", "json")

# Experiment files include:
# - Test cases with metadata
# - Evaluator configuration
# - Expected outputs and trajectories
# - Versioning information
```
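Because the on-disk format is plain JSON, saved experiments can be inspected, diffed, and committed to version control like any other file. The snippet below sketches the same save/load round trip with a hypothetical payload shape; the actual schema is defined by strands_evals and is not reproduced here:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical payload shape; the real schema is defined by the SDK.
payload = {
    "version": "0.1.13",
    "cases": [{"name": "knowledge-1", "input": "What is the capital of France?"}],
    "evaluators": [{"type": "OutputEvaluator"}],
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "customer_service_eval.json"
    path.write_text(json.dumps(payload, indent=2))
    loaded = json.loads(path.read_text())

print(loaded == payload)  # True
```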
Evaluation Metrics and Analysis
Track comprehensive metrics across multiple dimensions:
```python
# Built-in metrics to consider:
metrics = {
    "accuracy": "Factual correctness of responses",
    "task_completion": "Whether the agent completed the task",
    "tool_selection": "Appropriateness of tool choices",
    "response_time": "Agent response latency",
    "hallucination_rate": "Frequency of fabricated information",
    "token_usage": "Efficiency of token consumption",
    "user_satisfaction": "Subjective helpfulness ratings",
}

# Generate analysis reports
reports = experiment.run_evaluations(task_function)
reports[0].run_display()  # Interactive display with metrics breakdown
```
Best Practices
Evaluation Strategy
- Diversify Test Cases: Cover knowledge, reasoning, tool usage, conversation, edge cases, and safety scenarios
- Use Statistical Baselines: Run multiple evaluations to account for LLM non-determinism
- Combine Multiple Evaluators: Use output, trajectory, and helpfulness evaluators together
- Regular Evaluation Cadence: Implement consistent evaluation schedules for continuous improvement
Performance Optimization
- Use Extractors: Always use `tools_use_extractor` functions to prevent context overflow
- Update Descriptions Dynamically: Call `update_trajectory_description()` with tool descriptions
- Choose Appropriate Judge Models: Use stronger models for complex evaluations
- Batch Evaluations: Process multiple test cases efficiently
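On the batching point: when the task function is I/O-bound (as LLM calls typically are), a thread pool is one generic way to fan cases out concurrently. Whether `run_evaluations` already parallelizes internally is not documented above, so treat this as a general pattern rather than SDK guidance; `task_function` here is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def task_function(case_input: str) -> str:
    # Placeholder for an agent call; in practice this would hit an LLM.
    return f"response to {case_input!r}"

inputs = ["case-1", "case-2", "case-3"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order even though calls run concurrently.
    outputs = list(pool.map(task_function, inputs))

print(len(outputs))  # 3
```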
Experiment Design
- Write Clear Rubrics: Include explicit scoring criteria and examples
- Include Expected Trajectories: Define exact sequences for trajectory evaluation
- Use Appropriate Matching: Choose between exact, in-order, or any-order matching
- Version Control: Track agent configurations alongside evaluation results
Documentation
For detailed guidance and examples, explore our documentation and the samples linked above.
Contributing ❤️
We welcome contributions! See our Contributing Guide for details on:
- Development setup
- Contributing via Pull Requests
- Code of Conduct
- Reporting security issues
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Security
See CONTRIBUTING for more information.