Letta Evals
Evaluation framework for Letta AI agents
Letta Evals provides a framework for evaluating AI agents built with Letta. It offers a flexible evaluation system for testing different dimensions of agent behavior, along with the ability to write custom evals for the use cases you care about. You can use your own datasets to build private evals that represent common patterns in your agentic workflows.
If you are building agentic systems, creating high-quality evals is one of the most impactful things you can do. Without evals, it can be difficult and time-consuming to understand how agent configurations, model versions, or prompt changes affect your use case.
Setup
To run evals against Letta agents, you will need a running Letta server. You can either:
- Self-hosted: Follow the Letta installation guide to get started with self-hosting your server.
- Letta Cloud: Create an account at app.letta.com and configure your environment:
export LETTA_API_KEY=your-api-key        # get from the Letta Cloud dashboard
export LETTA_PROJECT_ID=your-project-id  # get from the Letta Cloud dashboard
Then set `base_url: https://api.letta.com/` in your suite YAML.
If you plan to use LLM-based grading (rubric graders), you'll also need to configure API keys for your chosen provider (e.g., OPENAI_API_KEY).
Minimum Required Version: Python 3.9
Installing Letta Evals
If you are going to create custom evals or contribute to this repository, clone the repo directly from GitHub and install the development dependencies:
git clone https://github.com/letta-ai/letta-evals.git
cd letta-evals
# we recommend uv
uv sync --extra dev
With an editable install, changes you make to your evals are reflected immediately, without reinstalling.
Running Evals Only
If you simply want to run existing evals locally, you can install the package via pip:
pip install letta-evals
Quick Start
- Create a test dataset (`dataset.jsonl`):
{"input": "What's the capital of France?", "ground_truth": "Paris"}
{"input": "Calculate 2+2", "ground_truth": "4"}
- Write a suite configuration (`suite.yaml`):
name: my-eval-suite
dataset: dataset.jsonl
target:
kind: letta_agent
agent_file: my_agent.af # or use agent_id for existing agents
base_url: http://localhost:8283
graders:
quality:
kind: tool
function: contains # or exact_match
extractor: last_assistant
gate:
kind: simple
metric_key: quality
aggregation: avg_score
op: gte
value: 0.75 # require average score >= 0.75
- Run the evaluation:
letta-evals run suite.yaml
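The `gate` in the suite above passes when the aggregated metric clears the threshold. A minimal sketch of the `avg_score`/`gte` semantics (the score values here are purely illustrative):

```python
# Illustrative gate check matching the suite config above:
# aggregation: avg_score, op: gte, value: 0.75
scores = [1.0, 0.5, 1.0, 0.75]      # per-sample scores from the "quality" grader
avg = sum(scores) / len(scores)     # avg_score aggregation
passed = avg >= 0.75                # op: gte against value: 0.75

print(f"avg={avg}, passed={passed}")
```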
Running Evals
The core evaluation flow is:
Dataset → Target (Agent) → Extractor → Grader → Gate → Result
# run an evaluation suite with real-time progress
letta-evals run suite.yaml
# save results to a directory (header.json, summary.json, results.jsonl)
letta-evals run suite.yaml --output results
# run multiple times for statistical analysis
letta-evals run suite.yaml --num-runs 5
# validate suite configuration before running
letta-evals validate suite.yaml
# list available components
letta-evals list-extractors
letta-evals list-graders
See the examples/ directory for complete working examples of different eval types.
Writing Evals
Letta Evals supports multiple approaches for creating evaluations, from simple YAML-based configs to fully custom Python implementations.
Getting Started
We suggest getting started with these examples:
- Basic tool grading: `examples/simple-tool-grader/` - simple string matching with the `exact_match` and `contains` functions
- LLM-as-judge grading: `examples/simple-rubric-grader/` - using rubric graders with custom prompts for nuanced evaluation
- Agent-as-judge grading: `examples/letta-agent-rubric-grader/` - using a Letta agent as an LLM judge (no API keys required!)
- Multi-grader gates: `examples/multi-grader-gate/` - combining multiple graders with logical AND/OR gates, weighted averages, and advanced aggregation functions
- Memory block extraction: `examples/multiturn-memory-block-extractor/` - extracting and evaluating agent memory across multiturn conversations
- Multi-model evaluation: `examples/multi-model-simple-rubric-grader/` - testing across multiple LLM configurations
- Programmatic agent creation: `examples/programmatic-agent-creation/` - using agent factories to create agents dynamically per sample
- Custom graders and extractors: `examples/custom-tool-grader-and-extractor/` - implementing custom evaluation logic with Python decorators
- Letta Code CLI evaluation: `examples/letta-code-simple-edit/` - testing autonomous coding agents with async graders and subprocess execution, including multi-model evaluation support
Writing Custom Components
Letta Evals provides Python decorators for extending the framework:
- `@grader`: Register custom scoring functions for domain-specific evaluation logic
- `@extractor`: Create custom extractors to parse agent responses in specialized ways
- `@agent_factory`: Define programmatic agent creation for dynamic instantiation per sample
- `@suite_setup`: Run initialization code before evaluation starts. Supports three signatures:
  - `() -> None` - run once at the start with no parameters
  - `(client: AsyncLetta) -> None` - run once at the start with client access
  - `(client: AsyncLetta, model_name: str) -> None` - run once per model when evaluating multiple models (useful for model-specific setup like creating isolated working directories)
See examples/custom-tool-grader-and-extractor/ for implementation examples.
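As a rough sketch, a custom grader is just a scoring function registered with the framework. The scoring logic below is hypothetical (keyword overlap); the `@grader` decorator and its import path are assumptions, so they are commented out to keep the snippet standalone (see `examples/custom-tool-grader-and-extractor/` for the real registration pattern):

```python
# from letta_evals import grader  # assumed import path; see the examples/ dir

# @grader
def keyword_overlap(output: str, ground_truth: str) -> float:
    """Score by the fraction of ground-truth keywords found in the output."""
    wanted = set(ground_truth.lower().split())
    if not wanted:
        return 0.0
    found = {word for word in wanted if word in output.lower()}
    return len(found) / len(wanted)
```

A grader like this could then be referenced by name from a suite YAML in place of the built-in `exact_match` or `contains` functions.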
FAQ
Do you have examples of different eval types?
- Yes! See the `examples/` directory. Each subdirectory contains a complete working example with a dataset, suite config, and any custom components.
Can I use this without writing any Python code?
- Absolutely! You can create powerful evals using just YAML configs and JSONL datasets. See `examples/simple-tool-grader/` or `examples/simple-rubric-grader/` for code-free examples.
How do I evaluate multi-turn agent interactions?
- Letta Evals natively supports multiturn conversations! Simply provide `input` as a list of strings in your dataset instead of a single string. The framework will send each message sequentially and capture the full trajectory. Use extractors like `last_turn`, `all_assistant`, or `memory_block` to evaluate different aspects of the multiturn interaction. See `examples/multiturn-memory-block-extractor/` for a complete example testing memory updates across conversation turns.
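For instance, a multiturn dataset row might look like this (the `input` and `ground_truth` field names follow the Quick Start dataset format; the conversation content is illustrative):

```json
{"input": ["Hi! My name is Sam.", "What's my name?"], "ground_truth": "Sam"}
```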
Can I test the same agent with different LLM models?
- Yes! Use the multi-model configuration feature. See `examples/multi-model-simple-rubric-grader/` for an example that tests one agent with multiple model configurations.
Can I run evaluations multiple times to measure consistency?
- Yes! Run evaluations multiple times to measure consistency and variance. See `examples/simple-tool-grader/multi_run_tool_output_suite.yaml` for an example:
# run 5 times and get mean/std dev statistics
letta-evals run suite.yaml --num-runs 5 --output results/
Results include aggregate statistics across runs, with mean and standard deviation for all metrics.
Can I monitor long-running evaluations in real-time?
- Yes! Results are written incrementally as JSONL, allowing you to monitor evaluations in real-time and resume interrupted runs.
Can I reuse agent trajectories when testing different graders?
- Yes! Use `--cached-results` to reuse agent trajectories across evaluations, avoiding redundant agent runs when testing different graders.
Can I evaluate Letta Code agents across different models?
- Yes! The Letta Code target supports evaluating multiple models. In your suite YAML, specify multiple model handles:
target:
  kind: letta_code
  model_handles:
    - anthropic/claude-sonnet-4-5-20250929
    - gpt-5-low
The framework automatically creates isolated working directories for each model to prevent interference between concurrent evaluations. When combined with `@suite_setup` functions that accept `model_name`, you can perform model-specific initialization for each evaluation run.
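A hedged sketch of a per-model `@suite_setup` hook in the `(client, model_name)` style described above. Everything here is hypothetical: the decorator import is assumed (and commented out so the helper runs standalone), and creating a working directory is just one example of model-specific setup:

```python
import os
import tempfile

# from letta_evals import suite_setup  # assumed import path

# @suite_setup
def make_model_workdir(client=None, model_name: str = "default"):
    """Create an isolated working directory for one model's evaluation run."""
    safe_name = model_name.replace("/", "_")  # model handles may contain '/'
    path = os.path.join(tempfile.gettempdir(), f"letta-evals-{safe_name}")
    os.makedirs(path, exist_ok=True)
    return path
```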
Can I use this in CI/CD pipelines?
- Absolutely! Letta Evals is designed to integrate seamlessly into continuous integration workflows. Check out our `.github/workflows/e2e-tests.yml` for an example of running evaluations in GitHub Actions. The workflow automatically discovers and runs all suite files, making it easy to gate releases or validate changes to your agents.
I don't have access to LLM provider API keys - can I still use LLM-as-judge / rubric grading?
- Yes! Use the agent-as-judge feature instead of the standard rubric grader. With agent-as-judge, you configure a Letta agent (with its own LLM access) to act as the evaluator. This is perfect for:
  - teams without direct LLM API access (using Letta Cloud or managed instances)
  - scenarios where you want the judge to use tools (e.g., web search, database queries) during evaluation
  - organizations with centralized LLM access through Letta
See `examples/letta-agent-rubric-grader/` for a complete working example. The judge agent just needs a `submit_grade(score: float, rationale: str)` tool, and the framework handles the rest!
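The `submit_grade(score: float, rationale: str)` signature comes from the docs above; the body below is a hypothetical sketch of what such a judge tool might do (the range check and return shape are assumptions, not the framework's actual implementation):

```python
def submit_grade(score: float, rationale: str) -> dict:
    """Judge tool sketch: record a grade for the sample under evaluation.

    Assumes scores are normalized to [0, 1]; the real framework defines
    how the submitted grade is consumed.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    return {"score": score, "rationale": rationale}
```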
Contributing
Contributions are welcome! If you have an interesting eval or feature, please submit an issue or contact us on Discord.
License
This project is licensed under the MIT License. By contributing to evals, you agree to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. Letta reserves the right to use this data in future improvements to our product.