Universal library for evaluating AI models
AutoEvals
AutoEvals is a tool to quickly and easily evaluate AI model outputs.
It bundles together a variety of automatic evaluation methods including:
- Heuristic (e.g. Levenshtein distance)
- Statistical (e.g. BLEU)
- Model-based (using LLMs)
AutoEvals is developed by the team at Braintrust.
AutoEvals uses model-graded evaluation for a variety of subjective tasks including fact checking, safety, and more. Many of these evaluations are adapted from OpenAI's excellent evals project but are implemented so you can flexibly run them on individual examples, tweak the prompts, and debug their outputs.
You can also create your own model-graded evaluations with AutoEvals. It's easy to add custom prompts, parse outputs, and manage exceptions.
Installation
To install AutoEvals, run the following command:
pip install autoevals
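The model-graded evaluators call an LLM under the hood, so you'll also need credentials for your model provider. A minimal sketch, assuming the key is picked up from the standard OPENAI_API_KEY environment variable (set it in your shell if you prefer):
import os
# Assumption: model-graded evaluators read the OpenAI key from the environment.
# Replace the placeholder with your real key, or export it in your shell instead.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"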
Example
Use AutoEvals to model-grade an example LLM completion using the factuality prompt.
from autoevals.llm import *
# Create a new LLM-based evaluator
evaluator = Factuality()
# Evaluate an example LLM completion
input = "Which country has the highest population?"
output = "People's Republic of China"
expected = "China"
result = evaluator(output, expected, input=input)
# The evaluator returns a score in [0, 1] and includes the raw outputs from the evaluator
print(f"Factuality score: {result.score}")
print(f"Factuality metadata: {result.metadata['rationale']}")
Using Braintrust with AutoEvals
Once you've graded an output with AutoEvals, it's convenient to use Braintrust to log and compare your evaluation results.
from autoevals.llm import *
import braintrust
# Create a new LLM-based evaluator
evaluator = Factuality()
# Evaluate an example LLM completion
input = "Which country has the highest population?"
output = "People's Republic of China"
expected = "China"
result = evaluator(output, expected, input=input)
# The evaluator returns a score in [0, 1] and includes the raw outputs from the evaluator
print(f"Factuality score: {result.score}")
print(f"Factuality metadata: {result.metadata['rationale']}")
# Log the evaluation results to Braintrust
experiment = braintrust.init(
    project="AutoEvals", api_key="YOUR_BRAINTRUST_API_KEY"
)
experiment.log(
    inputs={"query": input},
    output=output,
    expected=expected,
    scores={
        "factuality": result.score,
    },
    metadata={
        "factuality": result.metadata,
    },
)
print(experiment.summarize())
Supported Evaluation Methods
Model-Based Classification
- Battle
- ClosedQA
- Humor
- Factuality
- Security
- Summarization
- SQL
- Translation
- Fine-tuned binary classifiers
Embeddings
- BERTScore
- Ada Embedding distance
Heuristic
- Levenshtein distance (see the example after these lists)
- Jaccard distance
- JSON diff
Statistical
- BLEU
- ROUGE
- METEOR
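All of these evaluators share the call pattern shown in the Factuality example: construct the scorer, then call it with the output and expected values (plus input where relevant). As a minimal sketch for a heuristic evaluator, assuming the Levenshtein scorer is exported from autoevals.string (module paths may differ across versions):
from autoevals.string import Levenshtein  # assumed location; check your installed version

# Heuristic evaluators compare strings directly and make no LLM calls.
levenshtein = Levenshtein()
result = levenshtein("People's Republic of China", "China")
print(f"Levenshtein score: {result.score}")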
Custom Evaluation Prompts
AutoEvals supports custom evaluation prompts for model-graded evaluation. To use them, simply pass in a prompt and scoring mechanism:
from autoevals import LLMClassifier
# Define a prompt prefix for an LLMClassifier (returns just one answer)
prompt_prefix = """
You are a technical project manager who helps software engineers generate better titles for their GitHub issues.
You will look at the issue description, and pick which of two titles better describes it.
I'm going to provide you with the issue description, and two possible titles.
Issue Description: {{input}}
1: {{output}}
2: {{expected}}
"""
# Define the scoring mechanism
# 1 if the generated answer is better than the expected answer
# 0 otherwise
output_scores = {"1": 1, "2": 0}
evaluator = LLMClassifier(
    prompt_prefix,
    output_scores,
    use_cot=False,
)
# Evaluate an example LLM completion
page_content = """
As suggested by Nicolo, we should standardize the error responses coming from GoTrue, postgres, and realtime (and any other/future APIs) so that it's better DX when writing a client,
We can make this change on the servers themselves, but since postgrest and gotrue are fully/partially external may be harder to change, it might be an option to transform the errors within the client libraries/supabase-js, could be messy?
Nicolo also dropped this as a reference: http://spec.openapis.org/oas/v3.0.3#openapi-specification"""
output = (
    "Standardize error responses from GoTrue, Postgres, and Realtime APIs for better DX"
)
expected = "Standardize Error Responses across APIs"
response = evaluator(output, expected, input=page_content)
print(f"Score: {response.score}")
print(f"Metadata: {response.metadata}")
TypeScript / Node support
Because AutoEvals uses a very simple prompt template format, it is easy to support in other languages, such as TypeScript (and eventually others). We'll publish an npm package soon, but in the meantime, feel free to grab model templates from the prompt templates directory.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: autoevals-0.0.3.tar.gz
Built Distribution: autoevals-0.0.3-py3-none-any.whl
File details
Details for the file autoevals-0.0.3.tar.gz
File metadata
- Download URL: autoevals-0.0.3.tar.gz
- Upload date:
- Size: 10.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | a01fdfa9b73aa3b9185f4ce53e8f35b2878828d734ea8ed4b72d90b9bd4be1fe
MD5 | b293a1ea992ee650671999c9c685474e
BLAKE2b-256 | b49c40626c5fc30424df615964a317e5faac650a1a4670d44a46a52e11cedc92
File details
Details for the file autoevals-0.0.3-py3-none-any.whl
File metadata
- Download URL: autoevals-0.0.3-py3-none-any.whl
- Upload date:
- Size: 10.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2b3f6873f6ec1ab8c28b6bf2862125fd0e9d3c0e595da8af85b220c0c13c12c8
MD5 | a30905b86d3b047bf0a35191a5bf49de
BLAKE2b-256 | d5fc6c56a7270b80267bf40bdb62f5a7726447b28b388ae1ec58b49f227dcf46