Open-Source Evaluation for GenAI Application Pipelines
Overview
continuous-eval is an open-source package created for granular and holistic evaluation of GenAI application pipelines.
How is continuous-eval different?
- Modularized Evaluation: Measure each module in the pipeline with tailored metrics.
- Comprehensive Metric Library: Covers Retrieval-Augmented Generation (RAG), Code Generation, Agent Tool Use, Classification and a variety of other LLM use cases. Mix and match Deterministic, Semantic and LLM-based metrics.
- Leverage User Feedback in Evaluation: Easily build a close-to-human ensemble evaluation pipeline with mathematical guarantees.
- Synthetic Dataset Generation: Generate large-scale synthetic datasets to test your pipeline.
Getting Started
This code is provided as a PyPI package. To install it, run the following command:

python3 -m pip install continuous-eval

If you want to install from source:

git clone https://github.com/relari-ai/continuous-eval.git && cd continuous-eval
poetry install --all-extras
To run LLM-based metrics, the code requires at least one LLM API key in .env. Take a look at the example env file .env.example.
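As a rough sketch (the variable name below is an assumption; check .env.example for the exact names your provider requires), the file could look like:

# .env - API key(s) for LLM-based metrics (illustrative; see .env.example)
OPENAI_API_KEY="<your-openai-api-key>"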
Run a single metric
Here's how you run a single metric on a datum. Check all available metrics here: link
from continuous_eval.metrics.retrieval import PrecisionRecallF1

datum = {
    "question": "What is the capital of France?",
    "retrieved_context": [
        "Paris is the capital of France and its largest city.",
        "Lyon is a major city in France.",
    ],
    "ground_truth_context": ["Paris is the capital of France."],
    "answer": "Paris",
    "ground_truths": ["Paris"],
}

metric = PrecisionRecallF1()
print(metric(**datum))
Available Metrics
Module | Category | Metrics
---|---|---
Retrieval | Deterministic | PrecisionRecallF1, RankedRetrievalMetrics
Retrieval | LLM-based | LLMBasedContextPrecision, LLMBasedContextCoverage
Text Generation | Deterministic | DeterministicAnswerCorrectness, DeterministicFaithfulness, FleschKincaidReadability
Text Generation | Semantic | DebertaAnswerScores, BertAnswerRelevance, BertAnswerSimilarity
Text Generation | LLM-based | LLMBasedFaithfulness, LLMBasedAnswerCorrectness, LLMBasedAnswerRelevance, LLMBasedStyleConsistency
Classification | Deterministic | ClassificationAccuracy
Code Generation | Deterministic | CodeStringMatch, PythonASTSimilarity
Code Generation | LLM-based | LLMBasedCodeGeneration
Agent Tools | Deterministic | ToolSelectionAccuracy
Custom | | Define your own metrics
To define your own metrics, you only need to extend the Metric class and implement the __call__ method. Optional methods are batch (if it is possible to implement optimizations for batch processing) and aggregate (to aggregate metric results over multiple samples).
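As a minimal sketch of a custom metric (assuming the base class can be imported as continuous_eval.metrics.base.Metric; the exact import path may differ between versions):

from continuous_eval.metrics.base import Metric

class AnswerWordCount(Metric):
    """Toy custom metric: counts the words in the generated answer."""

    def __call__(self, answer: str, **kwargs):
        # Return a dict so the result can be combined with other metric outputs.
        return {"answer_word_count": len(answer.split())}

print(AnswerWordCount()(answer="Paris is the capital of France."))  # {'answer_word_count': 6}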
Run evaluation on pipeline modules
Define modules in your pipeline and select corresponding metrics.
from continuous_eval.eval import Module, ModuleOutput, Pipeline, Dataset
from continuous_eval.metrics.retrieval import PrecisionRecallF1, RankedRetrievalMetrics
from continuous_eval.metrics.generation.text import (
    DeterministicAnswerCorrectness,
    FleschKincaidReadability,
)
from typing import List, Dict

dataset = Dataset("dataset_folder")

# Simple 3-step RAG pipeline with Retriever->Reranker->Generation
retriever = Module(
    name="Retriever",
    input=dataset.question,
    output=List[str],
    eval=[
        PrecisionRecallF1().use(
            retrieved_context=ModuleOutput(),
            ground_truth_context=dataset.ground_truth_context,
        ),
    ],
)

reranker = Module(
    name="reranker",
    input=retriever,
    output=List[Dict[str, str]],
    eval=[
        RankedRetrievalMetrics().use(
            retrieved_context=ModuleOutput(),
            ground_truth_context=dataset.ground_truth_context,
        ),
    ],
)

llm = Module(
    name="answer_generator",
    input=reranker,
    output=str,
    eval=[
        FleschKincaidReadability().use(answer=ModuleOutput()),
        DeterministicAnswerCorrectness().use(
            answer=ModuleOutput(), ground_truth_answers=dataset.ground_truths
        ),
    ],
)

pipeline = Pipeline([retriever, reranker, llm], dataset=dataset)
print(pipeline.graph_repr())  # optional: visualize the pipeline
Now you can run the evaluation on your pipeline:

eval_manager.start_run()
while eval_manager.is_running():
    if eval_manager.curr_sample is None:
        break
    q = eval_manager.curr_sample["question"]  # get the question or any other field
    # run your pipeline ...
    eval_manager.next_sample()
To log the results, call the eval_manager.log method with the module name and its output, for example:
eval_manager.log("answer_generator", response)
The evaluation manager also offers:
- eval_manager.run_metrics() to run all the metrics defined in the pipeline
- eval_manager.run_tests() to run the tests defined in the pipeline (see the documentation for more details)
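For example, after the run loop above has finished (a sketch; see the docs for how results are aggregated and persisted):

eval_manager.run_metrics()  # run all metrics defined in the pipeline
eval_manager.run_tests()    # run the tests defined in the pipeline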
Synthetic Data Generation
Ground truth data, or reference data, is important for evaluation as it offers a comprehensive and consistent measurement of system performance. However, manually curating such a golden dataset is often costly and time-consuming. We have created a synthetic data pipeline that can generate custom user interaction data for a variety of use cases such as RAG, agents, and copilots. These datasets can serve as a starting point for a golden evaluation dataset or for other training purposes. For an example with Coding Agents, try out this demo: Synthetic Data Demo
Resources
- Docs: link
- Examples Repo: end-to-end example repo
- Blog Posts:
- Practical Guide to RAG Pipeline Evaluation: Part 1: Retrieval, Part 2: Generation
- How important is a Golden Dataset for LLM evaluation? (link)
- How to evaluate complex GenAI Apps: a granular approach (link)
- Discord: Join our community of LLM developers on Discord
- Reach out to founders: Email or Schedule a chat
License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Open Analytics
We monitor basic anonymous usage statistics to understand our users' preferences, inform new features, and identify areas that might need improvement. You can take a look at exactly what we track in the telemetry code.
To disable usage tracking, set the CONTINUOUS_EVAL_DO_NOT_TRACK environment variable to true.