
llama-index packs rag_evaluator integration

Project description

Retrieval-Augmented Generation (RAG) Evaluation Pack

Get benchmark scores for your own RAG pipeline (i.e., a QueryEngine) on a RAG dataset (i.e., a LabelledRagDataset). Specifically, this pack takes a query engine and a LabelledRagDataset as input; the dataset can also be downloaded from llama-hub.

CLI Usage

You can download llamapacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack RagEvaluatorPack --download-dir ./rag_evaluator_pack

You can then inspect the files at ./rag_evaluator_pack and use them as a template for your own project!
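For example, a minimal sketch of importing the pack class from the downloaded copy (the module path here is an assumption based on the usual pack template layout; inspect the files under ./rag_evaluator_pack to confirm it):

# NOTE: this import path is an assumption; adjust it after inspecting
# the downloaded template under ./rag_evaluator_pack.
from rag_evaluator_pack.base import RagEvaluatorPack

# the class is then constructed the same way as in the Code Usage example below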

Code Usage

You can also download the pack to the ./rag_evaluator_pack directory through Python code. The sample script below demonstrates how to construct a RagEvaluatorPack using a LabelledRagDataset downloaded from llama-hub and a simple RAG pipeline built from its source documents.

from llama_index.core.llama_dataset import download_llama_dataset
from llama_index.core.llama_pack import download_llama_pack
from llama_index.core import VectorStoreIndex

# download a LabelledRagDataset from llama-hub
rag_dataset, documents = download_llama_dataset(
    "PaulGrahamEssayDataset", "./paul_graham"
)

# build a basic RAG pipeline off of the source documents
index = VectorStoreIndex.from_documents(documents=documents)
query_engine = index.as_query_engine()

# Time to benchmark/evaluate this RAG pipeline
# Download and install dependencies
RagEvaluatorPack = download_llama_pack(
    "RagEvaluatorPack", "./rag_evaluator_pack"
)

# construction requires a query_engine, a rag_dataset, and optionally a judge_llm
rag_evaluator_pack = RagEvaluatorPack(
    query_engine=query_engine, rag_dataset=rag_dataset
)

# PERFORM EVALUATION
benchmark_df = rag_evaluator_pack.run()  # async arun() also supported
print(benchmark_df)
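As noted above, construction also optionally accepts a judge_llm. A minimal sketch of supplying one explicitly (assuming the OpenAI LLM integration is installed; the model name is illustrative):

from llama_index.llms.openai import OpenAI

# use a specific LLM as the judge (model choice is illustrative)
judge_llm = OpenAI(model="gpt-4")

rag_evaluator_pack = RagEvaluatorPack(
    query_engine=query_engine,
    rag_dataset=rag_dataset,
    judge_llm=judge_llm,
)
benchmark_df = rag_evaluator_pack.run()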

Output:

rag                            base_rag
metrics
mean_correctness_score         4.511364
mean_relevancy_score           0.931818
mean_faithfulness_score        1.000000
mean_context_similarity_score  0.945952

Note that rag_evaluator_pack.run() also saves two files in the directory from which the pack was invoked:

.
├── benchmark.csv (CSV format of the benchmark scores)
└── _evaluations.json (raw evaluation results for all examples & predictions)
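If you want to post-process these outputs, a minimal sketch (assuming pandas is installed and the files are in the current working directory; the exact column layout of benchmark.csv may vary):

import json

import pandas as pd

# reload the benchmark scores saved by rag_evaluator_pack.run()
scores = pd.read_csv("benchmark.csv", index_col=0)
print(scores)

# reload the raw per-example evaluation results
with open("_evaluations.json") as f:
    evaluations = json.load(f)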

