
llama-index packs rag_evaluator integration


Retrieval-Augmented Generation (RAG) Evaluation Pack

Get benchmark scores for your own RAG pipeline (i.e., a QueryEngine) on a RAG dataset (i.e., a LabelledRagDataset). Specifically, this pack takes as input a query engine and a LabelledRagDataset, which can also be downloaded from llama-hub.

CLI Usage

You can download llama packs directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack RagEvaluatorPack --download-dir ./rag_evaluator_pack

You can then inspect the files at ./rag_evaluator_pack and use them as a template for your own project!
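
For instance, once downloaded, the pack's class can be imported straight from that directory (a minimal sketch; the module path below assumes the usual llama-pack layout of a base.py inside the download directory, which may vary by version):

from rag_evaluator_pack.base import RagEvaluatorPack  # module path is an assumption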

Code Usage

You can also download the pack to the ./rag_evaluator_pack directory through Python code. The sample script below demonstrates how to construct a RagEvaluatorPack using a LabelledRagDataset downloaded from llama-hub and a simple RAG pipeline built on its source documents.

from llama_index.core.llama_dataset import download_llama_dataset
from llama_index.core.llama_pack import download_llama_pack
from llama_index.core import VectorStoreIndex

# download a LabelledRagDataset from llama-hub
rag_dataset, documents = download_llama_dataset(
    "PaulGrahamEssayDataset", "./paul_graham"
)

# build a basic RAG pipeline off of the source documents
index = VectorStoreIndex.from_documents(documents=documents)
query_engine = index.as_query_engine()

# Time to benchmark/evaluate this RAG pipeline
# Download and install dependencies
RagEvaluatorPack = download_llama_pack(
    "RagEvaluatorPack", "./rag_evaluator_pack"
)

# construction requires a query_engine, a rag_dataset, and optionally a judge_llm
rag_evaluator_pack = RagEvaluatorPack(
    query_engine=query_engine, rag_dataset=rag_dataset
)
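
# a judge LLM can be supplied explicitly as well; the sketch below is
# illustrative only (the OpenAI integration and model choice are
# assumptions, not requirements of the pack):
#
# from llama_index.llms.openai import OpenAI
#
# rag_evaluator_pack = RagEvaluatorPack(
#     query_engine=query_engine,
#     rag_dataset=rag_dataset,
#     judge_llm=OpenAI(model="gpt-4"),
# )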

# PERFORM EVALUATION
benchmark_df = rag_evaluator_pack.run()  # async arun() also supported
print(benchmark_df)

Output:

rag                            base_rag
metrics
mean_correctness_score         4.511364
mean_relevancy_score           0.931818
mean_faithfulness_score        1.000000
mean_context_similarity_score  0.945952
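
The async variant mentioned in the comment above can be driven like this (a minimal sketch):

import asyncio

async def main():
    # arun() is the async counterpart of run()
    benchmark_df = await rag_evaluator_pack.arun()
    print(benchmark_df)

asyncio.run(main())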

Note that rag_evaluator_pack.run() also saves two files in the directory from which the pack was invoked:

.
├── benchmark.csv (CSV format of the benchmark scores)
└── _evaluations.json (raw evaluation results for all examples & predictions)
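
Both files can be reloaded for further analysis, e.g. (a minimal sketch; assumes _evaluations.json is plain JSON):

import json

import pandas as pd

# reload the saved benchmark scores and raw per-example evaluations
benchmark_df = pd.read_csv("benchmark.csv", index_col=0)
with open("_evaluations.json") as f:
    evaluations = json.load(f)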
