llama-index packs rag_evaluator integration

Project description

Retrieval-Augmented Generation (RAG) Evaluation Pack

Get benchmark scores for your own RAG pipeline (i.e., a QueryEngine) on a RAG dataset (i.e., a LabelledRagDataset). Specifically, this pack takes a query engine and a LabelledRagDataset as input; the dataset can also be downloaded from llama-hub.

CLI Usage

You can download llama-packs directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack RagEvaluatorPack --download-dir ./rag_evaluator_pack

You can then inspect the files at ./rag_evaluator_pack and use them as a template for your own project!
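
Alternatively, the pack is published on PyPI (see the distribution files below), so it can be installed with pip:

pip install llama-index-packs-rag-evaluator

With the package installed, the pack can typically be imported directly instead of being downloaded at runtime (the import path below is assumed from the usual packs layout):

from llama_index.packs.rag_evaluator import RagEvaluatorPack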

Code Usage

You can also download the pack to the ./rag_evaluator_pack directory through Python code. The sample script below demonstrates how to construct the RagEvaluatorPack using a LabelledRagDataset downloaded from llama-hub and a simple RAG pipeline built from its source documents.

from llama_index.core.llama_dataset import download_llama_dataset
from llama_index.core.llama_pack import download_llama_pack
from llama_index.core import VectorStoreIndex

# download a LabelledRagDataset from llama-hub
rag_dataset, documents = download_llama_dataset(
    "PaulGrahamEssayDataset", "./paul_graham"
)

# build a basic RAG pipeline off of the source documents
index = VectorStoreIndex.from_documents(documents=documents)
query_engine = index.as_query_engine()

# Time to benchmark/evaluate this RAG pipeline
# Download and install dependencies
RagEvaluatorPack = download_llama_pack(
    "RagEvaluatorPack", "./rag_evaluator_pack"
)

# construction requires a query_engine, a rag_dataset, and optionally a judge_llm
# (see the judge_llm sketch further below)
rag_evaluator_pack = RagEvaluatorPack(
    query_engine=query_engine, rag_dataset=rag_dataset
)

# PERFORM EVALUATION
benchmark_df = rag_evaluator_pack.run()  # async arun() also supported
print(benchmark_df)

Output:

rag                            base_rag
metrics
mean_correctness_score         4.511364
mean_relevancy_score           0.931818
mean_faithfulness_score        1.000000
mean_context_similarity_score  0.945952
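
As noted in the script above, run() also has an async counterpart. A minimal sketch of driving it from a plain script with asyncio (in a notebook you could await it directly):

import asyncio

async def main():
    # arun() is the async counterpart of run()
    benchmark_df = await rag_evaluator_pack.arun()
    print(benchmark_df)

asyncio.run(main())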

Note that rag_evaluator_pack.run() will also save two files in the directory from which the pack was invoked:

.
├── benchmark.csv (CSV format of the benchmark scores)
└── _evaluations.json (raw evaluation results for all examples & predictions)
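
As mentioned in the construction snippet above, you can optionally pass a judge_llm to control which model scores the predictions. A minimal sketch, assuming the OpenAI integration (llama-index-llms-openai) is installed; the model name is only illustrative:

from llama_index.llms.openai import OpenAI

# pass an explicit judge model rather than relying on the default
judge_llm = OpenAI(model="gpt-4")
rag_evaluator_pack = RagEvaluatorPack(
    query_engine=query_engine,
    rag_dataset=rag_dataset,
    judge_llm=judge_llm,
)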

Download files


Source Distribution

llama_index_packs_rag_evaluator-0.4.1.tar.gz (7.1 kB)


Built Distribution

llama_index_packs_rag_evaluator-0.4.1-py3-none-any.whl

File details

Details for the file llama_index_packs_rag_evaluator-0.4.1.tar.gz.

File hashes

Algorithm    Hash digest
SHA256       d25bb5ebce19a929d3d3deeafa8ada7f3e9fef57589f84f502a6508a9e08e1d8
MD5          901b6db9e4be698beb01bc938994cda7
BLAKE2b-256  8db11136bacaaa4f73cefd5611ad2466fbf45a57f01e87fe868fae7b13e41ce3

File details

Details for the file llama_index_packs_rag_evaluator-0.4.1-py3-none-any.whl.

File hashes

Algorithm    Hash digest
SHA256       7fc8191db05a246b44f19dfb5d444fb18e6637f9199ee67c90130bf4b8f254b9
MD5          07698597bf8f918e3b8d267694f942b3
BLAKE2b-256  ffad2ccda50a369ac6d63008221a6d146fec205a5b79a22c469fc46e5ed766eb
