A RAG Checker Python package

Project description

RAGChecker: A Fine-grained Framework For Diagnosing RAG

RAGChecker is an advanced automatic evaluation framework designed to assess and diagnose Retrieval-Augmented Generation (RAG) systems. It provides a comprehensive suite of metrics and tools for in-depth analysis of RAG performance.

Figure: RAGChecker Metrics

🌟 Highlighted Features

  • Holistic Evaluation: RAGChecker offers Overall Metrics for assessing the entire RAG pipeline end to end.

  • Diagnostic Metrics: Diagnostic Retriever Metrics for analyzing the retrieval component and Diagnostic Generator Metrics for evaluating the generation component. These metrics provide valuable insights for targeted improvements.

  • Fine-grained Evaluation: Utilizes claim-level entailment operations for fine-grained evaluation.

  • Benchmark Dataset: A comprehensive RAG benchmark dataset with 4k questions covering 10 domains (upcoming).

  • Meta-Evaluation: A human-annotated preference dataset for evaluating the correlations of RAGChecker's results with human judgments.

RAGChecker empowers developers and researchers to thoroughly evaluate, diagnose, and enhance their RAG systems with precision and depth.

🔥 News

❤️ Citation

RAGChecker paper: https://arxiv.org/pdf/2408.08067

If you use RAGChecker in your work, please cite us:

@misc{ru2024ragcheckerfinegrainedframeworkdiagnosing,
      title={RAGChecker: A Fine-grained Framework for Diagnosing Retrieval-Augmented Generation}, 
      author={Dongyu Ru and Lin Qiu and Xiangkun Hu and Tianhang Zhang and Peng Shi and Shuaichen Chang and Jiayang Cheng and Cunxiang Wang and Shichao Sun and Huanyu Li and Zizhao Zhang and Binjie Wang and Jiarong Jiang and Tong He and Zhiguo Wang and Pengfei Liu and Yue Zhang and Zheng Zhang},
      year={2024},
      eprint={2408.08067},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.08067}, 
}

🚀 Quick Start

Setup Environment

pip install ragchecker
python -m spacy download en_core_web_sm
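
To confirm the environment is ready before running the pipeline, you can run a quick sanity check. This is a minimal sketch that only verifies the package is importable and the spaCy model downloaded above can be loaded:

import spacy
from ragchecker import RAGResults, RAGChecker  # verifies the package is importable

# verifies the spaCy model downloaded above can be loaded
nlp = spacy.load("en_core_web_sm")
print(nlp("RAGChecker environment is ready.").text)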

Run the Checking Pipeline with CLI

Please prepare your own data in the same format as examples/checking_inputs.json. The only required annotation for each query is the ground truth answer (gt_answer).

{
  "results": [
    {
      "query_id": "<query id>", # string
      "query": "<input query>", # string
      "gt_answer": "<ground truth answer>", # string
      "response": "<response generated by the RAG generator>", # string
      "retrieved_context": [ # a list of retrieved chunks by the retriever
        {
          "doc_id": "<doc id>", # string, optional
          "text": "<content of the chunk>" # string
        },
        ...
      ]
    },
    ...
  ]
}
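
For concreteness, the snippet below builds a single-query input file in this format and writes it to disk. It is a minimal sketch; the query, answer, and context strings (and the file name my_checking_inputs.json) are hypothetical placeholders.

import json

# A minimal, hypothetical example following the schema above
rag_results = {
    "results": [
        {
            "query_id": "000",
            "query": "What is the capital of France?",
            "gt_answer": "The capital of France is Paris.",
            "response": "Paris is the capital of France.",
            "retrieved_context": [
                {
                    "doc_id": "doc_001",  # optional
                    "text": "Paris is the capital and most populous city of France."
                }
            ]
        }
    ]
}

with open("my_checking_inputs.json", "w") as fp:
    json.dump(rag_results, fp, indent=2)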

If you are using the AWS Bedrock version of Llama 3 70B for the claim extractor and checker, run the checking pipeline with the following command. The checking results, as well as intermediate results, will be saved to --output_path:

ragchecker-cli \
    --input_path=examples/checking_inputs.json \
    --output_path=examples/checking_outputs.json \
    --extractor_name=bedrock/meta.llama3-70b-instruct-v1:0 \
    --checker_name=bedrock/meta.llama3-70b-instruct-v1:0 \
    --batch_size_extractor=64 \
    --batch_size_checker=64 \
    --metrics all_metrics \
    # --disable_joint_check  # uncomment this line for one-by-one checking, slower but slightly more accurate

Please refer to RefChecker's guidance for setting up the extractor and checker models.

It will output the values of the metrics as follows:

Results for examples/checking_outputs.json:
{
  "overall_metrics": {
    "precision": 73.3,
    "recall": 62.5,
    "f1": 67.3
  },
  "retriever_metrics": {
    "claim_recall": 61.4,
    "context_precision": 87.5
  },
  "generator_metrics": {
    "context_utilization": 87.5,
    "noise_sensitivity_in_relevant": 22.5,
    "noise_sensitivity_in_irrelevant": 0.0,
    "hallucination": 4.2,
    "self_knowledge": 25.0,
    "faithfulness": 70.8
  }
}
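
As a rough sanity check on how these numbers relate: f1 is the harmonic mean of precision and recall (the small gap from the reported value suggests per-query averaging, which is an assumption based on these example numbers), and hallucination, self_knowledge, and faithfulness cover all response claims, so they sum to 100 in this example. A minimal sketch:

# Harmonic-mean relation between the reported precision/recall and f1
precision, recall = 73.3, 62.5
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 1))  # 67.5 -- close to the reported 67.3 (per-query averaging assumed)

# hallucination, self_knowledge, and faithfulness partition the response claims
print(round(4.2 + 25.0 + 70.8, 1))  # 100.0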

Run the Checking Pipeline with Python

from ragchecker import RAGResults, RAGChecker
from ragchecker.metrics import all_metrics


# initialize RAGResults from JSON/dict
with open("examples/checking_inputs.json") as fp:
    rag_results = RAGResults.from_json(fp.read())

# set up the evaluator
evaluator = RAGChecker(
    extractor_name="bedrock/meta.llama3-70b-instruct-v1:0",
    checker_name="bedrock/meta.llama3-70b-instruct-v1:0",
    batch_size_extractor=32,
    batch_size_checker=32
)

# evaluate results with selected metrics or certain groups, e.g., retriever_metrics, generator_metrics, all_metrics
evaluator.evaluate(rag_results, all_metrics)
print(rag_results)

"""Output
RAGResults(
  2 RAG results,
  Metrics:
  {
    "overall_metrics": {
      "precision": 76.4,
      "recall": 62.5,
      "f1": 68.3
    },
    "retriever_metrics": {
      "claim_recall": 61.4,
      "context_precision": 87.5
    },
    "generator_metrics": {
      "context_utilization": 87.5,
      "noise_sensitivity_in_relevant": 19.1,
      "noise_sensitivity_in_irrelevant": 0.0,
      "hallucination": 4.5,
      "self_knowledge": 27.3,
      "faithfulness": 68.2
    }
  }
)
"""

Meta-Evaluation

Please refer to data/meta_evaluation for the meta-evaluation of RAGChecker's effectiveness.

Work with LlamaIndex

RAGChecker now integrates with LlamaIndex, providing a powerful evaluation tool for RAG applications built with LlamaIndex. For detailed instructions, please refer to the LlamaIndex documentation on RAGChecker integration. This integration lets LlamaIndex users leverage RAGChecker's comprehensive metrics to evaluate and improve their RAG systems.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
