A library for evaluating Retrieval-Augmented Generation (RAG) systems

Project description

RAG Evaluator

Overview

RAG Evaluator is a Python library for evaluating Retrieval-Augmented Generation (RAG) systems. It provides various metrics to evaluate the quality of generated text against reference text.

Installation

You can install the library using pip:

pip install evaluatorspk

Usage

Here's how to use the RAG Evaluator library:

from evaluatorspk import RAGEvaluator

# Initialize the evaluator
evaluator = RAGEvaluator()

# Input data
question = "What are the causes of climate change?"
response = "Climate change is caused by human activities."
reference = "Human activities such as burning fossil fuels cause climate change."

# Evaluate the response
metrics = evaluator.evaluate_all(question, response, reference)

# Print the results
print(metrics)

Streamlit Web App

To run the web app:

  • Change into the Streamlit app directory.
  • Create a virtual environment: python -m venv venv
  • Activate it: source venv/bin/activate (on Windows: venv\Scripts\activate)
  • Install the dependencies (e.g. pip install -r requirements.txt, if the repository provides one).
  • Start the app:

streamlit run app.py

Metrics

The following metrics are provided by the library:

  • BLEU: Measures the overlap between the generated output and reference text based on n-grams.
  • ROUGE-1: Measures the overlap of unigrams between the generated output and reference text.
  • BERT Score: Evaluates the semantic similarity between the generated output and reference text using BERT embeddings.
  • Perplexity: Measures how well a language model predicts the text.
  • Diversity: Measures the uniqueness of bigrams in the generated output.
  • Racial Bias: Detects the presence of biased language in the generated output.
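Two of these metrics are simple enough to sketch by hand. The snippet below is an illustrative implementation of ROUGE-1 recall (unigram overlap) and bigram diversity, written for this description; it is not the library's internal code, and the library may tokenize or normalize differently.

```python
def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of the reference's unique unigrams that appear in the candidate."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not ref:
        return 0.0
    overlap = sum(1 for tok in set(ref) if tok in cand)
    return overlap / len(set(ref))

def bigram_diversity(text: str) -> float:
    """Ratio of unique bigrams to total bigrams in the text (distinct-2)."""
    tokens = text.lower().split()
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:
        return 0.0
    return len(set(bigrams)) / len(bigrams)

print(rouge1_recall("climate change is caused by human activities",
                    "human activities cause climate change"))  # 0.8 ("cause" is missing)
print(bigram_diversity("the cat sat on the mat"))  # 1.0 (all five bigrams are unique)
```

Higher diversity indicates less repetitive output; a ROUGE-1 recall near 1.0 means the response covers most of the reference's vocabulary.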

Testing

To run the tests, use the following command:

python -m unittest discover -s rag_evaluator -p "test_*.py"

Download files

Download the file for your platform.

Source Distribution

evaluatorspk-0.0.1.tar.gz (4.4 kB)


Built Distribution


evaluatorspk-0.0.1-py3-none-any.whl (5.1 kB)


File details

Details for the file evaluatorspk-0.0.1.tar.gz.

File metadata

  • Download URL: evaluatorspk-0.0.1.tar.gz
  • Upload date:
  • Size: 4.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.0

File hashes

Hashes for evaluatorspk-0.0.1.tar.gz:

  • SHA256: f8c149543751e0736f3f714833221a3ea1aa7aeb26f0dc6f9674930021c78c9f
  • MD5: b9c2cd2a3190dfe3cbea32b91e3d5980
  • BLAKE2b-256: ce7134d8f027076b1eaad613f1ac2656d445e9fc9d7a38a18b3ae1665350a1de


File details

Details for the file evaluatorspk-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: evaluatorspk-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 5.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.0

File hashes

Hashes for evaluatorspk-0.0.1-py3-none-any.whl:

  • SHA256: 961b967582e168b2ebb64491f0e7076efc8087994ace78a212852b97ff45cb0c
  • MD5: 1ef89ec42edb468c414ea2dd76ec6573
  • BLAKE2b-256: c8c32a939696dc15ef4bdae3eb3f0431a4b75bbe751603c01fe971234ec87af2

