
RAG Evaluator

Overview

Eval RAG is a comprehensive evaluation toolkit for assessing Retrieval-Augmented Generation (RAG) outputs using linguistic, semantic, and fairness metrics.

Installation

You can install the library using pip:

pip install eval-rag

Usage

Here's how to use the Eval RAG library:

from eval_rag import EvalRAG

# Initialize the evaluator
evaluator = EvalRAG()

# Input data
question = "What are the causes of climate change?"
response = "Climate change is caused by human activities."
reference = "Human activities such as burning fossil fuels cause climate change."

# Evaluate the response
metrics = evaluator.evaluate_all(question, response, reference)

# Print the results
print(metrics)
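
evaluate_all returns all computed scores in one object. As a quick sketch (assuming the result is a plain dict mapping metric names to scores; the exact return shape may differ between versions of eval-rag), you can format a readable report like this:

# Sketch only: assumes `metrics` is a dict of metric names to scores;
# adjust if your version of eval-rag returns a different structure.
for name, value in metrics.items():
    print(f"{name:>25}: {value}")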

Metrics

The RAG Evaluator provides the following metrics (a rough sketch of how a few of them are computed follows the list):

  1. BLEU (0-100): Measures the overlap between the generated output and reference text based on n-grams.

    • 0-20: Low similarity, 20-40: Medium-low, 40-60: Medium, 60-80: High, 80-100: Very high
  2. ROUGE-1 (0-1): Measures the overlap of unigrams between the generated output and reference text.

    • 0.0-0.2: Poor overlap, 0.2-0.4: Fair, 0.4-0.6: Good, 0.6-0.8: Very good, 0.8-1.0: Excellent
  3. BERT Score (0-1): Evaluates the semantic similarity using BERT embeddings (Precision, Recall, F1).

    • 0.0-0.5: Low similarity, 0.5-0.7: Moderate, 0.7-0.8: Good, 0.8-0.9: High, 0.9-1.0: Very high
  4. Perplexity (1 to ∞, lower is better): Measures how well a language model predicts the text.

    • 1-10: Excellent, 10-50: Good, 50-100: Moderate, 100+: High (potentially nonsensical)
  5. Diversity (0-1): Measures the uniqueness of bigrams in the generated output.

    • 0.0-0.2: Very low, 0.2-0.4: Low, 0.4-0.6: Moderate, 0.6-0.8: High, 0.8-1.0: Very high
  6. Racial Bias (0-1): Detects the presence of biased language in the generated output.

    • 0.0-0.2: Low probability, 0.2-0.4: Moderate, 0.4-0.6: High, 0.6-0.8: Very high, 0.8-1.0: Extreme
  7. MAUVE (0-1): Captures contextual meaning, coherence, and fluency, measuring both semantic similarity and stylistic alignment.

    • 0.0-0.2 (Poor), 0.2-0.4 (Fair), 0.4-0.6 (Good), 0.6-0.8 (Very good), 0.8-1.0 (Excellent).
  8. METEOR (0-1): Calculates semantic similarity considering synonyms and paraphrases.

    • 0.0-0.2: Poor, 0.2-0.4: Fair, 0.4-0.6: Good, 0.6-0.8: Very good, 0.8-1.0: Excellent
  9. CHRF (0-1): Computes a character n-gram F-score for fine-grained text similarity.

    • 0.0-0.2: Low, 0.2-0.4: Moderate, 0.4-0.6: Good, 0.6-0.8: High, 0.8-1.0: Very high
  10. Flesch Reading Ease (0-100): Assesses text readability.

    • 0-30: Very difficult, 30-50: Difficult, 50-60: Fairly difficult, 60-70: Standard, 70-80: Fairly easy, 80-90: Easy, 90-100: Very easy
  11. Flesch-Kincaid Grade (0-18+): Indicates the U.S. school grade level needed to understand the text.

    • 1-6: Elementary, 7-8: Middle school, 9-12: High school, 13+: College level
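
These thresholds are rules of thumb. As a rough illustration of what the underlying computations look like (using the common nltk and rouge-score packages, not the toolkit's internal code), here is how BLEU, ROUGE-1, and bigram diversity can be computed for the usage example above:

# Illustrative sketch with open-source libraries
# (pip install nltk rouge-score); not eval-rag's internal implementation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

response = "Climate change is caused by human activities."
reference = "Human activities such as burning fossil fuels cause climate change."

# BLEU (0-100): smoothed n-gram overlap, scaled to the 0-100 range
bleu = sentence_bleu(
    [reference.split()],
    response.split(),
    smoothing_function=SmoothingFunction().method1,
) * 100

# ROUGE-1 (0-1): unigram-overlap F1 against the reference
rouge1 = rouge_scorer.RougeScorer(["rouge1"]).score(reference, response)
rouge1_f1 = rouge1["rouge1"].fmeasure

# Diversity (0-1): unique bigrams divided by total bigrams in the response
tokens = response.lower().split()
bigrams = list(zip(tokens, tokens[1:]))
diversity = len(set(bigrams)) / len(bigrams) if bigrams else 0.0

print(f"BLEU: {bleu:.1f}  ROUGE-1: {rouge1_f1:.2f}  Diversity: {diversity:.2f}")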

Testing

To run the tests, use the following command:

python -m unittest discover -s eval_rag -p "test_*.py"
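
The discovery pattern above picks up any file matching test_*.py inside the eval_rag package. A minimal test might look like this (an illustrative sketch; the file name and assertion are assumptions, only EvalRAG and evaluate_all come from the usage example above):

# eval_rag/test_basic.py -- hypothetical example test
import unittest

from eval_rag import EvalRAG

class TestEvalRAG(unittest.TestCase):
    def test_evaluate_all_returns_scores(self):
        evaluator = EvalRAG()
        metrics = evaluator.evaluate_all(
            "What are the causes of climate change?",
            "Climate change is caused by human activities.",
            "Human activities such as burning fossil fuels cause climate change.",
        )
        # Expect a non-empty collection of metric scores
        self.assertTrue(metrics)

if __name__ == "__main__":
    unittest.main()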

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contributing

Contributions are welcome! If you have any improvements, suggestions, or bug fixes, feel free to create a pull request (PR) or open an issue on GitHub. Please ensure your contributions adhere to the project's coding standards and include appropriate tests.

How to Contribute

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Make your changes.
  4. Run tests to ensure everything is working.
  5. Commit your changes and push to your fork.
  6. Open a pull request with a detailed description of your changes.

Contact

If you have any questions or need further assistance, feel free to reach out via email.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

eval_rag-0.0.1.tar.gz (5.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

eval_rag-0.0.1-py3-none-any.whl (5.8 kB)

Uploaded Python 3

File details

Details for the file eval_rag-0.0.1.tar.gz.

File metadata

  • Download URL: eval_rag-0.0.1.tar.gz
  • Upload date:
  • Size: 5.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.4

File hashes

Hashes for eval_rag-0.0.1.tar.gz
Algorithm Hash digest
SHA256 b3acd2ce21127c24f1ea617a6e6801d5f4936ddc0e0bebf65782584be68327b1
MD5 035c50610baa662846f8c5c8970e2614
BLAKE2b-256 2472e458b0bfbdefa96cfcecbbbdaecf74f9604cb244ecb2a965a70c570dac5d

See more details on using hashes here.
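
To check a downloaded copy of the sdist against the SHA256 digest above, you can hash it locally, for example with Python's standard hashlib (this assumes eval_rag-0.0.1.tar.gz sits in the current directory):

# Verify a downloaded file against the published SHA256 digest.
import hashlib

expected = "b3acd2ce21127c24f1ea617a6e6801d5f4936ddc0e0bebf65782584be68327b1"
with open("eval_rag-0.0.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH")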

File details

Details for the file eval_rag-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: eval_rag-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 5.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.4

File hashes

Hashes for eval_rag-0.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 00a3172a824cb9609f89911a3d0406ab075023b500557a903accc30ebd26ba6d
MD5 0305e0deeefc1aa54d42ba0b442fbf94
BLAKE2b-256 c9dc1bc7bce3679faf624ea707840c21251c3b16f7d3d8e22a13b8fdeffbc72b

See more details on using hashes here.
