
Project description

RURAGE - Robust Universal RAG Evaluation

RURAGE (Robust Universal RAG Evaluation) is a Python library developed to speed up the evaluation of RAG systems along the Correctness, Faithfulness, and Relevance axes using a variety of deterministic and model-based metrics.

Keypoints:

  • Many individually weak metrics can be combined into a stronger ensemble
  • The ensemble is trained on data of the relevant nature, taking care to avoid leakage from the validation set
  • A golden set with reference answers must be prepared; it is required by both a Judge LLM and RURAGE
  • This roughly doubles the resulting usefulness of the deterministic metrics

Metrics from both approaches (Mistral 7B, top-10 snippets) evaluated on the golden set and compared at their best classification thresholds against human usefulness labels. Each metric has its own marker, with classes grouped by color; the strongest metrics sit in the top-right corner of the Recall/Precision axes.

Metrics of both approaches evaluated on the golden set and compared using Pearson's correlation with human usefulness labels. Top-5 and top-10 indicate the number of search-engine snippets passed to the model as context; variations with different refusal rates ("No info") from Mistral 7B are included.
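
The comparison above uses Pearson's correlation between metric scores and human usefulness labels. As a reference (illustrative, not part of RURAGE's API), the coefficient can be computed with the standard library alone:

```python
import statistics

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length,
    non-constant sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Metric scores that track human labels perfectly give r = 1.0
pearson([1, 2, 3], [2, 4, 6])
```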

Unfortunately, automatic ensemble creation hasn't been added yet, but you can experiment on your own with different gradient-boosted decision tree models.
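
As a toy illustration of the idea (not RURAGE's implementation), several weak metric scores can be combined by thresholding each one and taking a majority vote; in practice, the thresholds would be tuned on held-out data and the vote replaced by a boosted decision-tree model:

```python
# Hypothetical per-metric thresholds, made up for illustration only.
# In practice they are tuned on a held-out set, avoiding validation leaks.
THRESHOLDS = {"rouge": 0.4, "bleu": 0.25, "cosine": 0.7}

def ensemble_label(scores: dict[str, float]) -> int:
    """Return 1 ("useful") if a majority of metrics clear their threshold."""
    votes = sum(scores[name] >= t for name, t in THRESHOLDS.items())
    return int(votes * 2 > len(THRESHOLDS))

# Two of three metrics vote "useful", so the ensemble label is 1
ensemble_label({"rouge": 0.5, "bleu": 0.1, "cosine": 0.8})
```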

Features

  • Deterministic Metrics:

    • ROUGE
    • BLEU
    • Bigram overlap Precision
    • Bigram overlap Recall
    • Bigram overlap F1
    • Unigram overlap Precision
    • Unigram overlap Recall
    • Unigram overlap F1
  • Model-based Metrics:

    • NLI Scores using Transformer models
    • Cosine Similarity using Transformer models
    • Uncertainty (soon)
  • Ensemble Creation:

    • Combine scores from multiple metrics to create a robust evaluation ensemble.
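
As a rough sketch of how the deterministic overlap metrics above work (not RURAGE's own implementation), unigram overlap precision, recall, and F1 can be computed like this:

```python
from collections import Counter

def unigram_overlap(candidate: str, reference: str) -> tuple[float, float, float]:
    """Token-level overlap precision, recall, and F1 between two strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Multiset intersection clips repeated tokens, as in BLEU
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# All candidate tokens appear in the reference, but one reference
# token is missing: precision 1.0, recall 0.75
unigram_overlap("the cat sat", "the cat sat down")
```

The bigram variants follow the same pattern over adjacent token pairs.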

Installation

You can install RURAGE from PyPI:

pip install rurage

Basic Usage

See the detailed usage example here. If you are looking for RURAGE ensemble training/inference, click here.

Example of how to use RURAGE evaluation:

import pandas as pd

from rurage import RAGEModelConfig, RAGESetConfig, RAGEvaluator

# For each model that needs to be evaluated, you need to initialize a config containing:
# * the name of the column with the context on which the answer was generated
# * the name of the column with the generated model answer
models_cfg = []
models_cfg.append(
    RAGEModelConfig(context_col="example_context_top5", answer_col="model_1_answer")
)
models_cfg.append(
    RAGEModelConfig(context_col="example_context_top5", answer_col="model_2_answer")
)

# Initialize the configuration of the evaluation set:
# * Validation set pd.DataFrame
# * Name of the question column
# * Name of the golden answer column
# * List of model configs
validation_set = pd.read_csv("example_set.csv")
validation_set_cfg = RAGESetConfig(
    golden_set=validation_set,
    question_col="question",
    golden_answer_col="golden_answer",
    models_cfg=models_cfg,
)

# Initialize the evaluator
rager = RAGEvaluator(golden_set_cfg=validation_set_cfg)

# Run a comprehensive evaluation (Correctness, Faithfulness, Relevance) for each model
correctness_report, faithfulness_report, relevance_report = (
    rager.comprehensive_evaluation()
)

# Or you can run a separate evaluation
correctness_report = rager.evaluate_correctness()
faithfulness_report = rager.evaluate_faithfulness()
relevance_report = rager.evaluate_relevance()

# For each evaluation method, it is possible to print a report, as well as receive a pointwise report:
# print_report : bool, optional
# Whether to print the output to the console. Defaults to False.

# pointwise_report : bool, optional
# Whether to return pointwise report. Defaults to False.

To-Do List

By the End of Q3

  • Automatic Ensemble Creation: Implement functionality for automatic creation of evaluation ensembles.
  • Auto-adaptive thresholds: Implement automatic threshold selection for ensemble features.
  • Multiclass Labels: Extend support to work with multiclass usefulness labels.

By the End of the Year

  • Uncertainty scores: Add uncertainty scores to the ensemble.
  • Judge LLM: Introduce our proprietary Judge LLM model for enhanced evaluation.

Contributing

We welcome contributions from the community. Please read our contributing guidelines and code of conduct.

License

RURAGE is licensed under the MIT License. See the LICENSE file for more information.

Contact

For any questions, issues, or suggestions, please open an issue on our GitHub repository.

Acknowledgments

RURAGE was presented at PyCon 2024 by the MTS AI Search Group.

Developed by MTS AI Search Group (Krayko Nikita, Laputin Fedor, Sidorov Ivan)
