Trust Eval
Welcome to Trust Eval! 🌟
A comprehensive tool for evaluating the trustworthiness of inline-cited outputs generated by large language models (LLMs) within the Retrieval-Augmented Generation (RAG) framework. Our suite of metrics measures correctness, citation quality, and groundedness.
This is the official implementation of the metrics introduced in the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse" (accepted at ICLR '25).
Installation 🛠️
Prerequisites
- OS: Linux
- Python: Versions 3.10 – 3.12 (preferably 3.10.13)
- GPU: Compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100)
Steps
1. Set up a Python environment

conda create -n trust_eval python=3.10.13
conda activate trust_eval
2. Install dependencies
pip install trust_eval
Note: vLLM will be installed with CUDA 12.1. Please ensure your CUDA setup is compatible (see the sanity-check sketch after these steps).
3. Set up NLTK

import nltk
nltk.download('punkt_tab')
4. Download the benchmark datasets

Download the evaluation dataset from Huggingface and place the folder at the same level as the prompt folder (see the demo for an example). A download sketch follows these steps.
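Since vLLM pins a specific CUDA build (step 2), a quick sanity check of your local GPU setup can save debugging time. This is a minimal sketch using PyTorch, which is installed alongside vLLM:

import torch

print(torch.version.cuda)                  # CUDA build PyTorch was compiled with
print(torch.cuda.is_available())           # True if a usable GPU is visible
print(torch.cuda.get_device_capability())  # should be compute capability (7, 0) or higher

For step 4, the download can also be scripted. The sketch below uses huggingface_hub, which is typically already installed alongside vLLM; the repository id and local folder name are placeholders, not the actual dataset coordinates, so substitute the values from the Trust Eval documentation:

from huggingface_hub import snapshot_download

# PLACEHOLDER repo id and folder name: replace with the dataset repository
# and layout described in the Trust Eval docs / demo.
snapshot_download(
    repo_id="org/trust-eval-benchmark",
    repo_type="dataset",
    local_dir="eval_data",
)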
Quickstart 🔥
Evaluate your RAG setup in just eight lines of code.
Generating Responses
from config import EvaluationConfig, ResponseGeneratorConfig
from evaluator import Evaluator
from logging_config import logger
from response_generator import ResponseGenerator
# Configure the response generator
generator_config = ResponseGeneratorConfig.from_yaml(yaml_path="generator_config.yaml")
# Generate and save responses
generator = ResponseGenerator(generator_config)
generator.generate_responses()
generator.save_responses()
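Before moving on to evaluation, you can sanity-check the generated output. This is a minimal sketch that assumes save_responses() writes JSON; the file name is a placeholder for whatever output path your generator_config.yaml specifies.

import json

# PLACEHOLDER path: use the output path configured in generator_config.yaml.
with open("responses.json") as f:
    responses = json.load(f)

print(f"Generated {len(responses)} responses")
print(responses[0])  # inspect the first record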
Evaluating Responses
# Configure the evaluator
evaluation_config = EvaluationConfig.from_yaml(yaml_path="eval_config.yaml")
# Compute and save evaluation metrics
evaluator = Evaluator(evaluation_config)
evaluator.compute_metrics()
evaluator.save_results()
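The saved metrics can then be loaded for reporting. Again, a minimal sketch assuming JSON output; the file name is a placeholder for the results path set in eval_config.yaml, and the exact metric keys should be taken from the file itself rather than from this example.

import json

# PLACEHOLDER path: use the results path configured in eval_config.yaml.
with open("results.json") as f:
    results = json.load(f)

# Print whatever metrics were computed (correctness, citation quality,
# groundedness, etc.) without assuming exact key names.
for name, value in results.items():
    print(f"{name}: {value}")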
Please refer to the quickstart for the complete guide.
Contact 📬
For questions or feedback, reach out to Shang Hong (simshanghong@gmail.com).
Citation 📝
If you use this software in your research, please cite the Trust-Eval paper as follows:
@misc{song2024measuringenhancingtrustworthinessllms,
  title={Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse},
  author={Maojia Song and Shang Hong Sim and Rishabh Bhardwaj and Hai Leong Chieu and Navonil Majumder and Soujanya Poria},
  year={2024},
  eprint={2409.11242},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.11242},
}
File details
Details for the file trust_eval-0.1.1.tar.gz.
File metadata
- Download URL: trust_eval-0.1.1.tar.gz
- Upload date:
- Size: 20.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.5 CPython/3.12.2 Linux/6.2.0-32-generic
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9ac6073b364013ae218c90f918c0fb2b8179ec8ab77fbc2e52c5e45b1636c7fa |
| MD5 | 9b619baadb772d590ce3e9b8cc4ce966 |
| BLAKE2b-256 | 5592733742819e0df0f192534e0587b0a9fd81e8c737905fc23b9bb1d24f39b3 |
File details
Details for the file trust_eval-0.1.1-py3-none-any.whl.
File metadata
- Download URL: trust_eval-0.1.1-py3-none-any.whl
- Upload date:
- Size: 22.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.5 CPython/3.12.2 Linux/6.2.0-32-generic
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e56c59cd3c2c8e62a1df2fc9f6598bc65b18382c7bf3549b3181de505662c3ae |
| MD5 | bdaf14981175b71929d89946fad0609f |
| BLAKE2b-256 | 4f2c0d1392a7b23c2ebc82e02249318f75c071b72a7e378338e16967823e6626 |