A Python package for LLM and RAG testing in high-energy physics applications (originally for ATLAS in the chATLAS project).
chATLAS Benchmark
A Python package for LLM and RAG benchmarking and testing, built for the ATLAS experiment as part of the chATLAS project.
📚 Overview
chATLAS_Benchmark provides a flexible framework for testing and benchmarking LLMs and RAG models, with robust storage and retrieval of past runs for comparison using SQL.
🌟 Key Features
- Set of benchmarking tests for RAG comparison:
  - Semantic Similarity score
  - F1 testing
  - ROUGE-1, ROUGE-2, and ROUGE-L testing
  - Document Match scores
- SQL storage of results for robust database operations
🚀 Quick Start
📥 Installation
pip install chATLAS-Benchmark
For the Semantic Similarity test, first download the required NLTK tokenizer data:
import nltk
nltk.download('punkt_tab')
For new QA pair generation, set your OpenAI API key:
export OPENAI_API_KEY="<Your OpenAI Api Key>"
💡 Basic Usage
# Import the benchmarking module
from chATLAS_Benchmark import BenchmarkTest, fetch_metrics_results

# Initialize the test set
test = BenchmarkTest("/path/to/test.json")

# --- Run the RAG on the questions ---
# Assuming RAG.run() returns an answer and a list of docs for each question
gen_answers = []
gen_docs = []
for q in test.questions:
    answer, docs = RAG.run(q)
    gen_answers.append(answer)
    gen_docs.append(docs)

# Set generated answers and documents on the test instance
test.set_generated_data(gen_answers, gen_docs)

# Run the scoring with any metrics you want
scores = test.score_test_set("LexicalMetrics", "SemanticSimilarity", "DocumentMatch")

# Save the results to the db
test.store_results(scores, db_name="database.db", name="NameOfRAG")

# View all previously scored results in the db
df = fetch_metrics_results()
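The exact schema of the test-set JSON is not documented here. Judging by the fields that metric functions receive (see Extending chATLAS_Benchmark below), a test set plausibly pairs each question with an expected answer and the documents it was generated from. The following is a purely hypothetical sketch; the field names are an assumption, not the package's confirmed schema:
import json

# Hypothetical test-set layout; the real schema may differ.
test_set = {
    "questions": ["Which ATLAS subsystem measures muon momenta?"],
    "test_answers": ["The muon spectrometer."],
    "test_documents": [["MuonSpectrometerTWikiPage"]],
}

with open("test.json", "w") as f:
    json.dump(test_set, f, indent=2)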
Contents
- Installation
- Requirements
- Extending chATLAS_Benchmark
- chATLAS_Benchmark Metrics Overview
- Project Structure and Imports
Installation
To install the package:
pip install chATLAS_Benchmark
Requirements
Python Dependencies
Dependencies should be installed automatically when installing the package, but a full list of requirements is given in requirements.txt in the module root for reference.
Document Format
For DocumentMatch scoring, documents are expected to be in the Document format:
from chATLAS_Benchmark import Document
However, any document that exposes its name via:
name_of_document = document.metadata["name"]
would work (so the LangChain document format is also compatible, as long as the documents have their name in metadata).
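For illustration, a minimal duck-typed stand-in that satisfies this requirement might look like the following sketch (MyDocument is hypothetical, not the package's Document class):
from dataclasses import dataclass, field

@dataclass
class MyDocument:
    # Any object whose metadata dict carries the document name
    # can be scored by DocumentMatch.
    page_content: str = ""
    metadata: dict = field(default_factory=dict)

doc = MyDocument(page_content="Detector description ...",
                 metadata={"name": "SomeTwikiPage"})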
Extending chATLAS_Benchmark
Adding New Tests
Currently, new tests can be added functionally, but this could (and arguably should) be updated to use class inheritance.
Each testing method is implemented as a separate script in the /tests directory. To add a new test:
1. Write a new test metric scoring method that returns a pandas DataFrame
import pandas as pd

def myNewTestMetric(data: dict):
    """
    :param data: (dict) -
        {
            "questions": List[str],            # Original questions
            "answers": List[str],              # Generated answers
            "documents": List[List[str]],      # Retrieved documents
            "test_answers": List[str],         # Expected answers
            "test_documents": List[List[str]]  # Documents that generated the expected answers
        }
    """
    return pd.DataFrame()
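As a concrete (hypothetical) example, a metric that reports the word-count ratio between generated and expected answers could be written like this; the function name, column name, and scoring logic are illustrative only:
import pandas as pd

def answerLengthMetric(data: dict):
    # Hypothetical metric: word-count ratio of generated vs expected answer.
    rows = []
    for gen, ref in zip(data["answers"], data["test_answers"]):
        rows.append({"length_ratio": len(gen.split()) / max(len(ref.split()), 1)})
    return pd.DataFrame(rows)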
2. Add the test name and function to the BaseBenchmark.implemented_tests dict
from chATLAS_Benchmark import BenchmarkTest
myTest = BenchmarkTest("testSet.json")
myTest.implemented_tests["MyNewTest"] = myNewTestMetric
3. Run the testing
gen_answers = []
gen_docs = []
for q in myTest.questions:
    answer, docs = RAG.run(q)
    gen_answers.append(answer)
    gen_docs.append(docs)

# Set generated answers and documents on the test instance
myTest.set_generated_data(gen_answers, gen_docs)

# Run the scoring with any metrics you want
scores = myTest.score_test_set("MyNewTest")
4. Store the new test in the DB
The package was not built in the most scalable way, so this cannot be done without explicitly editing the source code of the chATLAS_Benchmark/test_utils/database_utils.py script. You can follow the setup for the other tables in this script to add your own, along the lines of the generic sketch below.
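As a rough sketch of what such a table setup can look like (generic sqlite3 code; the table name, columns, and helper are illustrative, not the package's actual schema):
import sqlite3

def create_my_new_test_table(db_name: str = "database.db"):
    # Hypothetical table for MyNewTest results; columns are illustrative.
    conn = sqlite3.connect(db_name)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS my_new_test_results (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            rag_name TEXT,
            question TEXT,
            score REAL
        )
        """
    )
    conn.commit()
    conn.close()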
chATLAS_Benchmark Metrics Overview
The chATLAS_Benchmark package provides several evaluation metrics to assess the performance of Retrieval-Augmented Generation (RAG) systems. These metrics help analyze the quality of retrieved documents and generated answers against a ground truth. The defined metrics fall into three main categories:
1. DocumentMatch
- Purpose: Evaluates whether the RAG system successfully retrieves the correct document that was originally used to generate the test question.
- How it works: The metric checks if the correct document is present within the set of documents retrieved by the system.
2. LexicalMetrics
These metrics assess the generated answer's textual similarity to the reference answer using lexical comparison techniques.
- Exact Match:
  - Compares the RAG/LLM-generated answer to the ground truth after stemming words to account for variations.
  - Returns True if the answers are identical post-stemming, otherwise False.
- F1 Score:
  - Measures the overlap between the true and generated answers using precision and recall.
  - The F1 score is the harmonic mean of precision and recall, capturing both false positives and false negatives.
- ROUGE Scores: The package computes the following ROUGE scores to evaluate overlap between the generated and true answers:
  - ROUGE-1: Measures overlap of unigrams (single words).
  - ROUGE-2: Measures overlap of bigrams (two consecutive words).
  - ROUGE-L: Measures the longest common subsequence (LCS) between the true and generated answers.
3. SemanticSimilarity
- Purpose: Evaluates the semantic similarity between the generated and true answers using embedding-based methods.
- How it works:
- Both the generated and ground-truth answers are converted into vector embeddings using a pre-trained model.
- The cosine similarity is computed between these embeddings, providing a score between -1 (completely dissimilar) and 1 (identical), with higher values indicating closer semantic meaning.
These metrics provide a comprehensive evaluation framework to analyze both lexical accuracy and semantic understanding of RAG-generated responses, ensuring robust performance assessment.
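For intuition, here is a minimal sketch of how metrics in each of the three categories above can be computed. This is not the package's implementation: the function names are illustrative, and the choice of embedding model (sentence-transformers' all-MiniLM-L6-v2) is an assumption.
from collections import Counter
from sentence_transformers import SentenceTransformer, util

def document_match(retrieved_names, true_name):
    # DocumentMatch-style check: is the source document among those retrieved?
    return true_name in retrieved_names

def token_f1(true_answer, generated_answer):
    # F1-style score over the token multisets of the two answers.
    true_tokens = true_answer.lower().split()
    gen_tokens = generated_answer.lower().split()
    overlap = sum((Counter(true_tokens) & Counter(gen_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen_tokens)
    recall = overlap / len(true_tokens)
    return 2 * precision * recall / (precision + recall)

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def semantic_similarity(true_answer, generated_answer):
    # Cosine similarity between the embeddings of the two answers.
    a, b = model.encode([true_answer, generated_answer])
    return util.cos_sim(a, b).item()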
Project Structure and Imports
Project Structure:
chATLAS_Benchmark/
│ _version.py
│ __init__.py
│
├───Generation
│ │ gen_qa_pairs.py
│ │ gen_simple_qa_pairs.py
│ │ __init__.py
├───tests
│ │ DocumentMatch.py
│ │ LexicalMetrics.py
│ │ README.md
│ │ semanticSimilarity.py
│ │ test_benchmark_metrics.py
│ │ __init__.py
├───test_utils
│ │ database_utils.py
│ │ __init__.py
Module Imports:
# Standard Imports
from chATLAS_Benchmark import (
BenchmarkTest,
Document,
fetch_metrics_results
)
# Sub Imports
from chATLAS_Benchmark.Generation import (
generate_qa,
generate_qa_from_named_files
)
🔧 Development Status
Current development priorities:
- More modular design
- Integration of answer correctness test
- Additional testing
📄 License
chATLAS_Benchmark is released under the Apache v2.0 license.
Made with ❤️ by the ATLAS Collaboration
For questions and support, please contact
Download files
File details
Details for the file chatlas_benchmark-0.0.1.tar.gz.
File metadata
- Download URL: chatlas_benchmark-0.0.1.tar.gz
- Upload date:
- Size: 28.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 97734e5e51d1b09ee02f1081dbc939e6e7adecd44872e50b18a7bc77ba838aa4 |
| MD5 | 902bf1fdbcc2d9aeddb38ec867c25b46 |
| BLAKE2b-256 | 66ef98535ccee59e80dc2a057ab9f9ed66a0728839eb48f9b280c6c70ae2acc5 |
File details
Details for the file chATLAS_Benchmark-0.0.1-py3-none-any.whl.
File metadata
- Download URL: chATLAS_Benchmark-0.0.1-py3-none-any.whl
- Upload date:
- Size: 29.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 703477bafc7f95167f2ec86c7af4c83c003eba3d736913c6493c9f9df0136f29 |
| MD5 | 2a179fe379d83caf8c2d96a347b75a47 |
| BLAKE2b-256 | a625db06cb108b4912e328a5a0937eba418c01283bd07fe29c8e2d20f76e1b7b |