
chATLAS Benchmark


A Python package for LLM and RAG benchmarking and testing, built for the ATLAS experiment as part of the chATLAS project.

📚 Overview

chATLAS_Benchmark provides a flexible framework for testing and benchmarking LLMs and RAG models with robust storage and retrieval of past runs for comparison using SQL.

🌟 Key Features

  • Set of benchmarking tests for RAG comparison
    • Semantic Similarity Score
    • F1 Testing
    • ROUGE-1, ROUGE-2 and ROUGE-L testing
    • Document Match scores
  • SQL storage of results for robust database operations

🚀 Quick Start

📥 Installation

pip install chATLAS-Benchmark

For the Semantic Similarity test, download the NLTK tokenizer data:

import nltk
nltk.download('punkt_tab')

For new QA pair generation, set your OpenAI API key:

export OPENAI_API_KEY="<Your OpenAI Api Key>"

💡 Basic Usage

# Import the benchmarking module
from chATLAS_Benchmark import BenchmarkTest, fetch_metrics_results

# Initialize the test set
test = BenchmarkTest("/path/to/test.json")

# --- Run the RAG on the questions ---
# Assuming RAG.run() returns an answer and list of docs for each question
gen_answers = []
gen_docs = []
for q in test.questions:
    answer, docs = RAG.run(q)
    gen_answers.append(answer)
    gen_docs.append(docs)

# Set generated answers and documents on the test instance
test.set_generated_data(gen_answers, gen_docs)

# Run the scoring with any metrics you want
scores = test.score_test_set("LexicalMetrics", "SemanticSimilarity", "DocumentMatch")

# Save the results to the db
test.store_results(scores, db_name="database.db", name="NameOfRAG")


# See all previously scored results in the db
df = fetch_metrics_results()
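The snippet above assumes you already have a RAG object with a run() method. For a quick end-to-end dry run of the benchmark itself you can stub one out; the class below is purely a hypothetical stand-in, not part of the package:

class DummyRAG:
    """Stand-in for a real RAG pipeline, used only to exercise the benchmark."""

    def run(self, question: str):
        # Return a canned answer and an empty list of retrieved documents
        return "I don't know.", []

RAG = DummyRAG()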

Contents

  1. Installation
  2. Requirements
  3. Extending chATLAS_Benchmark
  4. Current Metrics Overview
  5. Project Structure and Imports

Installation

To install the package:

pip install chATLAS_Benchmark

Requirements

Python Dependencies

Dependencies should be installed automatically when installing the package, but a full list of requirements is given in requirements.txt in the module root for reference.

Document Format

For DocumentMatch scoring, documents are expected to be in the package's Document format:

from chATLAS_Benchmark import Document

However, any document object that supports

name_of_document = document.metadata["name"]

will work (so the LangChain document format is also compatible, as long as each document's name is stored in its metadata).
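As a minimal sketch of that requirement (the class below is illustrative, not the package's Document class), any object carrying its name in a metadata dict will do:

class MyDocument:
    """Duck-typed document: DocumentMatch only needs metadata["name"]."""

    def __init__(self, page_content: str, name: str):
        self.page_content = page_content   # the document text (illustrative attribute)
        self.metadata = {"name": name}     # the only field DocumentMatch relies on

doc = MyDocument("The ATLAS inner detector ...", name="InnerDetector_TWiki")
print(doc.metadata["name"])  # -> "InnerDetector_TWiki"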


Extending chATLAS_Benchmark

Adding New Tests

Currently, new tests are added functionally; this could (and arguably should) be updated to use class inheritance.

Each testing method is implemented as a separate script in the /tests directory. To add a new test:

1. Write a new test metric scoring method that returns a pandas DataFrame

import pandas as pd


def myNewTestMetric(data: dict) -> pd.DataFrame:
    """
    :param data: (dict) -
    {
        "questions": List[str],            # Original questions
        "answers": List[str],              # Generated answers
        "documents": List[List[str]],      # Retrieved documents
        "test_answers": List[str],         # Expected answers
        "test_documents": List[List[str]]  # Documents that generated the expected answers
    }
    :return: (pd.DataFrame) - one row per question with this metric's scores
    """
    return pd.DataFrame()
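As a concrete, purely illustrative example, the metric below scores the word-count ratio between each generated answer and its expected answer; the column names are assumptions rather than a package convention. It can then be registered exactly as in step 2:

import pandas as pd


def answerLengthRatio(data: dict) -> pd.DataFrame:
    """Toy metric: ratio of generated-answer length to expected-answer length."""
    rows = []
    for question, generated, expected in zip(
        data["questions"], data["answers"], data["test_answers"]
    ):
        ratio = len(generated.split()) / max(len(expected.split()), 1)
        rows.append({"question": question, "length_ratio": ratio})
    return pd.DataFrame(rows)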

2. Add the test name and function to the BaseBenchmark.implemented_tests dict

from chATLAS_Benchmark import BenchmarkTest

myTest = BenchmarkTest("testSet.json")

myTest.implemented_tests["MyNewTest"] = myNewTestMetric

3. Run the testing

gen_answers = []
gen_docs = []
for q in myTest.questions:
    answer, docs = RAG.run(q)
    gen_answers.append(answer)
    gen_docs.append(docs)

# Set generated answers and documents on the test instance
myTest.set_generated_data(gen_answers, gen_docs)

# Run the scoring with any metrics you want
scores = myTest.score_test_set("MyNewTest")

4. Storing the new test in the DB

The package was not built in the most scalable way, so this currently cannot be done without editing the source of the chATLAS_Benchmark/test_utils/database_utils.py script directly.

You can follow the setup of the other tables in this script to add your own.


chATLAS_Benchmark Metrics Overview

The chATLAS_Benchmark package provides several evaluation metrics to assess the performance of Retrieval-Augmented Generation (RAG) systems. These metrics help analyze the quality of retrieved documents and generated answers against a ground truth. The defined metrics fall into three main categories:

1. DocumentMatch

  • Purpose: Evaluates whether the RAG system successfully retrieves the correct document that was originally used to generate the test question.
  • How it works: The metric checks if the correct document is present within the set of documents retrieved by the system.
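Conceptually, the check reduces to a name lookup along these lines (a sketch, not the package's actual implementation):

def document_hit(retrieved_docs, expected_names) -> bool:
    """True if any retrieved document's name matches one of the expected source documents."""
    retrieved_names = {doc.metadata["name"] for doc in retrieved_docs}
    return bool(retrieved_names & set(expected_names))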

2. LexicalMetrics

These metrics assess the generated answer's textual similarity to the reference answer using lexical comparison techniques.

  • Exact Match:

    • Compares the RAG/LLM-generated answer to the ground truth after stemming words to account for variations.
    • Returns True if the answers are identical post-stemming, otherwise False.
  • F1 Score:

    • Measures the overlap between the true and generated answers using precision and recall.
    • The F1 score is the harmonic mean of precision and recall, capturing both false positives and false negatives (a token-level sketch is given after this list).
  • ROUGE Scores: The package computes the following ROUGE scores to evaluate overlap between the generated and true answers:

    • ROUGE-1: Measures overlap of unigrams (single words).
    • ROUGE-2: Measures overlap of bigrams (two consecutive words).
    • ROUGE-L: Measures the longest common subsequence (LCS) between the true and generated answers.
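A token-level sketch of the exact-match and F1 computations (illustrative only; it omits the stemming that the package applies):

from collections import Counter


def lexical_scores(generated: str, reference: str):
    """Return (exact_match, f1) between a generated and a reference answer."""
    gen_tokens = generated.lower().split()
    ref_tokens = reference.lower().split()
    exact_match = gen_tokens == ref_tokens

    common = Counter(gen_tokens) & Counter(ref_tokens)
    num_common = sum(common.values())
    if num_common == 0:
        return exact_match, 0.0

    precision = num_common / len(gen_tokens)
    recall = num_common / len(ref_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return exact_match, f1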

3. SemanticSimilarity

  • Purpose: Evaluates the semantic similarity between the generated and true answers using embedding-based methods.
  • How it works:
    • Both the generated and ground-truth answers are converted into vector embeddings using a pre-trained model.
    • The cosine similarity is computed between these embeddings, providing a score between -1 (completely dissimilar) and 1 (identical), with higher values indicating closer semantic meaning.
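The cosine similarity itself is just a normalised dot product between the two embedding vectors; as a sketch (the embedding model the package uses is not shown here):

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))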

These metrics provide a comprehensive evaluation framework to analyze both lexical accuracy and semantic understanding of RAG-generated responses, ensuring robust performance assessment.

Project Structure and Imports

Project Structure:

chATLAS_Benchmark/
│   _version.py
│   __init__.py
│
├───Generation
│   │   gen_qa_pairs.py
│   │   gen_simple_qa_pairs.py
│   │   __init__.py
├───tests
│   │   DocumentMatch.py
│   │   LexicalMetrics.py
│   │   README.md
│   │   semanticSimilarity.py
│   │   test_benchmark_metrics.py
│   │   __init__.py
├───test_utils
│   │   database_utils.py
│   │   __init__.py

Module Imports:

# Standard Imports
from chATLAS_Benchmark import (
    BenchmarkTest,
    Document,
    fetch_metrics_results
)

# Sub Imports
from chATLAS_Benchmark.Generation import (
    generate_qa,
    generate_qa_from_named_files
)

🔧 Development Status

Current development priorities:

  • More modular design
  • Integration of answer correctness test
  • Additional testing

CHANGELOG

0.0.2

Fixed file encoding when loading JSON

Fixed the docstring describing what type of file BenchmarkTest accepts

0.0.1

Initial Release


📄 License

chATLAS_Benchmark is released under the Apache 2.0 license.


Made with ❤️ by the ATLAS Collaboration

For questions and support, please contact

